Edge in the News

Ana Gerschenfeld, PUBLICO [1.6.10]

IS THE INTERNET CHANGING THE WAY WE THINK?

Do you think the Internet has altered your mind at the neuronal, cognitive, processing, and emotional levels? Yes, no, maybe, reply philosophers, scientists, writers, and journalists to the Edge annual question 2010, in dozens of texts published online today.

Click here for PDF of Portuguese Original

In the summer of 2008, American writer Nicholas Carr published in the Atlantic Monthly an article under the title "Is Google making us stupid? What the Internet is doing to our brains", in which he sharply criticized the Internet's effects on our intellectual capabilities. The article made a strong impact, both in the media and in the blogosphere.

Edge.org – the intellectual online salon – has now expanded and deepened the debate through its traditional annual challenge to dozens of the world's leading thinkers in science, technology, the arts, and journalism. The 2010 question is: "How is the Internet changing the way you think?"

They reply that the Internet has made them (us) smarter, shallower, faster, less attentive, more accelerated, less creative, more tactile, less visual, more altruistic, less arrogant. That it has dramatically expanded our memory but at the same time made us the hostages of the present tense. The global web is compared to an ecosystem, a collective brain, a universal memory, a global conscience, a total map of geography and history.

One thing is certain: be they fans or critics, they all use it, and they all admit that the Internet leaves no one untouched. No one can remain impervious to things such as Wikipedia or Google; no one can resist the attraction of instant, global communication and knowledge.

More than 120 scientists, physicians, engineers, authors, artists, and journalists met the challenge. Here, we present the gist of some of their answers, including that of Nicholas Carr, who is also part of this online think tank founded by New York literary agent John Brockman. If you have more time and think your attention span is up to it, we recommend you take in their full length and diversity by visiting edge.org.

Who decides?

Daniel Hillis

Physicist, Computer Scientist

The real impact of the Internet is that it has changed the way we make decisions. More and more, it is not individual humans who decide, but an entangled, adaptive network of humans and machines. Although we created it, we did not exactly design it. It evolved. Our relationship to it is similar to our relationship to our biological ecosystem. We are co-dependent, and not entirely in control.

Speed of thinking

Andrian Kreye

Editor, Sueddeutsche Zeitung

If continually speeding up thinking constitutes changing the way I think, the Internet has done a marvelous job. All this might not constitute a change in thinking, though. I haven't changed my mind or my convictions because of the Internet. I haven't had any epiphanies while sitting in front of a screen. The Internet so far has not given me any memorable experiences, although it might have helped to usher some along. It has always been people, places and experiences that have changed the way I think.

Facsimile of experience

Eric Fischl and April Gornik

Visual Artists

For the visual artist, seeing is essential to thought. So how has the Internet changed us visually? The changes are subtle yet profound. One loss is a sense of scale. Another is a loss of differentiation between materials. Visual information becomes based on image alone. Experience is replaced with facsimile.

Work and play

Kevin Kelly

Editor-At-Large, Wired

I am "smarter" in factuality, but my knowledge is now more fragile. Anything I learn is subject to erosion. My certainty about anything has decreased. That means that in general I assume more and more that what I know is wrong. The Internet also blurs the difference between my serious thoughts and my playful thoughts. I believe the conflation of play and work, of thinking hard and thinking playfully, is one the greatest things the Internet has done.

Digital sugar 

Esther Dyson

Former Chairman, Electronic Frontier Foundation

I love the Internet. But sometimes I think much of what we get on the Internet is empty calories. It's sugar – short videos, pokes from friends, blog posts, Twitter posts, pop-ups and visualizations… Over a long period, many of us are genetically disposed to lose our capability to digest sugar if we consume too much of it. Could that be true of information sugar as well? Will we become allergic to it even as we crave it? And what will serve as information insulin?

Mind control

Larry Sanger

Co-founder of Wikipedia

Some observers speak of how our minds are being changed by information overload, apparently despite ourselves. Former free agents are mere subjects of powerful new forces. I don't share the assumption. Do we have any choice about ceding control of the self to an increasingly compelling "Hive Mind"? Yes. And should we cede such control, or instead strive, temperately, to develop our own minds very well and direct our own attention carefully? The answer, I think, is obvious.

Outsourcing the mind

Gerd Gigerenzer

Psychologist, Max Planck Institute

We are in the process of outsourcing information storage and retrieval from mind to computer, just as many of us have already outsourced the ability of doing mental arithmetic to the pocket calculator. We may lose some skills in this process, but the Internet is also teaching us new skills for accessing information. The Internet is a kind of collective memory, to which our minds will adapt until a new technology eventually replaces it. Then we will begin outsourcing other cognitive abilities, and hopefully, learn new ones.

Thinking better

Stephen Kosslyn

Psychologist, Harvard University 

The Internet has extended my memory, perception, and judgment. These effects have become even more striking since I've used a smart phone. I now regularly pull out my phone to check a fact, to watch a video, and to read blogs. The downside is that when I used to have dead periods, I often would let my thoughts drift, and sometimes would have an unexpected insight or idea. Those opportunities are now fewer and farther between. But I think it's a small price to pay. I am a better thinker now than I was before I integrated the Internet into my mental and emotional processing.

 

Dramatic changes

Kai Krause

Software Pioneer

The Internet dramatically changed my own thinking. Not at the neuron level, but more abstractly: it completely redefined how we perceive the world and ourselves in it. But it is a double-edged sword, a yin-yang yoyo of the good, the bad and the ugly. The Net will not reach its true potential in my little lifetime. But it surely has influenced the thinking in my lifetime like nothing else ever has.

Tactile cyberworld

James O'Donnell

Classicist, Georgetown University

My fingers have become part of my brain. Just for myself, just for now, it's my fingers I notice. Ask me a good question today, and if I am away from my desk, I pull out my Blackberry – it's a physical reaction, a gut feeling that I need to start manipulating the information at my fingertips. At my desktop, it's the same pattern: the sign of thinking is that I reach for the mouse and start "shaking it loose". My eyes and hands have already learned to work together in new ways with my brain in a process that really is a new way of thinking for me. The information world is more tactile than ever before.

 

Promiscuity

Seth Lloyd

Quantum Mechanical Engineer, MIT

I think less. When I do think, I am lazier. For hundreds of millions of years, sex was the most efficient method for propagating information of dubious provenance: the origins of all those snippets of junk DNA are lost in the sands of reproductive history. The world-wide Web has usurped that role. A single illegal download can propagate more parasitic bits of information than a host of mating tsetse flies. For the moment, however, the ability of the Internet to propagate information promiscuously is largely a blessing. What will happen later? Don't ask me. By then, I hope not to be thinking at all.

Same old brain

Nicholas Christakis

Physician and Social Scientist, Harvard University

The Internet is no different than previous (equally monumental) brain-enhancing technologies such as books or telephony, and I doubt whether books and telephony have changed the way I think, in the sense of actually changing the way my brain works. In fact, I would say that it is much more correct to say that our thinking gave rise to the Internet than that the Internet gave rise to our thinking. There is no new self. There are no new others. And so there is no new brain, and no new way of thinking. We are the same species after the Internet as before.

The map

Neri Oxman

Architect, Researcher, MIT

The Internet has become a map of the world, both literally and symbolically, tracing at an almost 1:1 ratio every event that has ever taken place. As we are fed information, the very power of perception withers, and the ability to engage in abstract and critical thought atrophies. Where are we heading in the age of the Internet? Are we being victimized by our own inventions?

Hunter-gatherers

Lee Smolin

Physicist, Perimeter Institute

The Internet hasn't, so far, changed how we think. But it has radically altered the contexts in which we think and work. We used to cultivate thought, now we have become hunter-gatherers of images and information. Perhaps when the Internet has been soldered into our glasses or teeth, with the screen replaced by a laser making images directly on our retinas, there will be deeper changes. 

The Matrix

John Markoff

Journalist, The New York Times

Not only have I been transformed into an Internet pessimist, but recently the Net has begun to feel downright spooky. Doesn't the Net seem to have a mind of its own? Will we all be assimilated, or have we been already? Wait! Stop me! That was The Matrix, wasn't it?

The upload has begun

Sam Harris

Neuroscientist, The Reason Project

It is now a staple of scientific fantasy, or nightmare, to envision that human minds will one day be uploaded onto a vast computer network like the Internet. I notice that the prophesied upload is slowly occurring in my own case. This migration to the Internet now includes my emotional life. Increasingly, I develop relationships with other scientists and writers that exist entirely online. Almost every sentence we have ever exchanged exists in my Sent Folder. Our entire relationship is, therefore, searchable. I have many other friends and mentors who exist for me in this way, primarily as email correspondents.

Parallel Lives 

Linda Stone

Former Executive at Apple and Microsoft

Before the Internet, I made more trips to the library and more phone calls. I read more books and my point of view was narrower and less informed. I walked more, biked more, hiked more, and played more. I made love more often. The more I've loved and known it, the clearer the contrast, the more intense the tension between a physical life and a virtual life. The sense of contrast between my online and offline lives has turned me back toward prizing the pleasures of the physical world. I now move with more resolve between each of these worlds, choosing one, then the other – surrendering neither.

The Dumb Butler 

Joshua Greene

Cognitive Neuroscientist and Philosopher, Harvard University

The Internet hasn't changed the way we think any more than the microwave oven has changed the way we digest food. The Internet has provided us with unprecedented access to information, but it hasn't changed what we do with it once it's made it into our heads. This is because the Internet doesn't (yet) know how to think. We still have to do it for ourselves, and we do it the old-fashioned way. Until it does, the Internet will continue to be nothing more, and nothing less, than a very useful, and very dumb, butler.

The end of experience

Scott Sampson 

Dinosaur paleontologist

What I want to know is how the Internet changes the way the children of the Internet age think. It seems likely that a lifetime of daily conditioning dictated by the rapid flow of information across glowing screens will generate substantial changes in brains, and thus thinking. But I have a larger fear, one rarely mentioned – the extinction of experience, the loss of intimate experience with the natural world. Any positive outcome will involve us turning off the screens and spending significant time outside interacting with the real world, in particular the nonhuman world.

Rewired

Haim Harari

Physicist, former President, Weizmann Institute of Science

There are three clear changes that are palpable. The first is the increasing brevity of messages. The second is the diminishing role of factual knowledge in the thinking process. The third is in the entire process of teaching and learning: it may take another decade or two, but education will never be the same. An interesting follow-up issue to this last point is whether the minds and brains of children will be physically "wired" differently from those of earlier generations. I tend to speculate in the affirmative.

The price of omniscience

Terrence Sejnowski

Computational Neuroscientist, Salk Institute

Experiences have long-term effects on the brain's structure and function. Are the changes occurring in your brain as you interact with the Internet good or bad for you? Gaining knowledge and skills should benefit survival, but not if you spend all of your time immersed in the Internet. The intermittent rewards can become addictive. The Internet, however, has not been around long enough, and is changing too rapidly, to know what the long-term effects will be on brain function. What is the ultimate price for omniscience?

Thinking like the Internet 

Nigel Goldenfeld

Physics, University of Illinois at Urbana-Champaign

I don't believe my way of thinking was changed by the Internet until around 2000. Why not? The answer, I suspect, is the fantastic benefit that comes from massive connectivity and the resulting emergent phenomena. Back in those days, the Internet was linear, predictable, and boring. It never talked back. But I'm starting to think like the Internet. My thinking is better, faster, cheaper and more evolvable because of the Internet. And so is yours. You just don't know it yet.

Greatest Detractor 

Leo Chalupa

Neurobiologist, University of California, Davis

The Internet is the greatest detractor to serious thinking since the invention of television. Moreover, while the Internet provides a means for rapidly communicating with colleagues globally, the sophisticated user will rarely reveal true thoughts and feelings in such messages. Serious thinking requires honest and open communication, and that is simply untenable on the Internet for those who value their professional reputation.

The Collective Brain

Matt Ridley

Science Writer 

Cultural and intellectual evolution depends on sex just as much as biological evolution does. Sex allows creatures to draw upon mutations that happen anywhere in their species. The Internet allows people to draw upon ideas that occur to anybody in the world. This has changed the way I think about human intelligence. The Internet is the latest and best expression of the collective nature of human intelligence.

Memory sharpener

Tom Standage 

Editor, The Economist

The Internet has not changed the way I think. What the Internet has done, however, is sharpen my memory. A quick search with a few well chosen keywords is usually enough to turn a decaying memory of a half-forgotten item into perfect recall of the information in question. This is useful now, but I expect it to become much more useful as I get older and my memory starts to become less reliable. Perhaps the same will be true of the way the Internet enhances our mental faculties in the years to come.

 

People in my head

Eva Wisten

Journalist, SEED Media Group

 

The Internet might not be changing how I think, but it does some of my thinking for me. And above all, the Internet is changing how I see myself. As real world activity and connections continue to be what matters most to me, the Internet, with its ability to record my behavior, is making it clearer that I am, in thought and in action, the sum of the thoughts and actions of other people to a greater extent than I have realized. 

 

Internet natives

Alison Gopnik

Psychologist, UC, Berkeley

The Internet has made my experience more fragmented, splintered and discontinuous. But I'd argue that's because I have mastered the Internet as an adult. So children who grow up with the Web will master it in a way that will feel as whole and natural as reading feels to us. But that doesn't mean that their experience and attention won't be changed by the Internet.

 

Repetition versus truth

Daniel Haun

Cognitive Anthropologist, Max Planck Institute

There is a human tendency to mistake repetition for truth. How do you find the truth on the Internet? You use a search engine, which determines a page's relevance by how many other relevant pages link to it. Repetition, not truth. Hence, the Internet does just what you would do. It isn't changing the structure of your thinking, because it resembles it.
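
Haun is describing, in one line, the heart of link-analysis ranking such as PageRank: a page's score is fed by the scores of the pages that point to it, so repetition of endorsement, not truth, drives the ordering. The sketch below is purely illustrative, a toy power-iteration PageRank over an invented four-page web (all page names and parameters are made up for the example), not any search engine's actual code.

    # A minimal, illustrative PageRank-style scorer (a sketch, not engine code).
    # Rank flows along links: the more, and the better-ranked, pages that point
    # at you, the higher you score. Nothing here checks whether a page is true.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}              # start uniform
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outgoing in links.items():
                if not outgoing:                        # dangling page: share evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # A much-repeated claim outranks a little-linked correction.
    toy_web = {
        "popular-claim": ["mirror-a", "mirror-b"],
        "mirror-a": ["popular-claim"],
        "mirror-b": ["popular-claim"],
        "lonely-correction": ["popular-claim"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")

Run as written, the heavily linked "popular-claim" comes out on top while "lonely-correction" stays at the bottom: exactly the repetition-for-truth substitution Haun describes.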

Exaggeration

Steven Pinker

Cognitive Psychologist, Harvard University

I'm skeptical of the claim that the Internet is changing the way we think. To be sure, many aspects of the life of the mind have been affected by the Internet. Our physical folders, mailboxes, bookshelves, spreadsheets, documents, media players, and so on have been replaced by software equivalents, which has altered our time budgets in countless ways. But to call it an alteration of "how we think" is, I think, an exaggeration.

Mental Clock

Stanislas Dehaene

Neuroscientist, Collège de France

Few people pay attention to a fundamental aspect of the Internet revolution: the shift in our notion of time. Human life used to follow a quiet routine; that routine has been radically disrupted, for better or for worse. Do we aim for ever faster intellectual collaboration? Or for ever faster exploitation that will allow us to get a good night's sleep while others do the dirty work? Our basic political options remain essentially unchanged.

Connecting is disconnecting 

Marc Hauser

Psychologist and Biologist, Harvard University

Our capacity to connect through the Internet may be breeding a generation of social degenerates. I do not have Webophobia, greatly profit from the Internet as a consummate informavore, and am a passionate one-click Amazonian. But our capacity to connect is causing a disconnect. Perhaps Web 3.0 will enable a function to virtually hold hands with our Twitter friends.

Diminished attention

Nicholas Carr

Author

My own reading and thinking habits have shifted dramatically since I first logged onto the Web fifteen or so years ago. I now do the bulk of my reading and researching online. And my brain has changed as a result. Even as I've become more adept at navigating the rapids of the Net, I have experienced a steady decay in my ability to sustain my attention. My own experience leads me to believe that what we stand to lose will be at least as great as what we stand to gain.

Diet-Internet

Rodney Brooks

Computer Scientist, MIT

The Internet is stealing our attention. Unfortunately, a lot of what it offers is merely good sugar-filled carbonated sodas for our mind. We, or at least I, need tools that will provide us with the Diet-Internet, the version that gives us the intellectual caffeine that lets us achieve what we aspire to, but which doesn't turn us into hyperactive intellectual junkies.

People Can Be Nice

Paul Bloom

Psychologist, Yale University

The proffering of information on the Internet is an extension of everyday altruism. It illustrates the extent of human generosity in our everyday lives and also shows how technology can enhance and expand this positive human trait, with real beneficial results. People have long said that the Web makes us smarter; it might make us nicer as well.

A miracle and a curse

Ed Regis 

Science writer 

The Internet is not changing the way I think (nor the way anyone else thinks, either). I continue to think the same way I always thought: by using my brain, my senses, and by considering the relevant information. I mean, how else can you think? What it has changed for me is my use of time. The Internet is simultaneously the world's greatest time-saver and the greatest time-waster in history.

Sharon Begley

Newsweek [1.6.10]

Shortened attention span. Less interest in reflection and introspection. Inability to engage in in-depth thought. Fragmented, distracted thinking.

The ways the Internet supposedly affects thought are as apocalyptic as they are speculative, since all the above are supported by anecdote, not empirical data. So it is refreshing to hear how 109 philosophers, neurobiologists, and other scholars answered, "How is the Internet changing the way you think?" That is the "annual question" at the online salon edge.org, where every year science impresario, author, and literary agent John Brockman poses a puzzler for his flock of scientists and other thinkers.

Although a number of contributors drivel on about, say, how much time they waste on e-mail, the most striking thing about the 50-plus answers is that scholars who study the mind and the brain, and who therefore seem best equipped to figure out how the Internet alters thought, shoot down the very idea. "The Internet hasn't changed the way we think," argues neuroscientist Joshua Greene of Harvard. It "has provided us with unprecedented access to information, but it hasn't changed what [our brains] do with it." Cognitive psychologist Steven Pinker of Harvard is also skeptical. "Electronic media aren't going to revamp the brain's mechanisms of information processing," he writes. "Texters, surfers, and twitterers" have not trained their brains "to process multiple streams of novel information in parallel," as is commonly asserted but refuted by research, and claims to the contrary "are propelled by ... the pressure on pundits to announce that this or that 'changes everything.' "

These changes in what people think are accompanied by true changes in the process of thinking—little of it beneficial. The ubiquity of information makes us "less likely to pursue new lines of thought before turning to the Internet," writes psychologist Mihaly Csikszentmihalyi of Claremont Graduate University. "Result: less sustained thought?" And since online information "is often decontextualized," he adds, it "satisfies immediate needs at the expense of deeper understanding (result: more superficial thought?)." Because facts are a click away, writes physicist Haim Harari, "the Internet allows us to know fewer facts ... reducing their importance as a component" of thought. That increases the importance of other components, he says, such as correlating facts, "distinguishing between important and secondary matters, knowing when to prefer pure logic and when to let common sense dominate." By flooding us with information, the Internet also "causes more confidence and illusions of knowledge" (Nassim Taleb of MIT, author of The Black Swan), but makes our knowledge seem "more fragile," since "for every accepted piece of knowledge I find, there is within easy reach someone who challenges the fact" (Kevin Kelly, cofounder of Wired).

And yet. Many scholars do believe the Internet alters thinking, and offer provocative examples of how—many of them surprisingly dystopian. Communications scholar Howard Rheingold believes the Internet fosters "shallowness, credulity, distraction," with the result that our minds struggle "to discipline and deploy attention in an always-on milieu." (Though having to make a decision every time a link appears—to click or not to click?—may train the mind's decision-making networks.) The Internet is also causing the "disappearance of retrospection and reminiscence," argues Evgeny Morozov, an expert on the Internet and politics. "Our lives are increasingly lived in the present, completely detached even from the most recent of the pasts ... Our ability to look back and engage with the past is one unfortunate victim." Cue the Santayana quote.

Even more intriguing are the (few) positive changes in thinking the Internet has caused. The hyperlinked Web helps us establish "connections between ideas, facts, etc.," suggests Csikszentmihalyi. "Result: more integrated thought?" For Kelly, the uncertainty resulting from the ubiquity of facts and "antifacts" fosters "a kind of liquidity" in thinking, making it "more active, less contemplative." Science historian George Dyson believes the Internet's flood of information has altered the process of creativity: what once required "collecting all available fragments of information to assemble a framework of knowledge" now requires "removing or ignoring unnecessary information to reveal the shape of knowledge hidden within." Creativity by destruction rather than assembly.

Sharon Begley is NEWSWEEK's science editor and author of The Plastic Mind: New Science Reveals Our Extraordinary Potential to Transform Ourselves and Train Your Mind, Change Your Brain: How a New Science Reveals Our Extraordinary Potential to Transform Ourselves.

OnPoint [1.6.10]

Every year, ideas impresario John Brockman asks one hundred super-bright minds one big question, and shares their answers with the world.

This time out, the question was about science and what big development in our lifetimes will change the world. What will change everything?

The answers — from Craig Venter, Richard Dawkins, Lisa Randall, Irene Pepperberg, and many more — range from mind-reading to space elevators to cross-species breeding. Yikes.

This hour, On Point: “This will change everything…”

You can join the conversation. Tell us what you think — here on this page, on Twitter, and on Facebook.

Guests:

John Brockman joins us from New York. He’s the founder of the Edge Foundation, which runs the science and technology website Edge.org. Every year, Edge asks scientists and thinkers a “big question,” and publishes the answers in a book, which Brockman edits. The latest, just out, is “This Will Change Everything: Ideas That Will Shape the Future.” It’s based on the 2009 question: “What game-changing scientific ideas and developments do you expect to live to see?” The 2010 question, “How is the internet changing the way you think?,” has just been posted.

From Cambridge, Mass., we’re joined by Frank Wilczek, Nobel Prize-winning theoretical physicist and professor of physics at MIT. His response to the 2009 Edge question discusses coming technological advances resulting from deeper understanding of quantum physics. He’s the author of several books on physics for the lay reader, most recently “The Lightness of Being: Mass, Ether, and the Unification of Forces.”

And from Berkeley, Calif., we’re joined by Alison Gopnik, professor of psychology and affiliate professor of philosophy at UC-Berkeley and an expert on cognitive and language development. Her response to the 2009 Edge question discusses the extension of human childhood. Her latest book is “The Philosophical Baby: What Children’s Minds Tell us About Truth, Love, and the Meaning of Life.”

CHICAGO SUN-TIMES [1.1.10]

I flunked a physics test so badly as a college freshman that the only reason I scored any points was I spelled my name right.

Such ignorance, along with studied avoidance of physics and math since college, didn’t lessen my enjoyment of This Will Change Everything, a provocative, demanding clutch of essays covering everything from gene splicing to global warming to intelligence, both artificial and human, to immortality.

Edited by John Brockman, a literary agent who founded the Edge Foundation, this is the kind of book into which one can dip at will. Approaching it in a linear fashion might be frustrating because it is so wide-ranging. ...

...Overall, this will appeal primarily to scientists and academicians. But the way Brockman interlaces essays about research on the frontiers of science with ones on artistic vision, education, psychology and economics is sure to buzz any brain.

Stewart Brand, the father of the Whole Earth Catalog, a kind of hippie precursor of hypertext and intermedia (the last term is a Brockman coinage), calls Brockman "one of the great intellectual enzymes of our time" at www.edge.org, Brockman’s Web site. Brockman clearly is an agent provocateur of ideas. Getting the best of them to politicians who can use them to execute positive change is the next step.

ARTS & LETTERS DAILY [12.31.09]

Articles of Note: John Brockman’s Edge question for 2010 asks over a hundred intellectuals, "Is the Internet changing the way you think?"

NIEMAN REPORTS [12.30.09]

As each new year approaches, John Brockman, founder of Edge, an online publication, consults with three of the original members of Edge—Stewart Brand, founder and editor of Whole Earth Catalog; Kevin Kelly, who helped to launch Wired in 1993 and wrote “What Technology Wants,” a book to be published in October (Viking Penguin); and George Dyson, a science historian who is the author of several books including “Darwin Among the Machines.” Together they create the Edge Annual Question—which Brockman then sends out to the Edge list to invite responses. He receives these commentaries by e-mail, which are then edited. Edge is a read-only site. There is no direct posting nor is Edge open for comments.

Brockman has been asking an Edge Annual Question for the past 13 years. In this essay, he explains what makes a question a good one to ask and shares some responses to this year’s question: “How is the Internet changing the way you think?” 

Read the responses in their entirety » 

It’s not easy coming up with a question. As the artist James Lee Byars used to say: “I can answer the question, but am I bright enough to ask it?” Edge is a conversation. We are looking for questions that inspire answers we can’t possibly predict. Surprise me with an answer I never could have guessed. My goal is to provoke people into thinking thoughts that they normally might not have. 

The art of a good question is to find a balance between abstraction and the personal, to ask a question that has many answers, or at least one for which you don’t know the answer. It’s a question distant enough to encourage abstractions and not so specific that it’s about breakfast. A good question encourages answers that are grounded in experience but bigger than that experience alone. 

Before we arrived at the 2010 question, we went through several months of considering other questions. Eventually I came up with the idea of asking how the Internet is affecting the scientific work, lives, minds and reality of the contributors. Kevin Kelly responded:


John, you pioneered the idea of asking smart folks what question they are asking themselves. Well I’ve noticed in the past few years there is one question everyone on your list is asking themselves these days and that is, is the Internet making me smarter or stupid? Nick Carr tackled the question on his terms, but did not answer it for everyone. In fact, I would love to hear the Edge list tell me their version: Is the Internet improving them or improving their work, and how is it changing how they think? I am less interested in the general “us” and more interested in the specific “you”—how it is affecting each one personally. Nearly every discussion I have with someone these days will arrive at this question sooner or later. Why not tackle it head on?

And so we did.

Yet, we still had work to do in framing our question. When people respond to “we” questions, their words tend to resemble expert papers, public pronouncements, or talks delivered from a stage. “You” leads us to share specifics of our lived experience. The challenge then is to not let responses slip into life’s more banal details. 

For us, discussion revolved around whether we’d ask “Is the Internet changing the way we think?” or probe this topic with a “you” focused question. Steven Pinker, Harvard research psychologist, author of “The Language Instinct” and “The Blank Slate,” and one of several distinguished scientists I consult, advised heading in the direction of “us.”


I very much like the idea of the Edge Question, but would suggest one important change—that it be about “us,” not “me.” The “me” question is too easy—if people really thought that some bit of technology was making their minds or their lives worse, they could always go back to the typewriter, or the Britannica, or the US Postal Service. The tough question is “us”: if every individual makes a choice that makes him or her better off, could there be knock-on effects that make the culture as a whole worse off (what the economists call “externalities”)?

Ultimately it’s my call so I decided to go with the “you” question in the hope that it would attract a wider range of individualistic responses. In my editorial marching orders to contributors, I asked them to think about the Internet—a much bigger subject than the Web, recalling that in 1996 computer scientist and visionary W. Daniel Hillis presciently observed the difference:

Many people sense this, but don’t want to think about it because the change is too profound. Today, on the Internet the main event is the Web. A lot of people think that the Web is the Internet, and they’re missing something. The Internet is a brand-new fertile ground where things can grow, and the Web is the first thing that grew there. But the stuff growing there is in a very primitive form. The Web is the old media incorporated into the new medium. It both adds something to the Internet and takes something away.

Early Responders

Framing the question and setting a high bar for responses is critical. Before launching the question to the entire Edge list, I invited a dozen or so people who I believed would have something interesting to say; their responses would seed the site and encourage the wider group to think in surprising ways. Here are some of these early responses:

  • Playwright Richard Foreman asks about the replacement of complex inner density with a new kind of self evolving under the pressure of information overload and the technology of the instantly available. Is it a new self? Are we becoming Pancake People—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button?

  • Technology analyst Nicholas Carr, who wrote The Atlantic cover story, “Is Google Making Us Stupid?,” asks whether the use of the Web has made it impossible for us to read long pieces of writing.

  • Social software guru Clay Shirky says the answer is “ ‘too soon to tell.’ This isn’t because we can’t see some of the obvious effects already, but because the deep changes will be manifested only when new cultural norms shape what the technology makes possible. ... The Internet’s primary effect on how we think will only reveal itself when it affects the cultural milieu of thought, not just the behavior of individual users.”

  • Web 2.0 pioneer Tim O’Reilly ponders if ideas themselves are the ultimate social software. Do they evolve via the conversations we have with each other, the artifacts we create, and the stories we tell to explain them?

  • Stewart Brand, founder of Whole Earth Catalog, cannot function without the major players in his social extended mind—his guild. “How I think is shaped to a large degree by how they think,” he writes. “Thanks to my guild’s Internet-mediated conversation, my neuronal thinking is enhanced immeasurably by our digital thinking.”

  • Hillis goes a step further by asking if the Internet will, in the long run, arrive at a much richer infrastructure in which ideas can potentially evolve outside of human minds. In other words, can we change the way the Internet thinks?

The Conversation

The 2010 question elicited, in all, 172 essays that comprised a 132,000-word manuscript published online by Edge in January. Kelly speaks about a new type of mind, amplified by the Internet, evolving, and able to start a new phase of evolution outside of the body. In “Net Gain,” evolutionary biologist Richard Dawkins looks 40 years into the future when “retrieval from the communal exosomatic memory will become dramatically faster, and we shall rely less on the memory in our skulls.” Nassim Taleb, author of “The Black Swan,” writes about “The Degradation of Predictability—and Knowledge” as he asks us to “consider the explosive situation: More information (particularly thanks to the Internet) causes more confidence and illusions of knowledge while degrading predictability.” Nick Bilton, lead writer of The New York Times’s Bits blog, notes that “[the] Internet is not changing how we think. Instead, we are changing how the Internet thinks.” Actor Alan Alda worries about “[speed] plus mobs. A scary combination.” He wonders, “Is there an algorithm perking somewhere in someone’s head right now that can act as a check against this growing hastiness and mobbiness?” New York Times columnist Virginia Heffernan writes that “we must keep on reading and not mistake new texts for new worlds, or new forms for new brains.” Numerous artists also responded in enlightening, evocatively headlined ways.

My Favorites

I enjoyed the juxtaposition of responses by psychologist Steven Pinker, “Not At All,” and Chinese artist and cultural activist Ai Weiwei, “I Only Think on the Internet.” The response I most admired is George Dyson’s “Kayaks vs. Canoes.” It is a gem:

 

In the North Pacific Ocean, there were two approaches to boatbuilding. The Aleuts (and their kayak-building relatives) lived on barren, treeless islands and built their vessels by piecing together skeletal frameworks from fragments of beach-combed wood. The Tlingit (and their dugout canoe-building relatives) built their vessels by selecting entire trees out of the rainforest and removing wood until there was nothing left but a canoe.

The Aleut and the Tlingit achieved similar results—maximum boat/minimum material—by opposite means. The flood of information unleashed by the Internet has produced a similar cultural split. We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now, we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.

I was a hardened kayak builder, trained to collect every available stick. I resent having to learn the new skills. But those who don’t will be left paddling logs, not canoes.

What do you think?

New Scientist [12.1.09]

"a stellar cast of intellectuals ... a stunning array of responses"

HOLIDAY BOOKS: This Will Change Everything edited by John Brockman; 
John Brockman's annual question draws a bewildering array of responses from a stellar cast of intellectuals

by Michael Bond

LITERARY agent John Brockman assembles a stellar cast of intellectuals each year to answer a boundary-pushing question. His latest poser — "What game-changing scientific ideas and developments do you expect to live to see?" — has drawn a stunning array of responses, from nuclear terrorism to in-vitro meat.

Some ideas are predictable (immortality, intelligent robots, designer children), some world-saving if they happened (oil we can grow) and some we'd be better off without (neuro-cosmetics). Many are self-indulgent technological fantasies. With contributions from Ian McEwan, Steven Pinker, Lee Smolin, Craig Venter, Richard Dawkins and 130 others of their ilk, the book is like an intellectual lucky dip.

Perfect for: anyone who wants to know what the big thinkers will be chewing on in 2010.

SEED [11.30.09]

"brilliant ... captivating ... overwhelming"

Books to read (and give) now
SEED PICKS DECEMBER 1, 2009

This Will Change Everything: Ideas That Will Shape the Future
Edited by John Brockman (Harper Perennial)

The latest prophetic collection from John Brockman of Edge.org invites scores of the world's most brilliant thinkers, including Richard Dawkins, Lisa Randall, and Brian Eno, to predict what game-changing events will occur in their lifetimes. Their speculations run the existential gamut, as some predict deliberate nuclear disaster or accidental climatic apocalypse and others foresee eternal life, unlimited prosperity, and boundless happiness. Between such extremes of heaven and hell lie more ambiguous visions: An end to forgetting, the creation of intelligent machines, and cosmetic brain surgery, to name a few. Poring over these pages is like attending a dinner party where every guest is brilliant and captivating and only wants to speak with you—overwhelming, but an experience to savor.

Andrian Kreye, Suddeutsche Zeitung [11.17.09]

When the mind can no longer keep up with the Internet: Frank Schirrmacher's book "Payback" brings the digital debate up to date, but no further.

There is no country in the industrialized world where the debate about the Internet's influence on society is conducted with as many dogmatic encrustations and ideological escalations as in Germany. The digital divide that runs through our country mostly follows the generational line between "digital natives" and "digital immigrants", that is, between those who grew up with the Internet and those who first encountered digital technologies as adults.


[Photo caption: Schirrmacher's strength is combining an intellectual's thirst for knowledge with the hunting instincts of a tabloid journalist. Photo: dpa]

Yet the subject has long since outgrown the stingy quarrel over old and new media habits and copyright questions, or the political scaremongering about killer games and child pornography, to which digital debates in Germany usually come down. The new book by FAZ publisher and feuilletonist Frank Schirrmacher, "Payback" (Blessing Verlag, Munich, 2009, 240 pages, 17.95 euros), finally enriches the debate with intelligent ideas, even if the subtitle, "Why in the information age we are forced to do what we do not want to do, and how we can regain control over our thinking", at first sounds like the usual mixture of cultural pessimism and self-help.

The subtitle should not be underestimated. Schirrmacher's journalistic strength is combining a polymath's thirst for knowledge with the hunting instincts of a tabloid journalist. That makes competing with him so sporting, and it makes his books and the debates he launches pinpoint landings in the zeitgeist. That he often plays on anxieties in the process, such as the fear of an aging society in his bestseller "Das Methusalem-Komplott" or the fear of social uprooting in "Minimum", he owes to his tabloid instinct, which detects such anxieties early and can set them in context.

The pressure of social obligations

Auch "Payback" verkauft sich als Begleitbuch zu aktuellen Ängsten. Schirrmacher greift jenes Gefühl der digitalen Überforderung auf, das sich nicht nur in Deutschland, sondern in allen digitalisierten Ländern breitmacht. Denn die Siegeszüge dreier digitaler Technologien haben in den vergangenen beiden Jahren die Grenzen der digitalen Aufnahmebereitschaft ausgereizt.

First there was the iPhone with its by now roughly 20,000 "apps", programs that turn the Apple phone into a supercomputer. Then the networking site Facebook raised the pressure of social obligations on the Net beyond all measure. And finally the short-message service Twitter opened the floodgates to a torrent of information that can only be managed with a palette of helper programs. Europe and America have long since produced countless articles and books on this overload.

"Mein Kopf kommt nicht mehr mit", heißt auch das erste Kapitel von "Payback". Da beschreibt Schirrmacher, stellvertretend für viele, seine ganz persönliche kognitive Krise, in die ihn die digitalen Datenmengen gestürzt haben. Wie ein Fluglotse fühle er sich, immer bemüht, einen Zusammenstoß zu vermeiden, immer in Sorge, das Entscheidende übersehen zu haben. Mehr als ein Lassowurf ist dieser Einstieg nicht, denn letztlich führt er über den Identifikationsmoment nur in den ersten der beiden Teile des Buches ein. Und da geht es um mehr.


Frank Schirrmacher, THE HUFFINGTON POST [11.14.09]

[The conversation was in English, Schirrmacher's second language. Rather than edit the piece for grammar, and risk losing the spontaneity of the conversation, I present it here -- for the most part -- verbatim. -- John Brockman]

The question I am asking myself, one that arose through work and through discussion with other people, and especially through watching other people, watching them act and behave and talk, is how technology, the Internet, and modern systems have now apparently changed human behavior, the way humans express themselves, and the way humans think in real life. So I've profited a lot from Edge.

We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, it's just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on. There is one comment on Edge which I love, which is in Daniel Dennett's response to the 2007 annual question, in which he said that we have a population explosion of ideas, but not enough brains to cover them.

As we know, information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know -- this is the old Darwinian thought, the moment when Darwin started reading Malthus -- when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations. And so what interests me is that we are, because we have the Internet, now entering a phase where Darwinian structures, where Darwinian dynamics, Darwinian selection, apparently attacks ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker.

Here European thought is quite interesting: our whole history of thought, especially in the eighteenth, nineteenth, and twentieth centuries, from Kant to Nietzsche. Hegel, for example, in the nineteenth century, where you ask which thought, which thinking succeeds and which one doesn't. We have phases in the nineteenth century where you could have chosen either way. You could have gone the way of Schelling, for example, the German philosopher, which was totally different from that of Hegel. And so this question of what survives, which idea survives and which idea drowns, which idea starves to death, is something which, in our whole system of thought, is very well known, and is quite an issue. And now we encounter this structure, this phenomenon, in everyday thinking.

It's the question: what is important, what is not important, what is important to know? Is this information important? Can we still decide what is important? And it starts with this absolutely normal, everyday news. But now you encounter, at least in Europe, a lot of people who think, what in my life is important, what isn't important, what is the information of my life. And some of them say, well, it's in Facebook. And others say, well, it's on my blog. And, apparently, for many people it's very hard to say it's somewhere in my life, in my lived life.

Of course, everybody knows we have a revolution, but we are now really entering the cognitive revolution of it all. In Europe, and in America too -- and it's not by chance -- we have a crisis of all the systems that are somehow linked to either thinking or to knowledge. It's the publishing companies, it's the newspapers, it's the media, it's TV. But it's the university as well, and the whole school system, where it is not a normal crisis of too few teachers, too many pupils, too small universities, too big universities, or whatever.

Now, it's totally different. When you follow the discussions, there's the question of what to teach, what to learn, and how to learn. Even universities and schools are suddenly confronted with the question: how can we teach? What is the brain actually taking in? Or the problems which we have with attention deficit and all that, which are reflections and, of course, results, in a way, of the technical revolution.

Gerd Gigerenzer, to whom I talked and who I find a fascinating thinker, put it in such a way that thinking itself somehow leaves the brain and uses a platform outside of the human body. And that's the Internet and it's the cloud. And very soon we will have the brain in the cloud. And this raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else.

The European point of view, with our history of thought, and all our idealistic tendencies, is that now you can see -- because they didn't know that the Internet would be coming, in the fifties or sixties or seventies -- that the whole idea of the Internet somehow was built in the brains, years and decades before it actually was there, in all the different sciences. And when you see how the computer -- Gigerenzer wrote a great essay about that -- how the computer at first was somehow isolated, it was in the military, in big laboratories, and so on. And then the moment the computer, in the seventies and then of course in the eighties, was spread around, and every doctor, every household had a computer, suddenly the metaphors that were built in the fifties, sixties, seventies, then had their triumph. And so people had to use the computer. As they say, the computer is the last metaphor for the human brain; we don't need any more. It succeeded because the tool shaped the thought when it was there, but all the thinking, like in brain sciences and all the others, had already happened, in the sixties, seventies, fifties even.

But the interesting question is, of course, the Internet -- I don't know if they really expected the Internet to evolve the way it did -- I read books from the nineties where they still didn't really know that it would be as huge as it is. And, of course, nobody predicted Google at that time. And nobody predicted the Web.

Now, what I find interesting is that if you see the computer and the Web, and all this, under the heading of "the new technologies," we have, in the late nineteenth century, this big discussion about the human motor. The new machines in the late nineteenth century required that the muscles of the human being should be adapted to the new machines. Especially in Austria and Germany, we have this new thinking, where people said, first of all, we have to change muscles. The term "calories" was invented in the late nineteenth century, in order to optimize the human work force.

Now, in the twenty-first century, you have all the same issues, but with the brain: what was once the adaptation of muscles to the machines now comes under the heading of multitasking -- which is quite a problematic issue. The human muscle in the head, the brain, has to adapt. And, as we know from very recent studies, it's very hard for the brain to adapt to multitasking, which is only one issue. And again the same with calories and all that. I think it's very interesting, the concept -- again, Daniel Dennett and others said it -- the concept of the informavore, the human being as somebody eating information. So you can, in a way, see that the Internet, and the information overload we are faced with at this very moment, has a lot to do with food chains, has a lot to do with food you take or don't take, with food which has many calories and doesn't do you any good, and with food that is very healthy and is good for you.

The tool is not only a tool, it shapes the human who uses it. We always have the concept, first you have the theory, then you build the tool, and then you use the tool. But the tool itself is powerful enough to change the human being. God as the clockmaker, I think you said. Then in the Darwinian times, God was an engineer. And now He, of course, is the computer scientist and a programmer. What is interesting, of course, is that the moment neuroscientists and others used the computer, the tool of the computer, to analyze human thinking, something new started.

The idea that thinking itself can be conceived in technical terms is quite new. Even in the thirties, of course, you had all these metaphors for the human body, even for the brain; but, for thinking itself, this was very, very late. Even in the sixties, it was very hard to say that thinking is like a computer.

You had once in Edge, years ago, a very interesting talk with Pattie Maes on "Intelligence Augmentation", when she was one of the first to invent these intelligent agents. And there, you and Jaron Lanier, and others, asked the question about the concept of free will. And she explained it and it wasn't that big an issue, of course, because it was just intelligent agents like the ones we know from Amazon and others. But now, entering the real-time Internet and all the other possibilities in the near future, the question of predictive search and others, of determinism, becomes much more interesting. The question of free will always was a kind of theoretical question -- even very advanced people said, well, we declare there is no such thing as free will, but we admit that people, during their childhood, will have been culturally programmed so they believe in free will.

But now you have a generation -- in the next evolutionary stages, the child of today -- which is adapted to systems such as the iTunes "Genius", systems which not only know which book or which music file they like, but which go farther and farther in predicting certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it.

What will this mean for the question of free will? Because, at bottom, there are, of course, algorithms that analyze or calculate certain predictabilities. And I'm wondering if the comfort of free will, or no free will, would be a very, very tough issue of the future. At this very moment, we have a new government in Germany; they are just discussing what kind of effect this will have on politics. And one of the issues, which of course at this very moment seems to be very isolated, is the question of how to predict certain terrorist activities from blogs -- as you know, in America, you have the same thing. But this can go farther and farther.

The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives. And, of course, this will play out in the work force -- the new German government seems to be very keen on this issue, to at least prevent the worst impact on people, on workplaces.

It's very important to stress that we are not talking about cultural pessimism. What we are talking about is that a new technology -- in fact a brain technology, to put it this way, a technology that has to do with intelligence, that has to do with thinking -- now clashes in a very real way with the history of thought in the European way of thinking.

Unlike America, as you might know, in Germany we had, in the last elections, for the first time a party that comes entirely out of the Internet. They are called The Pirates. At the beginning they were computer scientists concerned with questions of copyright and all that, but it's now much, much more. In the recent election, out of the blue, they received two percent of the votes, which is a lot for a new party that exists only on the Internet. And the voters were mainly young males -- 30, 40, 50 percent of them. Many, many young males, all very keen on new technologies; of course, they are computer kids and all that. But this party now, for the first time, translates what we knew only theoretically into a very pragmatic and political form. For example, one of the main issues, as I just described -- the adaptation of muscles, whether in the brain or in the body, to modern systems -- is a question of digital Taylorism.

As far as I can see, there are three important concepts of the nineteenth century that somehow come back in a very personalized way, just as you have a personalized newspaper. The first is Darwinism -- and in a very real sense: look at the problem between Google and the newspapers. Darwinism, but also the whole question of who survives on the net, in the thinking: who gets more traffic, who gets less traffic, and so on. Then you have the concept of communism, which comes back in the question of free -- the fact that people work for free. And not only the people who sit at home and write blogs: many people in publishing companies and newspapers also do a lot of things for free, or offer them for free. And then, third, of course, Taylorism, which had seemed a dead issue; now we have digital Taylorism, but with an interesting switch. In the nineteenth and early twentieth century, you could still make others responsible for your own deficits: you could say, well, this is just really terrible, it's exhausting, it's not human, and so on.

Now look, for example, at the concept of multitasking, which is a real problem for the brain. You don't think that others are responsible for it; instead you meet many people who say, well, I am not really good at it, it's my problem, I forget, I am just overloaded by information. What I find interesting is that three huge political concepts of the nineteenth century come back in a totally personalized way, and that we now, for the first time, have a political party -- a small one, but one that will in fact influence the other parties -- which addresses these issues, again, in this personalized way.

It's a kind of catharsis, this Twittering and so on. But now, of course, this kind of information competes with many other kinds of information. And one could argue -- I know that was the case with Iran -- that in the future, the Twitter information about an uprising in Iran will compete with the Twitter information about Ashton Kutcher or Paris Hilton, and so on. The task is to understand what is important. Deciding what is important and what is not is something very linear; it needs time, at least the structure of time. Now you have simultaneity, everything happening in real time. And this impacts politics in ways that might be considered for the good, but also for the bad.

Because suddenly it's gone again, replaced by the next piece of information, and the next. And -- this again has very much to do with the concept of the European self, with taking oneself seriously, and so on -- as Google puts it, if I understand it rightly, all these webcams and cell phones are full of information: photos, videos, whatever. And all of it should be shared, if people want that. And among all the thoughts being expressed in any university at this very moment, there could be thoughts we really should know. In the nineteenth century, that was not possible. But maybe there is one student who is much better than any of the thinkers we know. So we will have an overload of all this information, and we will be dependent on systems that calculate, that make the selection of this information.

And, as far as I can see, political information is somehow not distinct from it. It's the same issue: whether I have information from my family on the iPhone, or information about our new government. And so this incredible amount of information somehow becomes equal, and very, very personalized. You have personalized newspapers. This will be a huge problem for politicians. From what I hear, they are now very interested, for example, in Google's PageRank -- in the question of how, with mathematical systems, you can create information cascades as a kind of artificial information overload. And, as you know, you can do this. We are just not prepared for that, and it's not far off: in the last elections we had, for the first time, blogs where you could see information cascades being created, not only by human beings but also by bots and other tools. And this is, as I say, only the beginning.
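
For readers who know the name but not the mechanics, here is a minimal sketch of the computation "PageRank" refers to -- the textbook power-iteration formulation on a made-up three-page link graph, not Google's production system and not the cascade techniques Schirrmacher alludes to:

```python
import numpy as np

# Minimal PageRank by power iteration. The link graph below is invented
# purely for illustration; nothing here is specific to Google's system.

def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Rank pages on a graph where adj[i, j] == 1 means page j links to page i."""
    n = adj.shape[0]
    out = adj.sum(axis=0)          # number of outlinks per page
    out[out == 0] = 1.0            # guard against pages with no outlinks
    m = adj / out                  # column-stochastic transition matrix
    rank = np.full(n, 1.0 / n)     # start from a uniform distribution
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (m @ rank)
    return rank

# Hypothetical web: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
links = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [1, 1, 0]], dtype=float)
print(pagerank(links))  # the heavily linked pages 0 and 2 outrank page 1
```

The point of showing it is only that the ranking is an ordinary, manipulable calculation: anything that inflates a page's inbound links -- including bot-driven cascades -- feeds straight into the result.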

Germany still has a very strong anti-technology movement, which is quite interesting insofar as you can't really say it's left-wing or right-wing; as you know, very right-wing people, especially in German history, were very anti-technology. But that has changed a lot. Why it took so long has, I would say, demographic reasons. We are an aging society, and the generation that is now 40 or 50 in Germany had its children very late. The whole generational change came slowly: the new cohort is smaller, and it arrived later. It's not like the fifties, sixties, and seventies, with Warhol -- those were young societies, and change happened very fast; we took over all these interesting influences from America very, very quickly because we were a young society. This time it took longer, but now, for demographic reasons, we are certainly entering a situation in which a new generation -- as you see with The Pirates as a party -- a generation that grew up with modern systems and modern technology, is taking the stage and changing society.

One must say that all the big companies are American companies, except SAP; Google and all the others are American companies. I would say we weren't very good at inventing, and we are not very good at getting people to study computer science and other things. And I must say -- and this is not meant as flattery of America, or Edge, or you, or whosoever -- what I really miss is that we don't have this type of computationally minded intellectual (though it started in Germany once, decades ago) such as Danny Hillis and the other people who take part in this kind of intellectual discussion, even if only a happy few read and react to it. Not many German thinkers have adopted this kind of computational perspective.

The ones who do exist have their own platform and actually created a new party. This is something we are missing, because there has always been a kind of attitude of arrogance toward technology. For example, I am responsible for the entire cultural and science sections of FAZ, and we published reviews of all these wonderful books on science and technology -- that's fascinating, and that's good. But in a way the really important texts -- the texts that somehow write our lives today and that are, in a way, the stories of our lives -- are, of course, software, and those texts were never reviewed. We should have found ways of transcribing what happens at the level of software much earlier -- like Pattie Maes and others -- of rewriting it so that people understand what it actually means. I think this is a big gap.

What did Shakespeare and Kafka and all these great writers actually do? They translated society into literature. And at that stage, of course, society was something very real, something you could see; they translated modernization into literature. Now we have to find people who translate what happens at the level of software. At least in newspapers, we should have sections reviewing software in a different way -- at least the structures of software.

We are just beginning to look at this in Germany, and we are looking for people -- there are not very many -- who have the ability to translate that. It needs to be done, because that's what makes us who we are. You will never really understand in detail how Google works, because you don't have access to the code; they don't give you the information. But just think of George Dyson's essay "Turing's Cathedral," which I love. This is a very good beginning. He absolutely gets the point: it is today's version of the kind of cathedral we would be entering if we lived in the eleventh century. It's incredible that people are building this cathedral of the digital age. And as he points out, when he visited Google he saw all the books they were scanning, and was told they are not scanning these books for humans to read, but for the artificial intelligence to read.

Who are the big thinkers here? In Germany, for me at least, for my work, there are a couple of key figures. One of them is Gerd Gigerenzer, who I would say is actually avant-garde at this very moment, because what he does is teach heuristics. And from what we see, the technologies are amputating heuristics as well: people forget certain rules of thumb. It starts with calculation, because you have the calculator, but it goes much further, and you will lose many more heuristics in the future because the systems -- Google and all the others -- are doing that for you. So Gigerenzer's thinking -- he now has a big institute, working on risk assessment as well -- is very, very important. You could actually link him, in a way, to Nassim Taleb, because here again you have the whole question of risk assessment, of looking back and looking into the future, and all that.

Very important in literature, still, though he is now around 80 years old, is of course Hans Magnus Enzensberger. Peter Sloterdijk is a very important philosopher -- a kind of literary figure, but he is important. But unlike in the nineteenth or twentieth century, there are not many leading figures. I must say that Gigerenzer, too, writes all his books in English. We also have quite interesting people, at this very moment, in law, which matters greatly for discussions of copyright and all that. But the conversations about new technologies and human thought don't, at this very moment, really take place in Germany.

There are European thinkers who have cult followings -- Slavoj Žižek, for example. Ask any intellectual in Germany and they will tell you Žižek is just the greatest. He's a kind of communist -- he even considers himself a Stalinist. But these are, of course, all labels. Wild thinkers. Europeans, at this very moment, love wild thinkers.

David Pescovitz, Boing Boing [11.4.09]

We make technology, but our technology also makes us. At the online science/culture journal Edge, BB pal John Brockman went deep -- very deep -- into this concept. Frank Schirrmacher is co-publisher of the national German newspaper FAZ and a very, very big thinker. Schirrmacher has raised public awareness and discussion of some of the most controversial topics in science research today, from genetic engineering to the aging population to the impacts of neuroscience. At Edge, Schirrmacher riffs on the notion of the "informavore," an organism that devours information like it's food. After posting Schirrmacher's thoughts, Brockman invited other bright folks to respond, including the likes of George Dyson, Steven Pinker, John Perry Barlow, Doug Rushkoff, and Nick Bilton. Here's a taste of Schirrmacher, from "The Age of the Informavore":

We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets and other stuff to remember certain things. This is the beginning; it's just an experience. But if you think about it, and you think about your own behavior, you suddenly realize that something fundamental is going on. There is one comment on Edge which I love, in Daniel Dennett's response to the 2007 annual question, in which he said that we have a population explosion of ideas, but not enough brains to cover them.

As we know, information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know -- this is the old Darwinian thought, the moment when Darwin started reading Malthus -- when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations. And so what interests me is that we are, because we have the Internet, now entering a phase where Darwinian structures, where Darwinian dynamics, Darwinian selection, apparently attack ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker...

It's the question: what is important, what is not important, what is important to know? Is this information important? Can we still decide what is important? And it starts with this absolutely normal, everyday news. But now you encounter, at least in Europe, a lot of people who think: what in my life is important, what isn't important, what is the information of my life? And some of them say, well, it's on Facebook. And others say, well, it's on my blog. And, apparently, for many people it's very hard to say it's somewhere in my life, in my lived life.

STUTTGARTER ZEITUNG [10.21.09]

Natural scientists used to write short articles in specialist journals, while humanities scholars wrote thick tomes. Today natural scientists write thick tomes too. Photo: dpa

Stuttgart - The boundaries between the "cultures" are blurring. The mental has long since become a subject of empirical natural science; nature, an object of interpretation for philosophers and other humanities scholars. This becomes clearest wherever information, in the broadest sense, is at stake: in communication science, neuropsychology, robotics, and memory research. Information is the elementary unit of all mental processes; at the same time, information processes can often be studied with the methods of natural science and put to versatile technical use.

 



The boundaries are blurring, moreover, in those sciences devoted to the many-faceted emergence of human culture. The time scales on which evolutionary researchers and historians operate now merge seamlessly into one another. Today researchers describe the history of thinking -- and thus of the mind -- not only, but also, in terms of neuroscientific and evolutionary models. And in the debates of the present -- bioethics, neuroethics, global change -- representatives of both "cultures" encounter one another.

There has been convergence on further points as well. For Charles Percy Snow, one important difference between the two also lay in the nature of their publications: natural scientists write short articles in specialist journals, whereas humanities scholars write thick tomes. Today natural scientists do so too. Researchers such as Richard Dawkins and Gregory Bateson began as early as the 1970s, and many more have joined them since: mathematicians such as Roger Penrose, biologists such as Lynn Margulis, geographers such as Jared Diamond, and psychologists of language such as Steven Pinker (the Germans are only slowly catching up).

The literary agent John Brockman once described this breed of scientists as representatives of a "third culture": they come from the "exact" sciences and concern themselves with fundamental questions of human existence. And they write thick books about them, developing -- like "real" humanities scholars -- a thesis of their own across hundreds of pages. Inspired by Brockman's theses, at the end of the 1990s the FAZ began to engage with developments in the natural sciences in its arts pages as well. Since around the same time, "Spiegel" has regularly put "third culture" topics on its cover, enticing its readers with features on the origin of language, the end of the universe, or neurotheology.

Admittedly, at least in its ambition, this is not entirely new. Brockman's "third culture" corresponds fairly closely to what Hegel called Realphilosophie: the application of logic and exact thinking to the real world. The term deserves a revival. In contrast to traditional concept-focused, literary philosophy, Realphilosophie stands for systematic reflection on existential questions on the basis of hard empirical data. It keeps asking where empirical science reaches its limits, and it does so at every level of the world's organization: cosmos, life, mind, and culture.

Realphilosophie also harbors untapped potential. It is often lamented that too few young people are interested in science and technology, and so attempts are made to win them over to these subjects with more hands-on classroom offerings. At the same time, however, an opportunity is being missed: to spark interest through the fascination of realphilosophical questions, and thereby to convey an understanding of contemporary scientific thinking.

THIRD CULTURE NEWS [10.21.09]

Climate change? Living to 200? Genetic manipulation? Not necessarily. Leading scientists of our time answer the question: what is your most dangerous idea?

He is "a kind of thinker that does not exist in Europe," according to the Turin daily "La Stampa." The New York author, literary agent, and business and political consultant John Brockman, 68, is a colorful character, a lateral thinker who manages to bring together people and ideas that at first glance do not seem to belong together. One of his many books is titled "Einstein, Gertrude Stein, Wittgenstein und Frankenstein" (1993). Brockman loves provocation; he is a great friend of art, science, technology, the media, and the Internet. An intellectual catalyst.

Fascinated by new, unconventional ideas, he is a busy networker. According to his friend Richard Dawkins, he possesses "the most enviable address book in the English-speaking world." In 1997, with the Internet platform Edge (www.edge.org), he created a kind of Facebook for thinkers, where leading minds not only present their own ideas and projects but also comment on the thoughts of others, "deliberately in a spirit of provocation," as Brockman says. In his own words, Edge presents "speculative ideas, explores new territory in evolutionary biology, genetics, computer science, neurophysiology, psychology, and physics" and offers answers to questions such as: what are the origins of the universe, of life, of the mind? From the most exciting answers Brockman created a book that has recently appeared in German. Supplemented with contributions from top Austrian scientists, profil publishes excerpts from the work, titled "Was ist Ihre gefährlichste Idee? Die führenden Wissenschaftler unserer Zeit denken das Undenkbare," edited by John Brockman and translated from the English by Hans Günter Holl. © 2009, S. Fischer Verlag GmbH, Frankfurt am Main, 340 pp., EUR 10.30.

J. Craig Venter: New genome tools instead of political correctness
DNA sequencing technology is advancing ever faster, and we are approaching the point at which there will no longer be a handful of human genome sequences but complex databases holding hundreds of thousands, and then millions, of complete genomes. Within a decade we will rapidly be collecting the complete genetic codes of individuals together with their repertoire of phenotypes (…).
It is true that we can trace the behavior of other mammals back to genes and genetics, yet when it comes to humans we seem to be in love with the notion that we are all born equal, that every baby is a "blank slate." But as the sequences of ever more mammalian and human genomes accumulate, they will (…) force us to abandon such politically correct interpretations, since the new genome tools will make it possible to distinguish precisely between heredity and environmental influence. (…) The danger lies in what we already know -- that we are not all born equal -- and a further danger arises from the ability to quantify the genetic side of the equation (…).

J. Craig Venter, 63, decoder of the human genome, is founding president of the J. Craig Venter Institute, which works on synthetic biology.

Paul C. W. Davies: Global warming for a better life
Some countries, among them the United States and Australia, persist in denying global warming (…). Others (…) fear the worst and want to reduce greenhouse-gas emissions drastically. Both positions are irrelevant, because the battle is a lost cause. Despite the recent rise in oil prices, the stuff is still not expensive enough not to burn. (…) The advocates of drastic countermeasures warn that a warming Earth would make life worse. My dangerous idea is that this is probably not true.

Certainly (…) the sea level will rise, flooding some densely populated or fertile coastal regions. In return, however, Siberia could rise to become the breadbasket of the Earth. (…) Nothing suggests that everything would get worse -- though we will undoubtedly have to adapt, and adaptation is always painful.

Paul C. W. Davies, 63, is a physicist and cosmologist at Arizona State University and author of the book "So baut man eine Zeitmaschine. Eine Gebrauchsanweisung."

Rodney Brooks: Alone in space, and therefore religious
What worries me most, whether it proves true or not, is that the spontaneous transition from non-living to living matter might be extremely improbable. We know it happened once, but what if, over the coming decades, we gathered a great deal of evidence that it occurs exceedingly rarely? (…) To be alone in the solar system would not be so frightening, but alone in the galaxy -- or, worse still, alone in the universe -- that would probably drive us to despair and back into the arms of religion as our consoler.

Rodney Brooks, 55, directs the MIT Computer Science and Artificial Intelligence Laboratory. He is the author of the book "Menschmaschinen. Wie uns die Zukunftstechnologien neu erschaffen."

Paul W. Ewald: Health as a death sentence for doctors
My dangerous idea is that we have almost all the information we need to usher in a new golden age of medicine. (…) In this golden age we should manage, within a relatively short time and with far less money than commonly assumed, to prevent most serious diseases. That sounds good. Why should it be dangerous?

A host of dangers arises because ideas that challenge the status quo threaten the livelihoods of quite a few people. (…) Imagine what would happen if the great afflictions -- cancer, arteriosclerosis, stroke, diabetes, and so on -- were largely overcome through prevention.

Pharmaceutical giants would shrink as demand for prescription drugs declined. The prestige of doctors would fade, because they would no longer be needed to prolong life.

Paul W. Ewald, 55, is an evolutionary biologist and head of the Program in Evolutionary Medicine at the University of Louisville. He is the author of "Plague Time."

Martin Rees: Science as catastrophe
Opinion polls (at least in Britain) reveal, alongside a basically positive attitude toward science, a widespread worry that it might "run out of control." This idea is dangerous because it could act as a self-fulfilling prophecy. In the twenty-first century, technology will change the world faster than ever before: the Earth's climate, our way of life, even human nature itself. (…) So the choices we make, individually and collectively, will ultimately help decide whether the scientific results of the twenty-first century turn out to be beneficial or devastating.

Sir Martin Rees, 67, is President of the Royal Society, professor of astrophysics, and Master of Trinity College, Cambridge. One of his many books is titled "Unsere letzte Stunde. Warum die moderne Naturwissenschaft das Überleben der Menschheit bedroht."

Samuel Barondes: Personality change by prescription
In recent decades, certain psychopharmaceuticals have become one more tool for people who want to change their lives. Originally intended as short-term medication for episodic mental disorders such as severe depression, they are now widely prescribed on a long-term basis to bring about certain personality changes (…). These drugs act directly on brain circuits that regulate emotion and can thereby set in motion desirable developments that would be hard to achieve through willpower or behavioral exercises alone. Year after year, millions take them regularly to remodel their personalities.

Nevertheless, the idea of using such drugs for personality change remains dangerous -- and not only because manipulating brain chemistry might be inherently pathetic, immoral, or socially threatening (…). The reason for caution is rather that there are as yet no controlled studies of how such substances affect the personality when used continuously.

Samuel Barondes, 76, is director of the Center for Neurobiology and Psychiatry at the University of California, San Francisco. His books include "Better Than Prozac: Creating the Next Generation of Psychiatric Drugs."

John Horgan: Humans have no soul
I want to take a closer look here at the dangerous (and probably true) idea that human beings have no soul. (…) Until recently it made sense to attach a large question mark to this conjecture, since neuroscience had not yet managed to trace cognition back to specific neural processes. (…) Lately, however, the gaps have been closing, as neuroscientists (…) have begun to decipher the so-called neural code: the programs that translate electrochemical impulses in the brain into perceptions, memories, decisions, emotions, and the other building blocks of consciousness. (…) Will this knowledge one day free us or enslave us? Officials at the Pentagon, which spends more money on research into the neural code than anyone else in the world, have already openly considered designing cyborg warriors who could be remote-controlled via brain implants (…). If our minds could be programmed like computers, we might finally give up the belief in an immortal, inviolable soul -- unless we programmed ourselves to keep it.

John Horgan, 56, directs the Center for Science Writings at the Stevens Institute of Technology. His latest book is "Rational Mysticism: Spirituality Meets Science in the Search for Enlightenment."

Peter C. Aichelburg: Time travel into the cold
A wormhole is a kind of tunnel to distant regions of the universe. Such objects can also turn out to be time machines; that is, the wormhole connects not only distant regions but also different times. An object that falls in comes out again somewhere in the vastness of the cosmos, possibly at an earlier time. Should the Earth actually fall into such a wormhole, we would suddenly find ourselves in a completely new environment in the universe, and it would be good if our Sun came along. As for time travel, we should on no account travel to a time before there were any stars, when the universe was dense and hot, for that would be most uncomfortable.

Peter C. Aichelburg, 68, emeritus professor of physics and chairman of the board of trustees of the European Forum Alpbach, has done research at a number of leading international institutes. His focus: gravitation and cosmology.

Ray Kurzweil: Life becomes endless
The capacities of information technologies double every year and are, moreover, spreading into areas beyond computing -- into biology, for example, or human intelligence. (…) We are also finding means to reprogram the original, fundamental information processes of biology (…). If we think linearly, the idea of eliminating all disease and aging seems to lie in the distant future, as the genome project did in 1990. If, however, we factor in the annual doubling, the prospect of a radical extension of life lies only a few decades ahead.

Ray Kurzweil, 61, is an inventor and technologist. His latest book is "The Singularity Is Near: When Humans Transcend Biology."

Mihaly Csikszentmihalyi: The free market destroys everything
One of the most dangerous ideas in our culture today is that the "free market" ultimately governs all political decisions, and that there is an "invisible hand" steering us toward the most advantageous future so long as the free market is always allowed to prevail. This mystical belief rests on quite solid empirical foundations, but if it is taken to be the final solution to all of humanity's problems, it threatens to destroy both the material resources and the cultural achievements that our species has created with great effort.

Mihaly Csikszentmihalyi, 75, a psychologist, directs the Quality of Life Research Center at Claremont Graduate University. He is the author of "Flow – der Weg zum Glück."

Josef Smolen: Ineffective drugs
Over the past decade, new drugs -- the so-called biologics -- have been developed against rheumatoid arthritis, a chronic joint inflammation that affects about one in 100 adults and causes pain, joint destruction, early disability, and premature mortality. They block or destroy disease-triggering molecules or cells in a targeted way. Despite many differences among the biologics, the response rate is nearly the same. We do not know why.
At present there is no way to predict the treatment effect in an individual patient, and hence no way to develop individualized therapies. That would be possible only by defining prognostic factors in a rigorously comparative, hierarchical study using biological markers. The benefit of a positive result: "tailor-made" therapies, to the advantage of patients but also of the health system, because drugs would not have to be used, pointlessly, on those they cannot help. But the approach also harbors a great danger, one that triggers fear and even helplessness: how do we deal with those for whom it is predicted that no therapy would work?

Josef Smolen, 59, internist and immunologist, heads the University Department of Internal Medicine III at the Medical University of Vienna (MUW), where his research focuses on inflammatory rheumatic diseases. Last year he was the MUW's most published professor.

Georg Wick: Dangerous vaccination opponents
In the twentieth and twenty-first centuries, humanity has gained as many years of average life expectancy as in the 10,000 years before. Vaccinations have contributed substantially to this. Opponents of vaccination, armed mostly with irrational arguments, campaign against the beneficial effects of this immunological success story. A "dangerous" discovery from our laboratory could, if wrongly interpreted, be invoked as evidence for these harmful activities.

The cells of all living beings -- from bacteria to humans -- produce so-called stress proteins when confronted with stress factors. First observed after the application of heat, these proteins are also known as heat-shock proteins (HSPs). The evolutionarily very old HSPs show great biochemical similarity in all cells. Owing to past infections and vaccinations, all humans possess immunity against microbial HSPs. When we expose our vascular system to classic arteriosclerosis risk factors such as high blood pressure, high cholesterol, smoking, and so on, the cells lining the inside of the arteries produce HSPs. Because human and bacterial HSPs are so similar, we must then "pay" for our existing protective immunity against bacterial HSPs with an immunological case of "mistaken identity" directed against our own vessels: a so-called autoimmune reaction. The "fault" here lies not with the immune system but with our unnatural way of life, which damages the vascular cells. On the basis of this discovery we are now trying, within an EU-funded project, to develop a vaccine against arteriosclerosis -- a beneficial project that vaccination opponents could mobilize against and endanger. A dangerous idea.

Georg Wick, 70, pathologist and aging researcher, headed the Institute of Pathophysiology at the University of Innsbruck until 2003 and was president of the Austrian Science Fund FWF from 2003 to 2005.

Clifford Pickover: Too many worlds, or too few
Our desire to experience entertaining virtual realities is growing rapidly. Since our understanding of the human brain is also advancing quickly, we will create fantasized realities on the one hand and, on the other, memories to support those illusions. One day, for example, it will be possible to simulate a journey to the Middle Ages and, to make the whole thing realistic, to ensure that you genuinely feel you are in the Middle Ages. One could implant false memories and let them, for a time, overlay the real ones. That should be easy enough, given that with the drug dimethyltryptamine (DMT) we can already induce the mind to generate richly appointed virtual worlds, full of magnificent castles and strange fabulous creatures. (…)

In the future, everyone will be able to have ten simulated lives. (…) If this ratio of one real life to ten simulated ones proves representative of human experience, it means that we are currently using only one chance in ten of really living in the present.

Clifford Pickover is a computer scientist at the IBM T. J. Watson Research Center. The latest of his many books is titled "Sex, Drugs, Einstein, and Elves: Sushi, Psychedelics, Parallel Universes, and the Quest for Transcendence."

Lawrence M. Krauss: Science that creates no knowledge
The ultimate goal of physics is often described as a "theory of everything" that would allow all the fundamental laws of nature to be printed on a T-shirt (even if such a T-shirt could exist only in ten dimensions). Since it has become accepted, however, that the dominant energy of the universe resides in empty space -- something so peculiar that it seems scarcely comprehensible within our current theoretical frameworks -- more and more physicists have been exploring the idea that physics may be an "environmental science": that the laws of nature we observe hold only contextually, and that there could be endlessly many different universes with entirely different laws of nature.

Lawrence M. Krauss, 55, is director of the Center for Education and Research in Cosmology and Astrophysics at Case Western Reserve University. His latest book is titled "Hiding in the Mirror: The Mysterious Allure of Extra Dimensions, from Plato to String Theory and Beyond."

Michael Freissmuth: Murder by poison
A close study of pharmaceuticals (good = medicine; bad = poison) reveals endless possibilities for poisoning people. Fantasies of omnipotence blossom: self-defense during a terrorist attack on an airplane, with a ballpoint-pen casing as a blowpipe for small curare-soaked needles; customer loyalty for drinks and foods, through manipulation of the brain's reward center with an addictive substance not yet covered by law; the perfect murder, with a poison that leaves no trace because it is a substance the body itself produces. A reality check shows that all of this already exists, or has existed, without any pharmacological consulting. The rapid injection of potassium chloride, for example, has already been practiced in a Viennese hospital. A service offering DNA traces from uninvolved persons to lay false trails -- worked into lipstick, say -- could be developed into a business idea: place them strategically on the letter claiming responsibility, hide the poison in a praline, and then actually administer it to the victim in coffee. Coffee is well suited for many reasons (opaque color, strong taste of its own, …). Hence a word of advice to anyone who fears death by poison: no coffee, red wine, whisky, cognac … Further tips: gladly, after appropriate sponsorship of research at the Institute of Pharmacology of the Medical University of Vienna.

Michael Freissmuth, 49, physician (doctorate sub auspiciis praesidentis), is a multiply honored pharmacologist. Research focus: signal processing in nerve and cancer cells and its modulation by drugs.

Jordan Pollack: Science as religion
We scientists like to think that we know something special. Instead of cultivating convictions based on faith in invisible almighty gods, or on handed-down parchments transcribed from oral cultures, we possess the scientific method for discovering and knowing. (…) That is why it is a very dangerous idea to regard science as merely another form of religion. (…) Recently, at a public congress on fundamental problems of modern technology, one speaker after another stressed the dangers of global warming: a sea-level rise of nearly twenty centimeters, flooded cities, more category-5 hurricanes, and so on. It was almost an inversion of the positivist promises of a techno-utopia -- wonderful advances in medicine, computing, and weaponry that would allow a great flowering of science in the late twentieth century. A friend pointed out to me that, before the introduction of PowerPoint, these speakers might have carried sandwich boards reading "The end is nigh!"

Jordan Pollack, 51, runs a research laboratory at Brandeis University devoted to the dynamic and evolutionary organization of machines.

Haim Harari: Democracy with an expiration date
Democracy may be facing its exit. Future historians may conclude that it was merely a one-century episode: it will disappear. This is a sad, truly dangerous, yet very realistic idea (or rather prognosis). Falling national borders, cross-border trade, merging economies, the instantaneous worldwide flow of information, and numerous other features of our modern society all contribute to multinational structures. If you extrapolate this irreversible trend, the whole planet emerges as a single political unit -- one in which, however, the anti-democratic forces form a clear majority. Already that majority is growing steadily, for demographic reasons. While all the democratic nations show weak, declining, or negative population growth, the anti-democratic and uneducated ones are multiplying rapidly. Moreover, in the former, most educated families stay small, whereas the least educated have very many children. (…)

Haim Harari, 68, theoretical physicist, was president of Israel's Weizmann Institute of Science and heads the founding committee of the institute of excellence I.S.T. Austria in Maria Gugging/Klosterneuburg.

The New York Times [10.5.09]

One of the oldest names in computing is joining the race to sequence the genome for $1,000. On Tuesday, I.B.M. plans to give technical details of its effort to reach and surpass that goal, ultimately bringing the cost to as low as $100, making a personal genome cheaper than a ticket to a Broadway play.

The project places I.B.M. squarely in the middle of an international race to drive down the cost of gene sequencing to help move toward an era of personalized medicine. The hope is that tailored genomic medicine would offer significant improvements in diagnosis and treatment.

I.B.M. already has a wide range of scientific and commercial efforts in fields like manufacturing supercomputers designed specifically for modeling biological processes. The company’s researchers and executives hope to use its expertise in semiconductor manufacturing, computing and material science to design an integrated sequencing machine that will offer advances both in accuracy and speed, and will lower the cost.

“More and more of biology is becoming an information science, which is very much a business for I.B.M.,” said Ajay Royyuru, senior manager for I.B.M.’s computational biology center at its Thomas J. Watson Laboratory in Yorktown Heights, N.Y.

DNA sequencing began at academic research centers in the 1970s, and the original Human Genome Project, completed in 2001, sequenced the first human genome at a cost of roughly $1 billion.

Since then, the field has accelerated. In the last four to five years, the cost of sequencing has been falling at a rate of tenfold annually, according to George M. Church, a Harvard geneticist. In a recent presentation in Los Angeles, Dr. Church said he expected the industry to stay on that curve, or some fraction of that improvement rate, for the foreseeable future.
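
As a back-of-envelope illustration of what that curve implies -- a sketch that assumes the low end of the cost range quoted below and a sustained tenfold annual decline, an extrapolation rather than anyone's promise -- the arithmetic to the $1,000 and $100 genome looks like this:

```python
import math

# Illustrative extrapolation of the tenfold-per-year cost decline described
# by Dr. Church. The $5,000 starting point is the low end of the range
# quoted in this article; none of this is an IBM or Harvard projection.

def years_to_reach(start_cost: float, target_cost: float, annual_factor: float = 10.0) -> float:
    """Years until cost falls from start_cost to target_cost at annual_factor per year."""
    return math.log(start_cost / target_cost, annual_factor)

start = 5_000.0
for target in (1_000.0, 100.0):
    years = years_to_reach(start, target)
    print(f"${target:,.0f} genome: about {years:.1f} years at 10x per year")
```

On those assumptions the $100 genome is under two years away, which is why Dr. Church's qualifier -- "or some fraction of that improvement rate" -- carries most of the uncertainty.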

At least 17 startup and existing companies are in the sequencing race, pursuing a range of third-generation technologies. Sequencing the human genome now costs $5,000 to $50,000, although Dr. Church emphasized that none of the efforts so far had been completely successful and no research group had yet sequenced the entire genome of a single individual.

The I.B.M. approach is based on what the company describes as a “DNA transistor,” which it hopes will be capable of reading individual nucleotides in a single strand of DNA as it is pulled through an atomic-size hole known as a nanopore. A complete system would consist of two fluid reservoirs separated by a silicon membrane containing an array of up to a million nanopores, making it possible to sequence vast quantities of DNA at once.

The company said the goal of the research was to build a machine that would have the capacity to sequence an individual genome of up to three billion bases, or nucleotides, “in several hours.” A system with this power and speed is essential if progress is to be made toward personalized medicine, I.B.M. researchers said.

At the heart of the I.B.M. system is a novel mechanism, something like nanoscale electric tweezers. This mechanism repeatedly pauses a strand of DNA, which is naturally negatively charged, as an electric field pulls the strand through a nanopore, an opening just three nanometers in diameter. A nanometer, one one-billionth of a meter, is approximately one eighty-thousandth the width of a human hair.

The I.B.M. researchers said they had successfully used a transmission electron microscope to drill a hole through a semiconductor device intended to “ratchet” the DNA strand through the opening, stopping for perhaps a millisecond to read each of the four nucleotide bases — adenine, guanine, cytosine, or thymine — that make up the DNA molecule. The I.B.M. team said that the project, which began in 2007, could now reliably pull DNA strands through nanopore holes, but that the sensing technology to control the rate of movement and to read the specific bases had yet to be demonstrated.
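
The quoted figures permit a rough plausibility check. In the sketch below, the genome size, pore count, and millisecond-per-base pause come from this article; the coverage factor and the fraction of pores usefully reading at any moment are assumptions of mine, chosen only to show how an ideal seconds-scale run stretches toward the "several hours" I.B.M. targets:

```python
# Back-of-envelope throughput for the nanopore array described above.
# GENOME_BASES, NUM_PORES, and SECONDS_PER_BASE are from the article;
# COVERAGE and PORE_EFFICIENCY are illustrative assumptions, not IBM figures.

GENOME_BASES = 3e9        # "three billion bases" in an individual genome
NUM_PORES = 1e6           # "an array of up to a million nanopores"
SECONDS_PER_BASE = 1e-3   # "stop for perhaps a millisecond" per base
COVERAGE = 30             # assumed read redundancy needed for accuracy
PORE_EFFICIENCY = 0.01    # assumed fraction of pores productively reading

bases_per_pore = GENOME_BASES * COVERAGE / (NUM_PORES * PORE_EFFICIENCY)
hours = bases_per_pore * SECONDS_PER_BASE / 3600
print(f"{bases_per_pore:,.0f} bases per pore, about {hours:.1f} hours")
```

With perfect pore utilization and single-pass coverage the same arithmetic gives about three seconds, so nearly all of the real running time hides in exactly the control and sensing problems the researchers say remain unsolved.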

Despite the optimism of the I.B.M. researchers, an independent scientist noted that various approaches to nanopore-based sequencing had been tried for years, with only limited success.

“DNA strands seem to have a mind of their own,” said Elaine R. Mardis, co-director of the genome center at Washington University in St. Louis, noting that DNA often takes on conformations other than a straight rod as it passes through a nanopore.

Dr. Mardis also said previous efforts to create uniform silicon-based nanopore sensors had been disappointing.

One of the crucial advances needed to improve the quality of DNA analysis is to be able to read longer sequences. Current technology is generally in the range of 30 to 800 nucleotides, while the goal is to be able to read sequences of as long as one million bases, according to Dr. Church, who spoke in July at a forum sponsored by Edge.org, a nonprofit online science forum.

Other approaches to faster, cheaper sequencing include a biological nanopore approach being pursued by Oxford Nanopore Technologies, a start-up based in England, and an electron microscopy-based system being designed by Halcyon Molecular, a low-profile Silicon Valley start-up that has developed a technique for stretching single strands of DNA laid out on a thin carbon film. The company may be able to image strands as long as one million base pairs, said Dr. Church, who is an adviser to the company, and to several others.

“To bring about an era of personalized medicine, it isn’t enough to know the DNA of an average person,” said Gustavo Stolovitzky, an I.B.M. biophysicist, who is one of the researchers who conceived of the I.B.M. project. “As a community, it became clear we need to make efforts to sequence in a way that is fast and cheap.”

NEWSWEEK [9.6.09]

Language may shape our thoughts.

When the Viaduc de Millau opened in the south of France in 2004, this tallest bridge in the world won worldwide accolades. German newspapers described how it "floated above the clouds" with "elegance and lightness" and "breathtaking" beauty. In France, papers praised the "immense" "concrete giant." Was it mere coincidence that the Germans saw beauty where the French saw heft and power? Lera Boroditsky thinks not.

A psychologist at Stanford University, she has long been intrigued by an age-old question whose modern form dates to 1956, when linguist Benjamin Lee Whorf asked whether the language we speak shapes the way we think and see the world. If so, then language is not merely a means of expressing thought, but a constraint on it, too. Although philosophers, anthropologists, and others have weighed in, with most concluding that language does not shape thought in any significant way, the field has been notable for a distressing lack of empiricism—as in testable hypotheses and actual data.

That's where Boroditsky comes in. In a series of clever experiments guided by pointed questions, she is amassing evidence that, yes, language shapes thought. The effect is powerful enough, she says, that "the private mental lives of speakers of different languages may differ dramatically," not only when they are thinking in order to speak, "but in all manner of cognitive tasks," including basic sensory perception. "Even a small fluke of grammar"—the gender of nouns—"can have an effect on how people think about things in the world," she says.

As in that bridge. In German, the noun for bridge, Brücke, is feminine. In French, pont is masculine. German speakers saw prototypically female features; French speakers, masculine ones. Similarly, Germans describe keys (Schlüssel) with words such as hard, heavy, jagged, and metal, while to Spaniards keys (llaves) are golden, intricate, little, and lovely. Guess which language construes key as masculine and which as feminine? Grammatical gender also shapes how we construe abstractions. In 85 percent of artistic depictions of death and victory, for instance, the idea is represented by a man if the noun is masculine and a woman if it is feminine, says Boroditsky. Germans tend to paint death as male, and Russians tend to paint it as female.

Language even shapes what we see. People have a better memory for colors if different shades have distinct names — not English's light blue and dark blue, for instance, but Russian's goluboy and siniy. Skeptics of the language-shapes-thought claim have argued that that's a trivial finding, showing only that people remember what they saw in both a visual form and a verbal one, but not proving that they actually see the hues differently. In an ingenious experiment, however, Boroditsky and colleagues showed volunteers three color swatches and asked them which of the bottom two was the same as the top one. Native Russian speakers were faster than English speakers when the colors had distinct names, suggesting that having a name for something allows you to perceive it more sharply. Similarly, Korean uses one word for "in" when one object is in another snugly (a letter in an envelope), and a different one when an object is in something loosely (an apple in a bowl). Sure enough, Korean adults are better than English speakers at distinguishing tight fit from loose fit.

THE NEW YORKER [8.23.09]

ABSTRACT: LETTER FROM CALIFORNIA about Elon Musk and electric cars. In a dressing room above the “Late Show with David Letterman” stage, the electric-car magnate Elon Musk sat on a sofa, eating cookies. Musk, thirty-eight, is the chairman, C.E.O., and product architect of Tesla Motors, and he was appearing on Letterman to show off the company’s newest design: a sleek sedan called the Model S. After co-founding the Internet start-ups Zip2 and PayPal in his twenties, Musk launched the Space Exploration Technologies Corporation (SpaceX) in 2002, with the ultimate goal of colonizing Mars; the company recently won a $1.6-billion contract with NASA to resupply the Space Station. In 2004 he provided Tesla with its initial funding, in the belief that electric vehicles, or E.V.s, together with solar power, will help wean the world off oil. For decades, E.V.s resembled hovercraft or mobile eggs, and their lead-acid battery packs were costly and sickly. Last fall, Tesla began making the only highway-capable E.V. now available: the Roadster, a $109,000 sports car that goes from zero to sixty in less than four seconds and has a range of two hundred and forty-four miles. Powered by a lithium-ion battery, the Roadster was designed to prove that E.V.s can not just compete but excel. Musk plans to cut the price of each of Tesla’s succeeding models more or less in half and seize the market from the top down. The Roadster can be found in the garages of George Clooney, Matt Damon, Leonardo DiCaprio, and David Letterman. Musk was born in Pretoria, South Africa, and attended Queen’s University in Ontario and the University of Pennsylvania. In 2004, Musk met the engineer Martin Eberhard, who proposed to build a sports car with a lithium-ion battery; Musk agreed to underwrite the company. It took four and a half years and $140 million. With new-car sales in America expected to fall to ten million this year, the big automakers are developing E.V.s of their own. Mentions Ford, Chrysler, Renault, Nissan, and Mitsubishi. Describes Musk’s appearance on Letterman’s show. In late March, Tesla unveiled the Model S at a cocktail party at SpaceX’s headquarters, south of L.A. Mentions Anthony Kiedis, Rick Rubin, and Gov. Arnold Schwarzenegger. For now, if you plug your Roadster into one of Tesla’s seventy-ampere wall boxes (which the company will install in your garage for $3,000), it takes nearly four hours to recharge the car completely. Mark Duvall, of the Electric Power Research Institute, says, of the cost of building charging stations across the country, “If you want to use E.V.s to drive between cities, we’re probably into the hundreds of billions.” Shai Agassi, an Israeli software whiz who runs the company Better Place, plans to build a huge charging network. In early April, Tesla threw a cocktail party at the National Building Museum, in Washington, D.C., to show off the Model S to congressmen and other government officials. Mentions Diarmuid O’Connell, Sen. Maria Cantwell, Sen. Jeff Bingaman, and Mike Carr. Sen. Cantwell acknowledged that there’s no political will in Congress to prod consumers to switch to electric by taxing gasoline heavily, as is done in Europe.

 

Read more http://www.newyorker.com/reporting/2009/08/24/090824fa_fact_friend#ixzz17Ok8X2Wl

http://search.barnesandnoble.com/Whats-Next/Max-Brockman/e/9780307389312/? [8.18.09]

Nearly impossible to put down: engaging original essays from brilliant young scientists on their work — and its fascinating social, ethical, and philosophical implications.

Ed Regis, FRANKFURTER ALLGEMEINE ZEITUNG [8.14.09]

The New York literary agent John Brockman and his Edge Foundation recently convened an extraordinary conference on the future in Los Angeles. Two prominent scientists, George Church, a molecular geneticist at Harvard, and Craig Venter, a pioneer of the decoding of the human genome, spoke about synthetic genomics. The invitation-only meeting drew some twenty members of America's technology elite, among them Larry Page, co-founder of Google; Nathan Myhrvold, formerly Chief Technology Officer of Microsoft; and Elon Musk, founder of PayPal and director of SpaceX, a private rocket-building and space-research company housed in an enormous building not far from the Los Angeles airport.

The first day of the meeting took place on the SpaceX premises, where, incidentally, the Tesla electric car is also produced. Synthetic genomics, the subject of the conference, is essentially genetic engineering on a grand scale: the partial or complete replacement of an organism's natural genetic material with synthetic genetic material.

The specter of the "biohackers"

The field is expected to deliver a host of biotechnological breakthroughs: bacteria programmed to convert coal into biogas, for example, or microbes that produce kerosene. With yet other techniques, scientists hope to bring extinct creatures back to life, such as the woolly mammoth, perhaps even the Neanderthal.

Naturally, the specter of "biohackers" producing new pathogens also came up. But genome researchers are almost by necessity optimists. George Church, speaking on his specialty, "Human Genetics 2.0," explained to his listeners how genetic material, DNA, can be programmed. Just as sequencing machines can decipher the natural order of a DNA molecule, machines can create components of deliberately engineered DNA that, once inserted into a cell, alter its normal behavior. Malignant tumors, for example, attract many bacterial cells. By precisely manipulating the bacterial genome, one can create cancer-fighting microbes: organisms that attack the tumor by penetrating the growth and releasing synthetically produced toxins there.
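Church's computer analogy can be made literal as a toy. Here is a purely conceptual Python sketch of the sense-and-respond logic just described; every name and threshold is invented, and it models the behavior of such an engineered cell, not the biology:

```python
# Purely conceptual toy, not biology: the engineered bacterium Church
# describes behaves like a tiny program -- sense a tumor-specific signal,
# then switch on a toxin gene. All names and thresholds are invented.

def engineered_microbe(tumor_signal: float, threshold: float = 0.8) -> str:
    """Return the cell's 'behavior' given a sensed tumor-signal level."""
    if tumor_signal >= threshold:   # promoter activated inside the tumor
        return "express toxin"      # attack the growth from within
    return "stay dormant"           # normal tissue: do nothing

print(engineered_microbe(0.95))  # express toxin
print(engineered_microbe(0.10))  # stay dormant
```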

"Personalized" mice

Church and his Harvard team have by now programmed bacteria that perform each of these functions separately, but they have not yet managed to assemble them into a fully organized system. Still: "We are not far from the point where we can program these cells almost like computers," Church said.

Tumor-killing microbes, however, are only one of the many marvels under development in Church's labs. Another project is the prospect of "personalized" mice: mammals injected with segments of an individual human's DNA so that they produce antibodies that this particular patient will not reject.

Resistant to conventional enzymes, parasites, and pathogens

And what to make of synthetic organisms that would be resistant to an entire class of natural viruses? There are two approaches. One is to construct DNA that is the mirror image of natural DNA. Like many biological and chemical substances, DNA exhibits chirality (handedness); that is, it exists in a left- or right-handed helical structure. In nature, each class of biological molecule comes in just one fixed handedness. By artificially creating DNA of the opposite handedness, one could produce synthetic organisms whose DNA is the mirror image of the original. These would be resistant to conventional enzymes, parasites, and pathogens, because the mirror-image version of their DNA would not be recognized. Such synthetic organisms would be part of an entirely new "mirror world."

Church is also the founder and director of the Personal Genome Project, whose goal is to sequence the genomes of a hundred thousand volunteers and usher in an era of personalized medicine. Unlike the long-standard combination of one-size-fits-all pills and therapies, medicine will in future be tailored to the individual like a made-to-measure suit.

"Establishing life on other planets"

Toward the end of the first day, Elon Musk, a charismatic like no other, presented a genome modification of a different kind. While a video of the launch of his Falcon 1 rocket from the Kwajalein Atoll in the South Pacific played in the background, he spoke about transplanting the human species to other planets. This goal might have been dismissed as unrealistic, had a Falcon 1 not lifted off on July 13, shortly before the conference, carrying the Malaysian satellite RazakSat into Earth orbit. SpaceX had already won a NASA contract for resupply flights to the International Space Station.

Like a ruler leading his subjects, Musk then guided the conference participants through the production facilities of his spaceflight company. We saw the area where the engine is built, components of the launch vehicle, the control center, and a specimen of the Dragon spacecraft, a capsule for carrying cargo or people to the Space Station. "All of this serves the goal of establishing life on other planets," Musk said.

Reducing organisms to a minimum of genes

The second day belonged to J. Craig Venter, pioneer of the private human genome project and founder of Synthetic Genomics Inc., an organization devoted to commercializing genetic technologies. One of the challenges of synthetic genomics is to reduce organisms to the minimum set of genes necessary for life, what Venter calls "reductionist biology." The basic question is whether new life can be created by combining the smallest number of vital genetic components.

Venter uses brewer's yeast, which is capable of assembling DNA fragments into functional chromosomes. He described an experiment in which twenty-five synthetic DNA components were produced and introduced into a yeast cell, which joined them into a chromosome. The crucial point was to construct the DNA pieces so that the organism could assemble them correctly, as sketched below. Venter found that genes were easy to manipulate in yeast: he could insert genes, remove them, and create a yeast with new properties. In August 2007 he radically transformed the individuals of a species: he removed a chromosome from each cell, transplanted it, and created something entirely new. "By changing the software, the old organism was completely eliminated and a new one created," Venter said.
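Constructing the pieces "so that the organism could assemble them correctly" means giving consecutive fragments matching ends: each fragment begins with the short sequence that ends its predecessor, so the pieces can only join in one order. A minimal Python sketch of that ordering principle, with invented fragments and an invented four-base overlap (real assemblies use far longer fragments and overlaps, and the joining is done biochemically by the yeast, not by an algorithm):

```python
# Toy illustration of overlap-directed assembly: chain fragments whose
# last k bases match the next fragment's first k bases, merging the
# shared region once. Fragments and overlap length are invented.

OVERLAP = 4
fragments = ["ATGCCGTA", "CGTAGGAT", "GGATTTCA", "TTCAGGCC"]

def assemble(frags, k):
    """Greedily join fragments end-to-end via k-base overlaps."""
    chain = frags[0]
    remaining = list(frags[1:])
    while remaining:
        for frag in remaining:
            if chain[-k:] == frag[:k]:  # ends match: this piece comes next
                chain += frag[k:]       # append only the non-shared part
                remaining.remove(frag)
                break
        else:
            raise ValueError("no fragment overlaps the current end")
    return chain

print(assemble(fragments, OVERLAP))  # ATGCCGTAGGATTTCAGGCC
```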

"The software builds its own hardware"

Venter and his research team also created a synthetic DNA copy of the PhiX bacteriophage, a small bacterial parasite harmless to humans. Inserted into an E. coli bacterium, the cell produced the necessary proteins and assembled them into a new bacterial virus, which in turn destroyed the cell from which it had emerged. And all of this, Venter said, happened automatically inside the cell: "The software builds its own hardware."

These and other genomic creations, transformations, and destructions raised the question of how safe we are from the nightmare of genetically engineered bacteria escaping the laboratory and wreaking havoc on the world. To prevent that, Venter said, an organism can be equipped with "suicide genes", that is, given a chemical dependency so that it cannot survive outside the laboratory; beyond the lab walls, these artificial cells would simply die.

"It is not hard to get algae to produce oil"

If so, that would be good news, for Venter and his team, with research funding from Exxon Mobil, are currently setting up an algae farm of five to seven square kilometers in which reprogrammed algae will produce biofuel. "It is not hard to get algae to produce oil," Venter said. "The quantity is the problem." To serve as practical energy suppliers, algae farms must be large, and that makes them expensive. Algae, however, have the advantage of consuming carbon dioxide and using sunlight as their energy source. Potentially, then, we have living solar cells that eat carbon dioxide and produce new hydrocarbons, the fuel, in the process.

Church had the last word with his project "Engineering Humans 2.0." Human beings, he argued, are limited in many respects: in their capacity for concentration and memory, by the shortness of life, and so on. With the help of genetic engineering, all these shortcomings and limitations could be corrected. The common laboratory mouse has a lifespan of two and a half years; the naked mole rat, by contrast, reaches the ripe age of twenty-five. It should be possible to find the genes that contribute to the mole rat's longevity, and by inserting those genes into the laboratory mouse, gradually to extend its lifespan.

"Why would one want to revive the Neanderthal?"

One could proceed analogously with humans, extending lifespan and improving memory, but the question is whether that would be wise. There are always trade-offs, Church said. People can be engineered to have larger and stronger bones, but only at the price of becoming bulkier and clumsier. At a conference devoted to limitless feasibility, these words were a welcome note of caution.

But then he explained that it would probably be possible, through targeted manipulation of the elephant genome, to bring the woolly mammoth back into existence. And through similar manipulation of the chimpanzee genome, one might be able to bring the Neanderthal back to life.

"Why would one want to revive the Neanderthal?" a guest asked.

"To create a relative who gives us a new perspective on ourselves," Church answered. Humanity, he said, is a monoculture, and monocultures are biologically vulnerable. His answer did not convince everyone present. "We already have enough Neanderthals in Washington," Craig Venter called out, and with that remark the conference came to an end.

Translated from the English by Matthias Fienbork.

Ed Regis's most recent book is "What is Life? Investigating the Nature of Life in the Age of Synthetic Biology" (Oxford University Press).
