THE AGE OF THE INFORMAVORE (*) [10.27.09]
A Talk With Frank Schirrmacher





(*The term informavore characterizes an organism that consumes information. It is meant as a description of human behavior in the modern information society, by analogy with omnivore as a description of humans consuming food.)

THE REALITY CLUB: Daniel Kahneman, George Dyson, Jaron Lanier, Nick Bilton, Nick Carr, Douglas Rushkoff, Jesse Dylan, Virginia Heffernan, Gerd Gigerenzer, John Perry Barlow, Steven Pinker, John Bargh, George Dyson, Annalena McAfee, John Brockman, David Gelernter, Evgeny Morozov

INTRODUCTION

The most significant intellectual development of the first decade of the 21st Century is that concepts of information and computation have infiltrated a wide range of sciences, from physics and cosmology, to cognitive psychology, to evolutionary biology, to genetic engineering. Such innovations as the binary code, the bit, and the algorithm have been applied in ways that reach far beyond the programming of computers, and are being used to understand such mysteries as the origins of the universe, the operation of the human body, and the working of the mind.

Enter Frank Schirrmacher, Editorial Director of the editorial staff of the FAZ Feuilleton, a supplement of the FAZ on the arts and sciences. He is also one of the five publishers of the newspaper, responsible for the Feuilleton, and he has actively expanded science coverage in this section. He has been referred to as Germany's "Culture Czar", which may seem over the top, but his cultural influence is undeniable. He can, and does, begin national discussions on topics and ideas that interest him, such as genomic research, neuroscience, and aging, and, in this regard, he has the ability to reshape the national consciousness.

I can provide a first-hand account of "the Schirrmacher treatment".

In May of 2000, he published a manifesto in FAZ, a call to arms entitled "Wake-Up Call for Europe Tech", in which he called for Europe to adopt the ideas of the third culture. His goal: to change the culture of the newspaper and to begin a process of change in Germany and Europe. "Europe should be more than just a source for the software of ego crisis, loss of identity, despair, and Western melancholy," he wrote. "We should be helping write the code for tomorrow."

The Manifesto, and Schirrmacher's publishing program, was a departure for FAZ, which has a somewhat conservative profile, and it was widely covered in the German press and made waves in intellectual circles. And a decade later, the national conversation continues. (See this week's Stuttgarter Zeitung.)

Within weeks of the publication of his manifesto, Schirrmacher began publishing articles by notable third culture thinkers such as Bill Joy, Ray Kurzweil, V.S. Ramachandran, Patrick Bateson, James Watson, Craig Venter, and David Gelernter. Soon after, he devoted an entire edition of the Feuilleton to a printout of the Human Genome code published by Craig Venter, which caused a sensation in Germany.

Then came 9/11. And everything changed. Schirrmacher was on to the next story.

I hadn't heard from him in a while when in July he emailed me from Vietnam. He was thinking about, and researching, ideas concerning the effect of new information technologies on human knowledge, on how the Internet is modifying our cognitive structures, and on how we can begin to understand the cultural changes happening in today's technology/knowledge interface. He asked for my help regarding questions he had for George Dyson, Danny Hillis, Richard Thaler, and Larry Page.

With Schirrmacher, once he's obsessed with an idea to which you may happen to be connected, expect to be kept very busy. Dozens of emails went back and forth until, finally, I caught up with him in person in October for a conversation in his Frankfurt office.

So, what are the questions Schirrmacher is asking himself?

He is interested in George Dyson's comment "What if the price of machines that think is people who don't?" He is looking at how the modification of our cognitive structures is a process that eventually blends machines and humans in a deeper way, more than any human-computer interface could possibly achieve. He's also fascinated by an idea presented a decade ago by Danny Hillis: "In the long run, the Internet will arrive at a much richer infrastructure, in which ideas can potentially evolve outside of human minds."

We discussed his notion that computer platforms can be seen as socio-biological systems which repeat three of the major concepts of the 19th century on an individual level: Taylorism (multitasking), Marxism (free content and copyright) and Darwinism (search algorithms and information foraging). "The Darwinian perspective is the most interesting," he says. "Information being an advantage for the informavores, and software that codes it with cues from the foraging habits of prehistoric man."

JB

[ED. NOTE: The conversation was in English, Schirrmacher's second language. Rather than edit the piece for grammar, and risk losing the spontaneity of the conversation, I present it here — for the most part — verbatim.]

FRANK SCHIRRMACHER is an influential German journalist, essayist, best-selling author, and, since 1994, co-publisher of the leading national German newspaper Frankfurter Allgemeine Zeitung (FAZ), where he is Editor of the Feuilleton, the cultural and science pages of the paper. He is the author of Das Methusalem-Komplott (The Methuselah Conspiracy), a book on Germany's aging society that has been published in 14 languages and has sold more than one million copies in Germany; and Payback: Warum wir im Informationszeitalter gezwungen sind zu tun, was wir nicht tun wollen, und wie wir die Kontrolle über unser Denken zurückgewinnen (Payback: Why in the Information Age we are forced to do what we do not want to do and how we can recover control over our thinking; November, Karl Blessing Verlag).

Frank Schirrmacher's Edge Bio Page


THE AGE OF THE INFORMAVORE

[FRANK SCHIRRMACHER:] The question I am asking myself arose through work and through discussion with other people, and especially watching other people, watching them act and behave and talk, was how technology, the Internet and the modern systems, has now apparently changed human behavior, the way humans express themselves, and the way humans think in real life. So I've profited a lot from Edge.

We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, it's just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on. There is one comment on Edge which I love, which is in Daniel Dennett's response to the 2007 annual question, in which he said that we have a population explosion of ideas, but not enough brains to cover them.

As we know, information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know — this is the old Darwinian thought, the moment when Darwin started reading Malthus — when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations. And so what interests me is that we are, because we have the Internet, now entering a phase where Darwinian structures, where Darwinian dynamics, Darwinian selection, apparently attacks ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker.

Here European thought is quite interesting, our whole history of thought, especially in the eighteenth, nineteenth, and twentieth centuries, starting from Kant to Nietzsche. Hegel for example, in the nineteenth century, where you said which thought, which thinking succeeds and which one doesn't. We have phases in the nineteenth century, where you could have chosen either way. You could have gone the way of Schelling, for example, the German philosopher, which was totally different to that of Hegel. And so this question of what survives, which idea survives, and which idea drowns, which idea starves to death, is something which, in our whole system of thought, is very, very known, and is quite an issue. And now we encounter this structure, this phenomenon, in everyday thinking.

It's the question: what is important, what is not important, what is important to know? Is this information important? Can we still decide what is important? And it starts with this absolutely normal, everyday news. But now you encounter, at least in Europe, a lot of people who think, what in my life is important, what isn't important, what is the information of my life. And some of them say, well, it's in Facebook. And others say, well, it's on my blog. And, apparently, for many people it's very hard to say it's somewhere in my life, in my lived life.



Of course, everybody knows we have a revolution, but we are now really entering the cognitive revolution of it all. In Europe, and in America too — and it's not by chance — we have a crisis of all the systems that somehow are linked to either thinking or to knowledge. It's the publishing companies, it's the newspapers, it's the media, it's TV. But it's as well the university, and the whole school system, where it is not a normal crisis of too few teachers, too many pupils, or whatever; too small universities; too big universities.

Now, it's totally different. When you follow the discussions, there's the question of what to teach, what to learn, and how to learn. Even for universities and schools, suddenly they are confronted with the question how can we teach? What is the brain actually taking? Or the problems which we have with attention deficit and all that, which are reflections and, of course, results, in a way, of the technical revolution?

Gerd Gigerenzer, to whom I talked and who I find a fascinating thinker, put it in such a way that thinking itself somehow leaves the brain and uses a platform outside of the human body. And that's the Internet and it's the cloud. And very soon we will have the brain in the cloud. And this raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else.

The European point of view, with our history of thought, and all our idealistic tendencies, is that now you can see — because they didn't know that the Internet would be coming, in the fifties or sixties or seventies — that the whole idea of the Internet somehow was built in the brains, years and decades before it actually was there, in all the different sciences. And when you see how the computer — Gigerenzer wrote a great essay about that — how the computer at first was somehow isolated, it was in the military, in big laboratories, and so on. And then the moment the computer, in the seventies and then of course in the eighties, was spread around, and every doctor, every household had a computer, suddenly the metaphors that were built in the fifties, sixties, seventies, then had their triumph. And so people had to use the computer. As they say, the computer is the last metaphor for the human brain; we don't need any more. It succeeded because the tool shaped the thought when it was there, but all the thinking, like in brain sciences and all the others, had already happened, in the sixties, seventies, fifties even.

But the interesting question is, of course, the Internet — I don't know if they really expected the Internet to evolve the way it did — I read books from the nineties, where they still don't really know that it would be as huge as it is. And, of course, nobody predicted Google at that time. And nobody predicted the Web.

Now, what I find interesting is that if you see the computer and the Web, and all this, under the heading of "the new technologies," we have, in the late nineteenth century, this big discussion about the human motor. The new machines in the late nineteenth century required that the muscles of the human being should be adapted to the new machines. Especially in Austria and Germany, we have this new thinking, where people said, first of all, we have to change muscles. The term "calories" was invented in the late nineteenth century, in order to optimize the human work force.

Now, in the twenty-first century, you have all the same issues, but now with the brain, what was the adaptation of muscles to the machines, now under the heading of multitasking — which is quite a problematic issue. The human muscle in the head, the brain, has to adapt. And, as we know from just very recent studies, it's very hard for the brain to adapt to multitasking, which is only one issue. And again with calories and all that. I think it's very interesting, the concept — again, Daniel Dennett and others said it — the concept of the informavores, the human being as somebody eating information. So you can, in a way, see that the Internet and that the information overload we are faced with at this very moment has a lot to do with food chains, has a lot to do with food you take or not to take, with food which has many calories and doesn't do you any good, and with food that is very healthy and is good for you.

The tool is not only a tool, it shapes the human who uses it. We always have the concept, first you have the theory, then you build the tool, and then you use the tool. But the tool itself is powerful enough to change the human being. God as the clockmaker, I think you said. Then in the Darwinian times, God was an engineer. And now He, of course, is the computer scientist and a programmer. What is interesting, of course, is that the moment neuroscientists and others used the computer, the tool of the computer, to analyze human thinking, something new started.

The idea that thinking itself can be conceived in technical terms is quite new. Even in the thirties, of course, you had all these metaphors for the human body, even for the brain; but, for thinking itself, this was very, very late. Even in the sixties, it was very hard to say that thinking is like a computer.

You had once in Edge, years ago, a very interesting talk with Patty Maes on "Intelligence Augmentation" when she was one of the first who invented these intelligent agents. And there, you and Jaron Lanier, and others, asked the question about the concept of free will. And she explained it and it wasn't that big an issue, of course, because it was just intelligent agents like the ones we know from Amazon and others. But now, entering real-time Internet and all the other possibilities in the near future, the question of predictive search and others, of determinism, becomes much more interesting. The question of free will, which always was a kind of theoretical question — even very advanced people said, well, we declare there is no such thing as free will, but we admit that people, during their childhood, will have been culturally programmed so they believe in free will.


But now, when you have a generation — in the next evolutionary stages, the child of today — which are adapted to systems such as the iTunes "Genius", which not only know which book or which music file they like, and which goes farther and farther in predicting certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it.

What will this mean for the question of free will? Because, in the bottom line, there are, of course, algorithms that analyze or calculate certain predictabilities. And I'm wondering if the comfort of free will or not free will would be a very, very tough issue of the future. At this very moment, we have a new government in Germany; they are just discussing what kind of effect this will have on politics. And one of the issues, which of course at this very moment seems to be very isolated, is the question how to predict certain terroristic activities, which they could use, from blogs — as you know, in America, you have the same thing. But this can go farther and farther.

The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives. And, of course, this will play out in the work force — the new German government seems to be very keen on this issue, to at least prevent the worst impact on people, on workplaces.
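
As a concrete, if simplified, illustration of what "algorithms that calculate certain predictabilities" can look like, here is a minimal collaborative-filtering sketch in Python. It guesses how much a listener would like an album she has never rated, purely from the ratings of people whose past behavior resembles hers. The ratings table, the names, and the cosine weighting are invented for illustration; commercial recommenders such as iTunes "Genius" are proprietary and far more elaborate.

    import math

    # Toy ratings: listeners -> {album: score}, 0 means "not rated".
    # All data below is invented for illustration.
    ratings = {
        "anna":  {"album_a": 5, "album_b": 4, "album_c": 0},
        "ben":   {"album_a": 4, "album_b": 5, "album_c": 2},
        "clara": {"album_a": 1, "album_b": 2, "album_c": 5},
    }

    def cosine(u, v):
        """Cosine similarity between two rating dicts, over items both have rated."""
        common = [i for i in u if i in v and u[i] and v[i]]
        if not common:
            return 0.0
        dot = sum(u[i] * v[i] for i in common)
        norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
        norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
        return dot / (norm_u * norm_v)

    def predict(user, item):
        """Estimate `user`'s rating for `item` from similar users' ratings."""
        numerator = denominator = 0.0
        for other, theirs in ratings.items():
            if other == user or not theirs.get(item):
                continue
            weight = cosine(ratings[user], theirs)
            numerator += weight * theirs[item]
            denominator += abs(weight)
        return numerator / denominator if denominator else None

    # Anna has never rated album_c; the system "decides" for her.
    print(round(predict("anna", "album_c"), 2))

Real systems replace this toy table with behavioral traces collected at scale, which is exactly the point being made here: the prediction is computed outside the head of the person it concerns.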

It's very important to stress that we are not talking about cultural pessimism. What we are talking about is that a new technology which is in fact a technology which is a brain technology, to put it this way, which is a technology which has to do with intelligence, which has to do with thinking, that this new technology now clashes in a very real way with the history of thought in the European way of thinking.

Unlike America, as you might know, in Germany we had a party for the first time in the last elections which totally comes out of the Internet. They are called The Pirates. In their beginning they were computer scientists concerned with questions of copyright and all that. But it's now much, much more. In the recent election, out of the blue, they received two percent of the votes, which is a lot for a new party which only exists on the Internet. And the voters were mainly 30, 40, 50 percent young males. Many, many young males. They're all very keen on new technologies. Of course, they are computer kids and all that. But this party, now, for the first time, reflects the way which we know, theoretically, in a very pragmatic and political way. For example, one of the main issues, as I just described, the question of the adaptation of muscles to modern systems, either in the brain or in the body, is a question of the digital Taylorism.

As far as we can see, I would say, we have three important concepts of the nineteenth century, which somehow come back in a very personalized way, just like you have a personalized newspaper. This is Darwinism, the whole question. And, in a very real sense, look at the problem with Google and the newspapers. Darwinism, but as well the whole question of who survives in the net, in the thinking; who gets more traffic; who gets less traffic, and so. And then you have the concept of communism, which comes back to the question of free, the question that people work for free. And not only those people who sit at home and write blogs, but also many people in publishing companies, newspapers, do a lot of things for free or offer them for free. And then, third, of course, Taylorism, which is a non-issue, but we now have the digital Taylorism, but with an interesting switch. At least in the nineteenth century and the early twentieth century, you could still make others responsible for your own deficits in that you could say, well, this is just really terrible, it's exhausting, and it's not human, and so on.

Now, look at the concept, for example, of multitasking, which is a real problem for the brain. You don't think that others are responsible for it, but you meet many people who say, well, I am not really good at it, and it's my problem, and I forget, and I am just overloaded by information. What I find interesting is that three huge political concepts of the nineteenth century come back in a totally personalized way, and that we now, for the first time, have a political party — a small political party, but it will in fact influence the other parties — who address this issue, again, in this personalized way.

It's a kind of catharsis, this Twittering, and so on. But now, of course, this kind of information conflicts with many other kinds of information. And, in a way, one could argue — I know that was the case with Iran — that maybe the future will be that the Twitter information about an uproar in Iran competes with the Twitter information of Ashton Kutcher, or Paris Hilton, and so on. The question is to understand which is important. What is important, what is not important is something very linear, it's something which needs time, at least the structure of time. Now, you have simultaneity, you have everything happening in real time. And this impacts politics in a way which might be considered for the good, but also for the bad.

Because suddenly it's gone again. And the next piece of information, and the next piece of information — and if now — and this is something which, again, has very much to do with the concept of the European self, to take oneself seriously, and so on — now, as Google puts it, they say, if I understand it rightly, in all these webcams and cell phones — are full of information. There are photos, there are videos, whatever. And they all should be, if people want it, shared. And all the thoughts expressed in any university, at this very moment, there could be thoughts we really should know. I mean, in the nineteenth century, it was not possible. But maybe there is one student who is much better than any of the thinkers we know. So we will have an overload of all these information, and we will be dependent on systems that calculate, that make the selection of this information.

And, as far as I can see, political information somehow isn't distinct from it. It's the same issue. It's a question of whether I have information from my family on the iPhone, or whether I have information about our new government. And so this incredible amount of information somehow becomes equal, and very, very personalized. And you have personalized newspapers. This will be a huge problem for politicians. From what I hear, they are now very interested in, for example, Google's page rank; in the question how, with mathematical systems, you can, for example, create information cascades as a kind of artificial information overload. And, as you know, you can do this. And we are just not prepared for that. It's not too early. In the last elections we, for the first time, had blogs, where you could see they started to create information cascades, not only with human beings, but as well with BOTs and other stuff. And this is, as I say, only the beginning.
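
Since Schirrmacher points to Google's page rank as the emblem of such "mathematical systems," a minimal sketch of the published PageRank idea may help: a page is important to the extent that other important pages link to it, and the scores are found by repeatedly redistributing attention along links until they settle. The link graph, page names, and damping factor below are invented for illustration; this is the textbook formulation, not Google's production ranking.

    # Minimal PageRank by power iteration. Everything here is a toy example.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                if not outgoing:  # a page with no out-links shares its score evenly
                    share = damping * rank[page] / len(pages)
                    for p in pages:
                        new_rank[p] += share
                else:
                    share = damping * rank[page] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Hypothetical link graph: who links to whom.
    links = {
        "blog":  ["news", "video"],
        "news":  ["video"],
        "video": ["news"],
        "forum": ["blog", "news", "video"],
    }
    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))

The same machinery can be gamed: a swarm of bots linking to one another is, in effect, an attempt to manufacture the appearance of importance, which is the kind of artificial information cascade described above.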


Germany still has a very strong anti-technology movement, which is quite interesting insofar as you can't really say it's left-wing or right-wing. As you know, very right-wing people, in German history especially, were very anti-technology. But it changed a lot. And why it took so long, I would say, has demographic reasons. As we are in an aging society, and the generation which is now 40 or 50, in Germany, had their children very late. The whole evolutionary change, through the new generation — first, they are fewer, and then they came later. It's not like in the sixties, seventies, with Warhol. And the fifties. These were young societies. It happened very fast. We took over all these interesting influences from America, very, very fast, because we were a young society. Now, somehow it really took a longer time, but now that is for sure we are entering, for demographic reasons, the situation where a new generation which is — as you see with The Pirates as a party — they're a new generation, which grew up with modern systems, with modern technology. They are now taking the stage and changing society.


One must say, all the big companies are American companies, except SAP. But Google and all these others, they are American companies. I would say we weren't very good at inventing. We are not very good at getting people to study computer science and other things. And I must say — and this is not meant as flattery of America, or Edge, or you, or whosoever — what I really miss is that we don't have this type of computationally-minded intellectual — though it started in Germany once, decades ago — such as Danny Hillis and other people who participate in a kind of intellectual discussion, even if only a happy few read and react to it. Not many German thinkers have adopted this kind of computational perspective.

The ones who do exist have their own platform and actually created a new party. This is something we are missing, because there has always been a kind of an attitude of arrogance towards technology. For example, I am responsible for the entire cultural sections and science sections of FAZ. And we published reviews about all these wonderful books on science and technology, and that's fascinating and that's good. But, in a way, the really important texts, which somehow write our life today and which are, in a way, the stories of our life — are, of course, the software — and these texts weren't reviewed. We should have found ways of transcribing what happens on the software level much earlier — like Patty Maes or others, just to write it, to rewrite it in a way that people understand what it actually means. I think this is a big lack.

What did Shakespeare, and Kafka, and all these great writers — what actually did they do? They translated society into literature. And of course, at that stage, society was something very real, something which you could see. And they translated modernization into literature. And now we have to find people who translate what happens on the level of software. At least for newspapers, we should have sections reviewing software in a different way; at least the structures of software.

We are just beginning to look at this in Germany. And we are looking for people — it's not very many people — who have the ability to translate that. It needs to be done because that's what makes us who we are. You will never really understand in detail how Google works because you don't have access to the code. They don't give you the information. But just think of George Dyson's essay, which I love, "Turing's Cathedral." This is a very good beginning. He absolutely has the point. It is today's version of the kind of cathedral we would be entering if we lived in the eleventh century. It's incredible that people are building this cathedral of the digital age. And as he points out, when he visited Google, he saw all the books they were scanning, and noted that they said they are not scanning these books for humans to read, but for the artificial intelligence to read.


Who are the big thinkers here? In Germany, for me at least, for my work, there are a couple of key figures. One of them is Gerd Gigerenzer, who is somebody who is absolutely — I would say he is actually avant-garde, at this very moment, because what he does is he teaches heuristics. And from what we see, we have an amputation of heuristics, through the technologies, as well. People forget certain heuristics. It starts with a calculation, because you have the calculator, but it goes much further. And you will lose many more rules of thumb in the future because the systems are doing that, Google and all the others. So Gigerenzer, in his thinking — and he has a big Institute now — on risk assessment, as well, is very, very important. You could link him, in a way, actually to Nassim Taleb, because again here you have the whole question of not risk assessment, the question of looking back, looking into the future, and all that.

Very important in literature, still, though he is 70 years old, 80 years old, is of course Hans Magnus Enzensberger. Peter Sloterdijk is a very important philosopher; a kind of literary figure, but he is important. But then you have, not unlike in the nineteenth or twentieth century, there are many leading figures. But I must say, as well as Gigerenzer, he writes all his books in English, we have quite interesting people, at this very moment, in law, which is very important for discussions of copyright and all that. But regarding the conversations of new technologies and human thought, they, at this very moment, don't really take place in Germany.

There are European thinkers who have cult followings — Slavoj Žižek, for example. Ask any intellectual in Germany, and they will tell you Žižek is just the greatest. He's a kind of communist, but he considers himself Stalinistic, even. But this is, of course, all labels. Wild thinkers. Europeans, at this very moment, love wild thinkers.

__

Also by Frank Schirrmacher on Edge:

"Wake-Up Call for Europe Tech" by Frank Schirrmacher

Related Reading on Edge:

The Simplifier": A Conversation with John A. Bargh
"The Evaporation of the Powerful Mystique of Religion" by Daniel Dennett
"Turing's Cathedral" by George Dyson
"The Second Coming: A Manisfesto" by David Gelernter
"Lord of the Cloud: John Markoff and Clay Shirky talk to David Gelernter"
"Smart Heuristics": A Talk with Gerd Gigerenzer"

"Aristotle: The Knowledge Web" by W. Daniel Hillis
The Genius: W. Daniel Hillis. Chapter 13 in Digerati
"One-Half A Manifesto" by Jaron Lanier
"Intelligence Augmentation": A Talk with Patty Maes


18.11.2009

Frank Schirrmacher: Payback

DIE ICH-ERSCHÖPFUNG (THE EXHAUSTION OF THE SELF)
By Andrian Kreye

[Translated from the German]

When the mind can no longer keep up with the Internet: Frank Schirrmacher's book "Payback" brings the digital debate up to date, but no further.

Yet Frank Schirrmacher is by no means a digital outsider. In the online magazine Edge.org he engages digital pioneers such as George Dyson, Jaron Lanier, and David Gelernter as an equal. His text also shows that the limits of the linear narrative form have long since become too confining for him, that parentheses, asides, and footnotes can only just hold the edifice of his theses together before the networked thoughts burst the bounds of the book.

[...]



November 15, 2009

[...]



DAVID PESCOVITZ
TECHNOLOGY

THE AGE OF THE INFORMAVORE

We make technology, but our technology also makes us. At the online science/culture journal Edge, BB pal John Brockman went deep — very deep — into this concept. Frank Schirrmacher is co-publisher of the national German newspaper FAZ and a very, very big thinker. Schirrmacher has raised public awareness and discussion about some of the most controversial topics in science research today, from genetic engineering to the aging population to the impacts of neuroscience. At Edge, Schirrmacher riffs on the notion of the "informavore," an organism that devours information like it's food. After posting Schirrmacher's thoughts, Brockman invited other bright folks to respond, including the likes of George Dyson, Steven Pinker, John Perry Barlow, Doug Rushkoff, and Nick Bilton. Here's a taste of Schirrmacher, from "The Age of the Informavore" [...]



THE REALITY CLUB: On "The Age of the Informavore": A Talk with Frank Schirrmacher

Daniel Kahneman, George Dyson, Jaron Lanier, Nick Bilton, Nick Carr, Douglas Rushkoff, Jesse Dylan, Virginia Heffernan, Gerd Gigerenzer, John Perry Barlow, Steven Pinker, John Bargh, George Dyson, Annalena McAfee, John Brockman, David Gelernter, Evgeny Morozov


EVGENY MOROZOV: In a sense, dealing with personal failure — of any kind, whether real or imaginary — was much easier before social networking exploded: 5 or 10 or 15 years after college, your former pals may all be having nicer jobs and perks than you do — but that humiliating realization only happened once a year (if at all), usually at alumni reunions. Today we are constantly bombarded with new information about others, which plants more and more seeds of self-doubt deep into us. Self-denial — which is essential for letting us cope with the past — is no longer an option: all the evidence stares us in the face from our Facebook walls. [...]

DAVID GELERNTER: This is an observation about the invention of writing, not about "modern technology." "Suddenly people depend on their gadgets, and other stuff, to remember certain things." What stuff? Phonebooks? Calendars on paper? Clay tablets with cuneiform inscriptions? "Suddenly people start forgetting things." When did they ever not forget things? Who was the first man to forget his wedding anniversary? [...]

JOHN BROCKMAN: What exactly is "the cybernetic idea"? Well, it's not to be confused with the discipline of cybernetics, which hit a wall, and stopped evolving during the 1950s. And it's not your usual kind of idea. The cybernetic idea is an invention. A very big invention. The late evolutionary biologist Gregory Bateson called it the most important idea since the idea of Jesus Christ. [...]

ANNALENA MCAFEE: Unlike your best friend, or the long-vanished bookstore owner, or the former manager of the defunct record shop — all of whom made a number of unintentionally insulting errors of taste — these predictive programs get it right 90 per cent of the time. I am willing to trade my free will — surely already compromised by my birthplace, my parents' religion and circumstances, my genetic inheritance — for these time-saving and life-enriching programs. [...]

GEORGE DYSON: First we had digital representations of existing ideas. Then we had digital expressions of new, previously unrepresented ideas. And now we have network processes (including human collaboration) that might actually be ideas. [...]

JOHN BARGH: The discovery of the pervasiveness of situational priming influences for all of the higher mental processes in humans does say something fundamentally new about human nature (for example, how tightly tied and responsive is our functioning to our particular physical and social surroundings).  It removes consciousness or free will as the bottleneck that exclusively generates choices and behavioral impulses, replacing it with the physical and social world itself as the source of these impulses. [...]

STEVEN PINKER: I would suggest another way to look at the effects of technology on our collective intelligence. Take the intellectual values that are timeless and indisputable: objectivity, truth, factual discovery, soundness of argument, insight, explanatory depth, openness to challenging ideas, scrutiny of received dogma, overturning of myth and superstition. Now ask, are new technologies enhancing or undermining those values? [...]

JOHN PERRY BARLOW: I have always wanted to convey to every human being the Right to Know — the protected technical means to fulfill all curiosities with the best answers human beings had yet derived — but the Ability to Know (Everything) is a capacity we don't and won't possess individually. [...]

GERD GIGERENZER: We might think of mentality and technology as two sides of the same coin, as a system in which knowledge, skills, and values are distributed.  This requires a new type of psychology that goes beyond the individual and studies the dynamics of human adaptation to the very tools humans create. [...]

VIRGINIA HEFFERNAN: ... there is a great deal of anxiety, irritation, unease and impatience in Internet use. There is even some self-loathing. What am I doing on the Web—when I used to read books bound in Moroccan leather; stroll in the sunshine; spend hours in focused contemplation of Hegel or Coleridge? [...]

JESSE DYLAN: How the human brain must adapt to the modern era and where those changes will take us are a mystery. What knowledge will a person need in the future when information is ubiquitous and all around us? Will predictive technologies do away with free will? Google will be able to predict whether you are enjoying the Neil Young concert you are attending before you yourself know. Science fiction becomes reality. [...]

DOUGLAS RUSHKOFF: We continue to build and accept new technologies into our lives with little or no understanding of how these devices have been programmed. We do not know how to program our computers. We spend much more time and energy trying to figure out how to program one another, instead. And this is potentially a grave mistake. [...]

NICHOLAS CARR: "Importance is individualism," says Nick Bilton, reassuringly. We'll create and consume whatever information makes us happy, fulfills us, and leave the rest by the wayside. Maybe. Or maybe we'll school like fish in the Web's algorithmic currents, little Nemos, each of us convinced we're going our own way because, well, we never stop talking, never stop sharing the minutiae of our lives and thoughts. Look at me! Am I not an individual? [...]

NICK BILTON: The new generation, born connected, does not feel the need to consume all the information available at their fingertips. They consume what they want and then affect or change it, they add to it or negate it, they share it and then swiftly move along the path. They rely on their community, their swarm, to filter and share information and in turn they do the same; it's a communism of content. True ideology at its best. [...]

JARON LANIER: To continue to perceive almost supernatural powers in the Internet (an ascendant perception, as Schirrmacher accurately reports) is to cede the future to reactive religious fanatics. [...]

GEORGE DYSON: When you are an informavore drowning in digital data, analog looks good. [...]

DANIEL KAHNEMAN: The link with Bargh is also interesting, because John pushes the idea that we are driven from the outside and controlled by a multitude of cues of which we are only vaguely aware — we are bathing in primes. [...]


EVGENY MOROZOV
Commentator on Internet and politics "Net Effect" blog; Contributing editor, Foreign Policy

I share many of Schirrmacher's concerns, and I think we need to start looking beyond technology's impact on cognition and behavior and broaden this discussion to its impact on character and identity. This would allow us to acknowledge many of the undeniable positive developments — yes, there is a wealth of great data on Wikipedia and the sheer amount of human knowledge available for free at the moment, especially to those who have never had access to any knowledge, is staggering. However, it's not the availability of this data that matters most: it's how we get to consume it, what we learn (or not) from the process, and how the very availability of this data changes how we think about ourselves.

Watching how young people — especially those who are in their early teens — go about constructing their digital identities out of the scraps and pieces they find online is particularly insightful. Of course, songs, books, movies — these have always been the Lego pieces that teenagers relied on to define who they are. Today, however, they have a never-ending stock of these Lego pieces to choose from and all their peers are obsessively watching this extremely painful process in real time. In my opinion, the impact of social networking is particularly crucial here. For modern teenagers, their Facebook walls have become more representative of what they are and what they want to be than their bedroom walls. The only difference is that we don't expect strangers to break into our bedrooms and spray offensive graffiti on the walls — but this happens on Facebook all the time. Some of it, by all means, is great, as it exposes these young people to the dangers of this world early on, but some of it could also be quite traumatic.

We simply do not know how exactly the young people would cope with fashioning and refashioning their online identities in an online environment where traditional signifiers of "cool" — music, videos, books — are available for immediate download for next to nothing — and arrive with embedded metadata that tells their owners how many of their friends have already bought and played a particular album or a movie. It's a fair guess that the ultimate stage of the real-time Web would be real-time trends, where it would be possible to track the cultural zeitgeist on the fly. This would mark the end of the era of weekly music charts. But how do you stay "ahead of the curve" if the cultural curve is always changing, your own cultural "cool" sensors are always on, and the likes of Facebook continue to feed you with information about what your friends are currently consuming in the background? As more and more of our media consumption happens in public — through Last.fm or NetFlix and whatever social features that Amazon will build on top of Kindle — one unfortunate consequence might be an increase in the overall degree of cultural conformism, despite the immense cultural treasures hidden in the long tail.

The real tragedy of the "age of informavore" thus may lie not in the fact that we are consuming too much information, but that this consumption — at least when done for reputational purposes, which it almost always is for the young — may turn us into extremely placid and conforming creatures. Will this be a society where democracy — powered by actively engaged citizenry — can survive? I can easily relate to Schirrmacher's general uneasiness here — we in Europe have too much painful history behind us and are very reluctant to take any more chances here.

As for the Internet's impact on character, I think that we'll begin seeing the real downside of this new digital environment in 5-10 years when Facebook-dependent "digital natives" who are now finishing college will mature and become parents and professionals. The problem with so many permanent online connections is that their very presence breeds endless and unhealthy self-retrospection — if not outright professional and personal anxieties. Come to think of it, the Internet — and especially Twitter and Facebook — has made us fully aware of what the path not taken looks like, many times over. What really concerns me here is that these shallow, weak, but permanent connections to the friends of our "younger selves" — from high-school, from college — perhaps, best captured by the one-sentence status updates on Facebook — are going to be very burdensome.

In a sense, dealing with personal failure — of any kind, whether real or imaginary — was much easier before social networking exploded: 5 or 10 or 15 years after college, your former pals may all be having nicer jobs and perks than you do — but that humiliating realization only happened once a year (if at all), usually at alumni reunions. Today we are constantly bombarded with new information about others, which plants more and more seeds of self-doubt deep into us. Self-denial — which is essential for letting us cope with the past — is no longer an option: all the evidence stares us in the face from our Facebook walls. Infinite storage and permanent human connections have put so many pressures on the human psyche that the heaviness of our past may one day simply crush us. Letting go, leaving our past behind, moving on — will "digital natives" still know how to do any of that without becoming "digital refuseniks"?


DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Drawing Life


We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things.

This is an observation about the invention of writing, not about "modern technology." "Suddenly people depend on their gadgets, and other stuff, to remember certain things." What stuff? Phonebooks? Calendars on paper? Clay tablets with cuneiform inscriptions? "Suddenly people start forgetting things." When did they ever not forget things? Who was the first man to forget his wedding anniversary?

Darwinian selection, apparently attacks ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker.

Like Copernicus had a stronger idea than Galen? That was some time ago. Like gothic ribbed vaults being a stronger idea (& effectively replacing) Norman barrel vaults, ca 1100? People have made an effort to remember certain things ever since there have been certain things to remember.

Gerd Gigerenzer, to whom I talked and whom I find a fascinating thinker, notes that thinking itself somehow leaves the brain and uses a platform outside of the human body.

Fascinating & complex things can & do go on outside the human body and (for that matter) in it; but if they aren't conscious, they aren't thinking. Why isn't natural selection an example of thinking (nature mulling things over) outside the human body? You can say, sure, natural selection is thinking; but then you've merely defined thinking out of existence. A thunderstorm is a very complex & highly organized activity. Does it show that the atmosphere is thinking?

We have a crisis of all the systems that somehow are linked to either thinking or to knowledge.

No we don't. There's no crisis in cable TV. There's no crisis in film-making. There was a crisis in painting, but it's sorting itself out. There's no crisis in architecture — architecture is thriving. What's the basis for this assertion? Except that Schirrmacher is European?


JOHN BROCKMAN
Publisher & Editor, Edge; Author, By The Late John Brockman; The Third Culture

At a dinner in the mid-sixties, the composer John Cage handed me a copy of Norbert Wiener's book, Cybernetics. He was talking about "the mind we all share" in the context of "the cybernetic idea". He was not talking Teilhard de Chardin, the Noosphere, or any kind of metaphysics.

The cybernetic idea was built from Turing's Universal Machine in the late thirties; Norbert Wiener's work during World War II on automatic aiming and firing of anti-aircraft guns; John von Neumann's theory of automata and its applications (mid-forties); Claude Shannon's landmark paper founding information theory in 1948.

What exactly is "the cybernetic idea"? Well, it's not to be confused with the discipline of cybernetics, which hit a wall, and stopped evolving during the 1950s. And it's not your usual kind of idea. The cybernetic idea is an invention. A very big invention. The late evolutionary biologist Gregory Bateson called it the most important idea since the idea of Jesus Christ.

The most important inventions involve the grasping of a conceptual whole, a set of relationships which had not been previously recognized. This necessarily involves a backward look. We don't notice it. An example of this is the "invention" of talking. Humans did not notice that they were talking until the day someone said, "We're talking." No doubt the first person to utter such words was considered crazy. But that moment was the invention of talking, the recognition of pattern which, once perceived, had always been there.

So how does this fit in with the cybernetic idea?

It's the recognition that reality itself is communicable. It's the perception that the nonlinear extension of the brain's experience — the socialization of mind — is a process that involves the transmission of neural pattern — electrical, not mental — that's part of a system of communication and control that functions without individual awareness or consent.

This cybernetic explanation tears apart the fabric of our habitual thinking. Subject and object fuse. The individual self decreates. It is a world of pattern, of order, of resonances. It's an undone world of language, communication, and pattern. By understanding that the experience of the brain is continually communicated through the process of information, we can now recognize the extensions of man as communication, not as a means for the flow of communication. As such they provide the information for the continual process of neural coding.

How is this playing out in terms of the scenarios presented by Frank Schirrmacher in his comments about the effect of the Internet on our neural processes? Here are some random thoughts inspired by the piece and the discussion:

Danny Hillis once said that "the web is the slime mold of the Internet. In the long run, the Internet will arrive at a much richer infrastructure, in which ideas can potentially evolve outside of human minds. You can imagine something happening on the Internet along evolutionary lines, as in the simulations I run on my parallel computers. It already happens in trivial ways, with viruses, but that's just the beginning. I can imagine nontrivial forms of organization evolving on the Internet. Ideas could evolve on the Internet that are much too complicated to hold in any human mind." He suggested that "new forms of organization that go beyond humans may be evolving. In the short term, forms of human organization are enabled."

Schirrmacher reports on Gerd Gigerenzer's idea that "thinking itself somehow leaves the brain and uses a platform outside of the human body. And that's the Internet and it's the cloud. And very soon we will have the brain in the cloud. And this raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else."

John Bargh notes that research on the prediction and control of human judgment and behavior has become democratized. "This has indeed produced (and is still producing) an explosion of knowledge of the IF-THEN contingencies of human responses to the physical and social environment … we are so rapidly building a database or atlas of unconscious influences and effects that could well be exploited by ever-faster computing devices, as the knowledge is accumulating at an exponential rate." The import of Bargh's thinking is that the mere existence of a social network becomes an unconscious influence on human judgment and behavior.

George Dyson traces how numbers have changed from representing things, to meaning things, to doing things. He points out that the very activity involved in the socialization of mind means that "we have network processes (including human collaboration) that might actually be ideas."

What does all this add up to?

Schirrmacher is correct when he points out that in this digital age we are going through a fundamental change which includes how our brains function. But the presence or absence of free will is a trivial concern next to the big challenge confronting us: to recognize the radical nature of the changes that are occurring and to grasp an understanding of the process as our empirical advances blow apart our epistemological bases for thinking about who and what we are. "We're talking."


ANNALENA MCAFEE
Writer, Journalist; Former Editor of The Guardian's literary supplement, the Guardian Review


Is human nature, or at least fundamental human behaviour, being changed by digital technology? I'd like to see some good empirical evidence before hazarding a conclusion — evidence identifying a growth in attention deficit disorders which links this to an exponential increase in computer use, studies citing escalating incidences of memory loss among the young, comparative brain scans of multi taskers and single taskers with their cerebral hemispheres lit up like Christmas trees. Perhaps we can safely attribute an increase in cases of carpal tunnel syndrome to widespread computer use, but while this might give a fashionable cachet to wrist splints among the young, it won't change human behaviour and is no cause for moral panic. The same wrist injury can be caused by playing the ukelele, or indeed any musical instrument, or by gripping your quill pen too tightly — writer's cramp — but we have never worried that the young are spending too much time practising their scales or writing sonnets.

The editor of the Wall Street Journal is reported to have accused Google of "encouraging promiscuity", which, if true, is a serious charge. His complaint, however, refers not to any advocacy of lax sexual behaviour but to Google News, whose daily aggregate of enticing headlines from around the world discourages old-style loyalty among readers. Once we plighted our troth for life, for better or worse, circulation rise or slump, to one newspaper delivered regularly to our door or bought at newsagents. Now, dazzled by the daily digital passeggiata, we've turned our backs on the stale pleasures of familiarity and spend heady hours communing with whichever passing news providers take our fancy. All for free. We barely pause to ask their names. Monetary issues apart, is this a bad thing? Or even new? Isn't it about choice? Survival of the slickest. I believe they call it the market. Once Rupert Murdoch has worked out a way of actually charging these fickle punters for his services, surely everyone will be happy and it will be business as usual.

So in the absence of hard evidence about the baleful effects of the Internet, one has to resort to anecdote and ill-informed personal observation. These are mine. Young people are on the whole nicer in the digital age. They are happier to spend time with, or at least tolerate the company of, adults than was my own generation of posturing post-soixante-huitards. The digital generation values friendship and understands reciprocity more than its earlier counterparts and is more emotionally insightful and expressive, qualities which I speculate may be enhanced by social networking and texting. These young people may not necessarily be more literate or informed but, unlike previous generations, they know exactly where to go when information is required.

Yes, the personal blogs and social network sites have unleashed an embarrassing pandemic of exhibitionism. But, and here is the liberating thing, we don't have to read it. Look away. And this compulsive urge to tell all in blush-makingly dull detail is no departure in terms of human behaviour. People have been writing painstaking accounts of their inconsequential lives since they first learned the alphabet and got hold of cheap stationery. For every Samuel Pepys, James Boswell or Virginia Woolf there were legions of tedious self-regarding monologuists, showing off to an imagined posterity.

As to predictive programs and iTunes "Genius", don't they serve a useful and very old-fashioned purpose? Like your best friend, or the cheerful proprietor of the neighbourhood bookstore, or the dreadlocked manager of your favourite record shop, they know what you like and will helpfully alert you when that artist or writer, or someone operating in a similar field, is about to produce.
"You'll love the latest Alice Munro. And if you like Devendra Banhart, check out Vetiver." But unlike your best friend, or the long-vanished bookstore owner, or the former manager of the defunct record shop — all of whom made a number of unintentionally insulting errors of taste — these predictive programs get it right 90 per cent of the time. I am willing to trade my free will – surely already compromised by my birthplace, my parents' religion and circumstances, my genetic inheritance — for these time-saving and life-enriching programs.

I'm alarmed, though, at the prospect that I might have to understand exactly how they work; I cannot imagine buckling down to reviews of "the structures of software", alongside critiques of the new biographies of John Cheever or Ayn Rand. I had always assumed that one of the pleasures of civilization was being relieved of our ancestors' obligation to know how things work. I open my fridge in the morning and, while I am aware that it achieves its effect with compressed gas and electricity, I could not tell you how it works, only that it is pleasingly cold. I reach for a pint of milk, untroubled by my ignorance of modern milking techniques, and my landline rings, by a process which, for all I know, might as well be sorcery.

There are many things I do not know and do not care to know, and I am sure refrigerator engineers, cattle farmers and telephone operatives would have no desire to acquaint themselves with my own small area of expertise. Granted, I find computers more intrinsically interesting than fridges or phones or cows and I did spend some time learning MS-DOS and HTML. But in our fast-moving age, these skills are now as useful to me as fluency in Greenlandic Norse or Crimean Gothic. I no longer have the time or patience to find out how it works; just show me what it does.

There is an anxiety that we're all like fat frat boys gorging ourselves at the free, 24-hour, all-you-can-read information buffet. Even here, though, in bouts of online gluttony, we display timeless human traits: the urge to binge in times of plenty, feasting till we're queasy on roasted mammoth, since instinct tells us there might be nothing to eat out there again for a month or two. But our systems are their own regulators. We can only take so much. After a while we long for a simple glass of water, an indigestion pill and wholesome human pleasures, which may or may not involve a book (electronic or paper), music (iPod or live), sport, landscape, love. And as one of your correspondents writes, the young — for whom digital innovation is an unremarkable fact of life — are better at handling the screen-life balance than their seniors, who are too often awestruck by innovation and waylaid by serendipity. The young take for granted today's surfeit of mammoths and they moderate their intake accordingly.

I wonder about the suggestion that ideas are now facing some kind of Darwinian Day of Judgment, where perfectly sound, original and useful notions will be consigned to the Flames of Oblivion simply because they've run out of room upstairs in the Heavenly Hall of Fame. Surely it was ever thus. There has always been too much information — far more than a single human brain could handle — for the curious and literate with access to a library. Now everyone can have a go. We are all innocents at large in this pleasuredome, agog in Alexandria.

To baulk at this ease of access, to pathologise online abuse and ignorance, is to behave like a medieval monk, horrified by the dawn of secular literacy and fearful that his illuminated manuscripts will fall into unclean hands. There will always be more penny dreadfuls than priceless masterworks; there is only one Book of Kells but there are many, many Da Vinci Codes. And there is room for them all. No shortage of shelf space here. No shortage of readers, either.

You might be frowning into your screen over findings on the absence of retroviral restriction in cells of the domestic cat, he might be surfing Google News for updates on Britney's latest tour, she might be assessing commercial possibilities of hydrogen-producing algae, while they browse the UFO sites, book a cheap flight, and check the last stanza of a John Donne sonnet. It's all there for the taking. How you use it is the point. And who knows? The Britney fan, who might never have strayed into a library in his life, could one day find himself momentarily sidetracked by a website about the New Zealand poet Charles Edgar Spear, click through a hyperlink to TS Eliot and develop a passion for modernist verse. Or, more likely, drawn by a wittily-worded account of Britney's current Australian tour in one of Mr Murdoch's publications, he might eschew his libertine ways, shell out for an online subscription to the newspaper, and settle down to a life of blameless monogamy.


GEORGE DYSON
Science Historian; Author, Darwin Among the Machines

In the beginning, numbers represented things. Digital encoding then gave numbers the power of meaning things. Finally, the order codes were unleashed, and numbers began doing things.

The cultural and intellectual transitions described by Schirrmacher and his commentators are higher-level manifestations of this. First we had digital representations of existing ideas. Then we had digital expressions of new, previously unrepresented ideas. And now we have network processes (including human collaboration) that might actually be ideas.

Is free will ours to lose?


JOHN BARGH
Social Psychologist, Yale University; Director, the ACME (Automaticity in Cognition, Motivation and Evaluation) Lab

I tend to worry less about information overload at the personal, individual level and more about it at the societal and governmental level. The human brain has long been used to being overloaded with sensory information, throwing most input away in the first half-second after sensing it; we are constantly bombarded by 'primes' or implicit suggestions as to what to think, feel, and do — yet we usually manage to do one thing at a time, stably. The brain is used to dealing with conflicting messages too, and to managing and integrating the activity of so many physiological and nervous subsystems — while, as the work of Ezequiel Morsella is showing, keeping all of that management out of conscious view so that we never experience it.

We are already and have long been multitaskers, in other words, we just do it (so well) unconsciously, not consciously.  It is conscious multitasking (talking on the phone while driving) that we are so bad at because of the limits of conscious attention, but multitasking per se — we are built for that. As we gain skills those skills require less and less of that conscious attention so that an expert such as Michael Jordan, or today, Kobe or Lebron, can consciously plot his strategy for weaving through a maze of defenders down the court because his limited conscious attention is no longer needed for dribbling, body movements, head fakes, and so on.  Driving a car requires incredible multitasking at first but is soon much less difficult because the multitasking 'moves downstairs' and out of the main office, over time.

But Schirrmacher is quite right to worry about the consequences of a universally available digitized knowledge base, especially if it concerns predicting what people will do. And most especially if artificial intelligence agents can begin to search and put together the burgeoning database of what a given situation (or prime) X will cause a person to do. The discovery of the pervasiveness of situational priming influences on all of the higher mental processes in humans does say something fundamentally new about human nature (for example, how tightly tied and responsive our functioning is to our particular physical and social surroundings). It removes consciousness or free will as the bottleneck that exclusively generates choices and behavioral impulses, replacing it with the physical and social world itself as the source of these impulses.

But the discovery that people are actually rather easy to influence and predict (once we know the triggering environmental cues or prompts) is in fact being exploited today as a research tool, because we now know that we can activate and study complex human psychological systems with very simple priming manipulations. A quarter century ago the methods used to activate (and then study) aggressive or cooperative tendencies were more expensive and difficult, involving elaborate deceptions, confederates, and staged theatrics. It is said that the early cognitive dissonance theorists such as Elliot Aronson used to routinely have their graduate students take theater classes. And other social psychologists of that generation, such as Richard Nisbett, have publicly complained (in a good-natured way) about 'rinky-dink' priming manipulations that somehow produce such strong effects. (This reminds me of Kahneman and Tversky's representativeness heuristic: here, the belief that complex human outputs must require complex causes.)

It is because priming studies are so relatively easy to perform that this method has opened up research on the prediction and control of human judgment and behavior, 'democratized' it, basically, because studies can be done much more quickly and efficiently, and done well even by relatively untrained undergraduate and graduate students. This has indeed produced (and is still producing) an explosion of knowledge of the IF-THEN contingencies of human responses to the physical and social environment. And so I do worry with Schirrmacher on this score, because we are so rapidly building a database or atlas of unconscious influences and effects that could well be exploited by ever-faster computing devices, as the knowledge accumulates at an exponential rate.

More frightening to me still is Schirrmacher's postulated intelligent artificial agents that can, as in the Google Books example, search and access this knowledge base so quickly, and then integrate it in real-time applications to manipulate the target individual into thinking or feeling or behaving in ways that suit the agent's (or its owner's) agenda. (Of course this is already being done in a crude way through advertising, both commercial and political; we have just shown, for example, that television snack food ads increase automatic consumption behavior in the viewer by nearly 50%, in children and adults alike.)


STEVEN PINKER
Harvard College Professor and Johnstone Family Professor of Psychology, Harvard University; Author, The Stuff of Thought


You're at a dinner in a restaurant, and various things come up in conversation — who starred in a movie, who was president when some event happened, what some religious denomination believes, what the exact wording is of a dimly remembered quotation. Just as likely as not, people around the table will pull out their iPhones, their Blackberries, their Androids, and search for the answer. The instant verification not only eases the frustration of the countless tip-of-the-tongue states that bog down a conversation, but offers a sobering lesson on how mistaken most of us are most of the time.

You'll be amazed at the number of things you remember that never happened, at the number of facts you were certain of that are plainly false. Everyday conversation, even among educated people, is largely grounded in urban legends and misremembered half-truths. It makes you wonder about the soundness of conventional wisdom and democratic decision-making — and whether the increasing availability of fact-checking on demand might improve them. 

I mention this because so many discussions of the effects of new information technologies take the status quo as self-evidently good and bemoan how intellectual standards are being corroded (the "google-makes-us-stoopid" mindset). They fall into the tradition of other technologically driven moral panics of the past two centuries, like the fears that the telephone, the telegraph, the typewriter, the postcard, radio, and so on, would spell the end of civilized society.

Other commentaries are nonjudgmentally fatalistic, and assume that we're powerless to evaluate or steer the effects of those technologies — that the Internet has a mind and a will of its own that's supplanting the human counterparts. But you don't have to believe in "free will" in the sense of an immaterial soul to believe in "free will" in the sense of a goal-directed, intermittently unified, knowledge-sensitive decision-making system. Natural selection has wired that functionality into the human prefrontal cortex, and as long as the Internet is a decentralized network, any analogies to human intentionality are going to be superficial.

Frank Schirrmacher's reflections thankfully avoid both extremes, and I would suggest another way to look at the effects of technology on our collective intelligence. Take the intellectual values that are timeless and indisputable: objectivity, truth, factual discovery, soundness of argument, insight, explanatory depth, openness to challenging ideas, scrutiny of received dogma, overturning of myth and superstition. Now ask, are new technologies enhancing or undermining those values? And as you answer, take care to judge the old and new eras objectively, rather than giving a free pass to whatever you got used to when you were in your 20s.

One way to attain this objectivity is to run the clock backwards and imagine that old technologies are new and vice versa. Suppose someone announced: "Here is a development that will replace the way you've been doing things. From now on, you won't be able to use Wikipedia. Instead you'll use an invention called The Encyclopedia Britannica. You pay several thousand dollars for a shelf-groaning collection of hard copies whose articles are restricted to academic topics, commissioned by a small committee, written by a single author, searchable only by their titles, and never changed until you throw out the entire set and buy new ones." Would anyone argue that this scenario would make us collectively smarter?

If social critics started to scrutinize the immediate past and the obsolescing present, and not just the impending future, our understanding of the effects of technology on intellectual quality would be very different. The fact is that most of our longstanding, prestigious informational institutions are, despite their pretensions, systematically counter-intellectual. In the spirit of the technophobe screeds, let me describe them in blunt, indeed hyperbolic terms.

Many of the articles in printed encyclopedias stink — they are incomprehensible, incoherent, and instantly obsolete. The vaunted length of the news articles in our daily papers is generally plumped out by filler that is worse than useless: personal-interest anecdotes, commentary by ignoramuses, pointless interviews with bystanders ("My serial killer neighbor was always polite and quiet"). Precious real-estate in op-ed pages is franchised to a handful of pundits who repeatedly pound their agenda or indulge in innumerate riffing (such as interpreting a "trend" consisting of a single observation). The concept of "science" in many traditional literary-cultural-intellectual magazines (when they are not openly contemptuous of it) is personal reflections by belletristic doctors. And the policy that a serious book should be evaluated in a publication of record by a single reviewer (with idiosyncratic agendas, hobbyhorses, jealousies, tastes, and blind spots) would be risible if we hadn't grown up with it.

For all their flaws, media such as Wikipedia, news feeds, blogs, website aggregators, and reader reviews offer the potential for great advances over the status quo — not just in convenience but in intellectual desiderata like breadth, rigor, diversity of viewpoints, and responsibility to the factual record. Our intellectual culture today reflects this advance — contrary to the Cassandras, scientific progress is dizzying; serious commentary on the Internet exceeds the capacity of any mortal reader; the flow of philosophical, historical, and literary books (many of doorstop length) has not ebbed; and there is probably more fact-checking, from TV news to dinner tables, than at any time in history. Our collective challenge in dealing with the Internet is to nurture these kinds of progress.


JOHN PERRY BARLOW
Co-founder, Co-Chair, Electronic Frontier Foundation; Cyberspace pioneer ("The Jefferson of the Internet")

I am the very definition of fiercely mixed feelings on this subject.

I have always wanted to convey to every human being the Right to Know — the protected technical means to fulfill all curiosities with the best answers human beings had yet derived — but the Ability to Know (Everything) is a capacity we don't and won't possess individually.

Even as we can drill deeper into the collectively-known, our ability to know the collective becomes more superficial.

More than ever, we have to trust the formation of Collective Consciousness, the real Ecosystem of Mind.


GERD GIGERENZER
Psychologist; Director of the Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin; Author, Gut Feelings: The Intelligence of the Unconscious

Technology and Mentality

Frank Schirrmacher asks, does new technology change human cognition and behavior, and if so, how? This question is a true wake-up question, but its answer is far from obvious. The technophobe might conjecture that new technologies grow smarter while humans grow dumber, like my bank accountant yesterday, who could not calculate 20% of 500 euros without a pocket calculator. The technophile would respond that everything simply gets better, just as eyesight improves with glasses and friendship becomes easier with Facebook.

But there is a more interesting answer: the dynamic symbiosis of technology and mentality. A symbiosis is to the mutual benefit of two different species but requires mutual adaptation. Consider the invention that has changed human mental life more than anything else: writing and, subsequently, the printing press. Writing made analysis possible: one can compare texts, which is difficult in an oral tradition.

Writing also made exactitude possible, as in higher-order arithmetic; without a written form, these mental skills quickly reach their limits. But writing makes long-term memory less important than it once was, and schools have largely replaced the art of memorization with training in reading and writing. So it's neither loss nor gain, but both. And this means new adaptations between mentality and technology. In turn, new abilities create new tools that support new abilities, and so the spiral evolves.

The computer is another instance. The invention of the computer has been described as the third information revolution, after the advent of writing and the printing press. As early as the 1960s, electrical engineer Doug Engelbart had designed the first interactive computer tools, including the mouse, on-screen editing, screen windows, hypertext, and electronic mail. At that time, however, human-computer interaction still seemed like science fiction; computers were for processing punched cards, not for interacting with humans. The impact computers would have on society and science was difficult to imagine, and it went in both directions: computers and humans coevolve.

The first computer was a group of human beings: large-scale division of labor, as seen in the English machine-tool factories and in the French government's manufacture of logarithmic and trigonometric tables for the new decimal system in the 1790s.

Inspired by Adam Smith's praise of the division of labor, the French engineer Prony organized the project as a hierarchy of tasks. At the top were a handful of first-rank mathematicians who devised the formulas; in the middle, seven or eight persons trained in analysis; and at the bottom, 70 or 80 unskilled persons who performed millions of additions and subtractions. Once it was shown that elaborate calculations could be carried out by an assemblage of unskilled workers, each knowing very little about the larger computation, rather than by a genius such as Gauss, Charles Babbage was able to conceive of replacing these workers with machinery.

Babbage, an enthusiastic "factory tourist," explicitly referred to this division of mental labor as the inspiration for his mechanical computer, using terms from the textile industry, such as 'mill' and 'store' to describe its parts. Similarly, he borrowed the use of punched cards from the Jacquard loom, the programmable weaving machines that used removable cards to weave different patterns. Thus, initially there was a new social system of work, and the computer was created in its image.
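As an aside (a minimal sketch, not from Gigerenzer's text): the reduction Prony's project is usually said to have relied on is the method of finite differences, which turns the tabulation of a polynomial into nothing but repeated additions, exactly the kind of step the unskilled computers at the bottom of the hierarchy could perform and that Babbage's Difference Engine later mechanized.

```python
# Method-of-differences sketch: once a polynomial's leading differences are
# known, every further table entry follows from additions alone.
def extend_table(seed, steps):
    """Extend a polynomial table by repeated addition.

    `seed` holds enough consecutive values to pin down the polynomial's
    differences (degree + 1 of them); `steps` is how many entries to add.
    """
    # Build the columns of finite differences from the seed values.
    columns = [list(seed)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        columns.append([b - a for a, b in zip(prev, prev[1:])])
    # Keep only the latest entry of each difference order; for a polynomial,
    # the bottom-most difference is constant.
    current = [col[-1] for col in columns]
    table = list(seed)
    for _ in range(steps):
        # Each order grows by the order below it -- additions only.
        for order in range(len(current) - 2, -1, -1):
            current[order] += current[order + 1]
        table.append(current[0])
    return table

# Seed with f(x) = x**3 at x = 0..3, then extend by addition alone.
print(extend_table([x**3 for x in range(4)], 6))  # ..., 64, 125, 216, ..., 729
```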

Through dramatic improvements in hardware and speed, the computer became the basis for a fresh understanding of the human mind. Herbert Simon and Allen Newell proposed that human thought and problem solving were to be understood as a hierarchical organization of processes, with subroutines, stores, and intermediate goal states that decomposed a complex problem into simple tasks.

In fact, a social system rather than a computer performed the trial run for the Logic Theorist, their first computer program. Simon's wife, children, and graduate students were assembled in a room, and each of them became a subroutine of the program, handling and storing information. This was the same arrangement as in the Manhattan Project, where calculations were done by an unskilled workforce, mostly women, at low pay.
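A toy sketch of what "a hierarchical organization of processes, with subroutines, stores, and intermediate goal states" can look like in code; this is not the Logic Theorist itself, just the shape of the idea, with each function standing in for one of the roles Simon handed out around the room.

```python
# Toy hierarchy: a top-level goal delegates subgoals to simple "subroutines"
# that communicate through a shared store of intermediate results.
store = {}

def parse(expression):
    """Subroutine 1: break an expression like '12 + 7 + 30' into tokens."""
    store["tokens"] = expression.split()
    return store["tokens"]

def evaluate():
    """Subroutine 2: sum the numbers left in the store."""
    store["result"] = sum(int(t) for t in store["tokens"] if t != "+")
    return store["result"]

def solve(expression):
    """Top level: set subgoals and delegate them in order."""
    parse(expression)      # intermediate goal: a workable representation
    return evaluate()      # final goal: the computed answer

print(solve("12 + 7 + 30"))  # 49
```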

Similarly, Marvin Minsky, one of the founders of artificial intelligence, regarded the mind as a society of dumb agents, collectively creating true intelligence. Anthropologists, in turn, have begun to use computer analogies to understand how social groups make decisions "in the wild," such as how the crew of a large ship solves the problem of navigation by storing, processing, and exchanging information. The direction of the analogy thus eventually became reversed: originally, the computer was modeled on a new social system of work; now social systems of work are modeled on the computer.

We might think of mentality and technology as two sides of the same coin, as a system in which knowledge, skills, and values are distributed. This requires a new type of psychology that goes beyond the individual and studies the dynamics of human adaptation to the very tools humans create.


VIRGINIA HEFFERNAN
Columnist ("The Medium"), The New York Times

The metaphor that seems most alive to me in Frank Schirrmacher's disquisition is one of eating. On the one hand, the title of the interview — "The Age of the Informavore" — suggests a model of man as an eater of information. On the other, Schirrmacher speaks provocatively of information that battens on human attention (and dies when starved of it); of information, in other words, that eats us. This two-way model of consumption in the Internet age — we consume information, information consumes us — ought to be kept before us, lest we repress it and be made anxious that way.

Because — right? — there is a great deal of anxiety, irritation, unease and impatience in Internet use. There is even some self-loathing. What am I doing on the Web—when I used to read books bound in Moroccan leather; stroll in the sunshine; spend hours in focused contemplation of Hegel or Coleridge?

If the Internet is a massive work of art, as I believe it is, it has modernist properties: it regularly promotes a feeling of unease and inadequacy (rather than jubilation, satisfaction, smugness, serenity, etc). As Schirrmacher's interview suggests, perhaps this is because the Internet user feels as though he is forever trying to eat or be eaten, and he's both undernourished and afraid.

A critic of the Internet attuned to its aesthetic properties might ask: How does it generate this effect? I'm inclined to believe there's a long and fascinating answer to this question. I'm also inclined to believe that, in time, consumers and producers of the Internet — and we are all both at once — will find ways to leave off apocalyptic thinking and generate and savor the other sensory-emotional effects of the Web.


JESSE DYLAN
Film-Maker; Founder, free-form.tv; Lybba.org

How the human brain must adapt to the modern era and where those changes will take us are a mystery. What knowledge will a person need in the future when information is ubiquitous and all around us? Will predictive technologies do away with free will? Google will be able to predict whether you are enjoying the Neil Young concert you are attending before you yourself know. Science fiction becomes reality.

Schirrmacher speaks about Kafka and Shakespeare reflecting the societies they lived in, and about the importance of artists in translating the computer age.

This lecture is a warning to us to be aware of the forces that shape us. The pace of change in new technologies is so rapid it makes me wonder whether it's already too late.


DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Life, Inc.


These are refreshingly disturbing reflections on the digital, from the mind of a caring individual who would hate to see human cognition overrun before its time. As one who once extolled the virtues of the digital to the uninitiated, I can't help but look back and wonder if we adopted certain systems too rapidly and unthinkingly. Or even irreversibly.

But I suspect Schirrmacher and most of us cheering for humanity also get unsettled a bit too easily — drawn into obsessing over the disconnecting possibilities of technology, which makes us no better than an equal and opposite force to the techno-libertarians celebrating the Darwinian wisdom of hive economics. Both extremes of thought and prediction are a symptom of thinking too little rather than too much about all this.

This is why Schirrmacher's thinking is, at its heart, a call to do more thinking — the kind of real reflection that happens inside and among human brains relating to one another in small groups, however elitist that may sound to the technomob. ("Any small group will do" is the answer to their objections, of course. Freedom means freedom to choose your fellow conversants, and not everything needs to be posted for the entire world with "comments on" and "copyright off".)

It's the inability to draw these boundaries and distinctions — or the political incorrectness of suggesting the possibility — that paints us into corners, and prevents meaningful discussion. And I believe it's this meaning we are most in danger of losing.

I would argue we humans are not informavores at all, but rather consumers of meaning. My computer can digest and parse more information than I ever will, but I dare it to contend with the meaning. Meaning is not trivial, even though we have not yet found metrics capable of representing it. This does not mean it does not exist, or shouldn't.

Faced with a networked future that seems to favor the distracted over the focused and the automatic over the considered, it's no wonder we should want to press the pause button and ask what this all means to the future of our species. And while the questions this inquiry raises may be similar in shape to those facing humans passing through other great technological shifts, I think they are fundamentally different this time around.

For instance, the unease of pondering what it might mean to have some of our thinking done out of body, by an external device, is in some ways just a computer-era version of the challenges to "proprioception" posed by industrial machinery. "Where does my body or hand really end?" becomes "What are the boundaries of my cognition?"

But while machines replaced and usurped the value of human labor, computers do more than usurp the value of human thought. They not only copy our intellectual processes — our repeatable programs — but also discourage our more complex processes — the higher-order cognition, contemplation, innovation, and meaning-making that should be the reward of "outsourcing" our arithmetic to silicon chips.

The way to get on top of all this, of course, would be to have some inkling of how these "thinking" devices were programmed — or even to have some input into the way they do so. Unlike with our calculators, we don't even know what we are asking our machines to do, much less how they are going to go about doing it. Every Google search is — at least for most of us — a Hail Mary pass into the datasphere, requesting something from an opaque black box.

So we continue to build and accept new technologies into our lives with little or no understanding of how these devices have been programmed. We do not know how to program our computers. We spend much more time and energy trying to figure out how to program one another, instead. And this is potentially a grave mistake.


NICHOLAS CARR
Author, Does IT Matter?; The Big Switch

The digital computer, Alan Turing told us, is a universal machine. We are now learning that, because all types of information can be translated into binary code and computed, it is also a universal medium. Convenient, cheap, and ubiquitous, the great shared computer that is the Internet is rapidly absorbing all our other media. It's like a great sponge, sucking up books, newspapers, magazines, TV and radio shows, movies, letters, telephone calls, even face-to-face conversations. With Google Wave, the words typed by your disembodied correspondent appear on your screen as they're typed, in real time.

As Frank Schirrmacher eloquently and searchingly explains, this is the new environment in which our brains exist, and of course our brains are adapting to that environment — just as, earlier, they adapted to the environment of the alphabet and the environment of print. As the Net lavishes us with more data than our minds can handle, Schirrmacher suggests, we will experience a new kind of natural selection of information and ideas, even at the most intimate, everyday level: "what is important, what is not important, what is important to know?" We may not pause to ask those questions, but we are answering them all the time.

I expect, as well, that this kind of competition, playing out in overtaxed, multitasking, perpetually distracted brains, will alter the very forms of information, and of media, that come to dominate and shape culture. Thoughts and ideas will need to be compressed if they're to survive in the new environment. Ambiguity and complexity, expansiveness of argument and narrative, will be winnowed out. We may find ourselves in the age of intellectual bittiness, which would certainly suit the computers we rely on. The metaphor of brain-as-computer becomes a self-fulfilling prophecy: To keep up with our computers, we have to think like our computers.

"Importance is individualism," says Nick Bilton, reassuringly. We'll create and consume whatever information makes us happy, fulfills us, and leave the rest by the wayside. Maybe. Or maybe we'll school like fish in the Web's algorithmic currents, little Nemos, each of us convinced we're going our own way because, well, we never stop talking, never stop sharing the minutiae of our lives and thoughts. Look at me! Am I not an individual? Even if Bilton is correct, another question needs to be asked: does the individualism promoted by the Net's unique mode of information dispersal deepen and expand the self or leave it shallower and narrower? We've been online for twenty years. What have we accomplished, in artistic, literary, cultural terms? Yes, as Schirrmacher points out, we have "catharsis" — but to what end?

Resistance is not futile, says Jaron Lanier. That's certainly true for each of us as individuals. I'm not so sure it's true for all of us as a society. If we're turning into informavores, it's probably because we want to.


NICK BILTON
Adjunct Professor, NYU/ITP; Design Integration Editor, The New York Times


I am utterly perplexed by intelligent and innovative thinkers who believe a connected world is a negative one. How can we lambast new technology, transition and innovation? It's completely beyond my comprehension.

It is not our fear of information overload that stalls our egos; it's the fear that we might be missing something. Seeing the spread of social applications online over the past few years, I can definitively point to one clear post-internet generational divide.

The new generation, born connected, does not feel the need to consume all the information available at their fingertips. They consume what they want and then affect or change it, they add to it or negate it, they share it and then swiftly move along the path. They rely on their community, their swarm, to filter and share information and in turn they do the same; it's a communism of content. True ideology at its best. They, or should I say I, feel the same comfort from a pack of informavores rummaging together through the ever-growing pile of information that the analog generation still feels towards an edited newspaper or the neatly packaged one-hour nightly news show.

Frank Schirrmacher asks the question "what is important, what is not important, what is important to know?" The answer is clear, and for the first time in our existence the internet and technology will allow it: importance is individualism. What is important to me is not important to you, and vice versa. And individualism is the epitome of free will. Free will is not a prediction engine, it's not an algorithm on Google or Amazon; it's the ability to share your thoughts and your stories with whomever wants to consume them, and in turn for you to consume theirs. What is important is our ability to discuss and present our views and listen to the thoughts of others.

Every moment of our day revolves around the idea of telling stories. So why should a select group of people in the world be the only ones with a soapbox or the keys to the printing press to tell their stories? Let everyone share their information, build their communities, and contribute to the conversation. I truly believe that most in society have only talked about Britney Spears and Ashton Kutcher because they were only spoken to in the past, not listened to. Not allowed to be a part of the conversation. Of course they threw their hands in the air and walked away. Now they are finally coming back to the discussion.

As someone born on the cusp of the digital transition, I can see both sides of the argument but I can definitively assure you that tomorrow is much better than yesterday. I am always on, always connected, always augmenting every single moment of my analog life and yet I am still capable of thinking or contemplating any number of existential questions. My brain works a little differently and the next generation's brains will work a little differently still. We shouldn't assume this is a bad thing. I for one hold a tremendous amount of excitement and optimism about how we will create and consume in the future. It's just the natural evolution of storytelling and information.


JARON LANIER
Musician, Computer Scientist; Pioneer of Virtual Reality

It is urgent to find a way to express a softer, warmer form of digital modernity than the dominant one Schirrmacher correctly perceives and vividly portrays.  The Internet was made up by people and stuffed with information by people, and there is no more information in it than was put in it.  That information has no meaning, or existence as information in the vernacular sense, except as it can be understood by an individual someday.  If Free Will is an illusion, then the Internet is doubly an illusion.

To continue to perceive almost supernatural powers in the Internet (an ascendant perception, as Schirrmacher accurately reports) is to cede the future to reactive religious fanatics.  Here is why:

The ideas Schirrmacher distills include the notion that free will is an illusion, while the Internet is driven by powers that are beyond any of us; essentially, that we don't have free will but the Internet does. If the message of modernity is "people don't exist, but computers do," then expect modernity to be rejected by most people. Those who currently like this formulation are the ones who think they will be the beneficiaries of it: the geeky, technical, educated elite. But they are kidding themselves.

Partisan passions and the "open" anonymous vision of the Internet promoted by the Pirates are so complementary, it's as if they were invented for each other.  The Pirates will only thrive briefly before they have super-empowered more fanatical groups.  

If the new world brought about by digital technologies is to enhance Darwinian effects in human affairs, then digital culture will devour itself, becoming an ouroboros that will tighten into a black hole and evaporate. Unless, that is, the Pirates can become immortal through technology before it is too late, before their numbers are overtaken, for instance, by the high birth rates of retro religious fanatics everywhere.  This race for immortality is not so hidden in the literature of digital culture.  The digital culture expressed by the Pirates is simultaneously nihilist and maniacal/egocentric.

My one plea to Schirrmacher is to shed the tone of inevitability.  It is absolutely worth resisting the trend he identifies.


GEORGE DYSON
Science Historian; Author, Darwin Among the Machines

Nine years after his "Wake-Up Call for Europe Tech," issued just as the informavores sat down to eat, Frank Schirrmacher is back, reminding us of the tendency to fall asleep after a heavy meal. All digital all the time may be too much of a good thing. Can we survive the deluge?

I see hope on the horizon. Analog computing! For real. The last we saw of analog computing, we were trying to get differential analyzers to solve problems that can be solved much more accurately, and much faster, digitally. Analog computing, we assumed, was as extinct as your grandfather's slide rule. Nonetheless, many things can be done better by analog computing than by digital computing, and analog is making a return.

Some of the most successful recent developments — Google, Facebook, Twitter, not to mention the Web as a whole — are effectively operating as large analog computers, although there remains a digital substrate underneath. They are solving difficult, ambiguous, real-world problems — Are you really my friend? What's important? What does your question mean? — through analog computation, and getting better and better at it, adaptation (and tolerance for noise and ambiguity) being one of analog computing's strong suits.
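One way to make that claim concrete (a toy sketch, not Google's production system): an iterative link-weighting loop in the spirit of the original PageRank paper, in which the "answer" is a vector of continuous values that settles gradually toward a fixed point and tolerates noisy, ambiguous input; analog-style behavior running on a digital substrate. The link graph below is invented.

```python
# Toy continuous ranking over an invented link graph: each page repeatedly
# passes a share of its weight along its outgoing links until the scores settle.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def rank(links, damping=0.85, iterations=50):
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}           # start from a uniform guess
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * score[page] / len(outgoing)  # weight flows along links
            for target in outgoing:
                new[target] += share
        score = new                                        # drift toward a fixed point
    return score

print(sorted(rank(links).items(), key=lambda kv: -kv[1]))  # "c" ends up weightiest
```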

When you are an informavore drowning in digital data, analog looks good.


DANIEL KAHNEMAN
Eugene Higgins Professor of Psychology, Princeton; Recipient, 2002 Nobel Prize in Economic Sciences


Very interesting interview, which is itself a nice example of what Schirrmacher is talking about: it should be read very quickly, to get a vague sense of unease, of possibilities, of permeable boundaries between self and others, between one's thoughts and those you get from others. You do get something out of it, and may find yourself thinking slightly differently because of it.

The interview vividly expresses the sense many of us are getting that when we are bathed in information (it is not really snippets of information, we need the metaphor of living in a liquid that is constantly changing in flavor and feel) we no longer know precisely what we have learned, nor do we know where our thoughts come from, or indeed whether the thoughts are our own or absorbed from the bath. The link with Bargh is also interesting, because John pushes the idea that we are driven from the outside and controlled by a multitude of cues of which we are only vaguely aware — we are bathing in primes.

Will all this change what it is like to be human? Will it change what consciousness is like? There must be people out there who study teenagers who have lived in this environment all their lives, and they should be the ones to tell us. The only teenagers I know well are my grandchildren, and that is not enough of a sample. They use computers a lot, but it has not made them very different. Of course they read much less, and they have a sense of how knowledge is organized that I can only envy — I keep being frustrated by how much better young people are at the task of searching.

Schirrmacher feels that the loss of the notion of free will may be dangerous, especially in Germany — I have a vague sense of what he is saying — perhaps this is a return to the old idea that psychoanalysis was causal in loosening the hold of morality. There really is a lot of stuff there.


John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

Alexandra Zukerman, Assistant Editor
contact: [email protected]
Copyright © 2009 By Edge Foundation, Inc
All Rights Reserved.
