

Science Historian; Author, Darwin Among the Machines


In the North Pacific ocean, there were two approaches to boatbuilding. The Aleuts (and their kayak-building relatives) lived on barren, treeless islands and built their vessels by piecing together skeletal frameworks from fragments of beach-combed wood. The Tlingit (and their dugout canoe-building relatives) built their vessels by selecting entire trees out of the rainforest and removing wood until there was nothing left but a canoe.

The Aleut and the Tlingit achieved similar results — maximum boat / minimum material — by opposite means. The flood of information unleashed by the Internet has produced a similar cultural split. We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now, we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.

I was a hardened kayak builder, trained to collect every available stick. I resent having to learn the new skills. But those who don't will be left paddling logs, not canoes.

Vice President, Search Products & User Experience, Google


It's not what you know, it's what you can find out. The Internet has put resourcefulness and critical thinking at the forefront, relegating the rote memorization of facts to mental exercise or enjoyment. Because of this abundance of information and the new emphasis on resourcefulness, the Internet creates a sense that anything is knowable or findable — as long as you can construct the right search, find the right tool, or connect to the right people. The Internet empowers better decision-making and a more efficient use of time.

Simultaneously, it also leads to a sense of frustration when the information doesn't exist online. What do you mean the store hours aren't posted anywhere? Why can't I see a particular page of this book? And if not verbatim, why hasn't anyone quoted it even in part? What do you mean that page isn't available? Page not found?

The Internet can facilitate an incredible persistence and availability of information, but given the Internet's adolescence, all of the information simply isn't there yet. I find that in some ways my mind has evolved to this new way of thinking, relying on the information's existence and availability, so much so that it's almost impossible to accept that a piece of information isn't findable simply because it isn't online.

The Web has also enabled amazing dynamic visualizations, where an ideal presentation of information is constructed — a table of comparisons or a data-enhanced map, for example. These visualizations — be it news from around the world displayed on a globe or a sortable table of airfares — can greatly enhance our understanding of the world or our sense of opportunity. We can understand in an instant what would have taken months to create just a few short years ago. Yet, the Internet's lack of structure means that it is not possible to construct these types of visualizations over any or all data. To achieve true automated, general understanding and visualization, we will need much better machine learning, entity extraction, and semantics capable of operating at vast scale.

On that note — and in terms of future Internet innovation, the important question may not be how the Internet is changing how we think but instead how the Internet is teaching itself to think.


Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon; Recording Artist


I notice that some radical social experiments which would have seemed Utopian to even the most idealistic anarchist 50 years ago are now working smoothly and without much fuss. Among these are open source development, shareware and freeware, Wikipedia, MoveOn, and UK Citizens Online Democracy.

I notice that the Net didn't free the world in quite the way we expected — repressive regimes can shut it down, and liberal ones can use it as a propaganda tool. On the upside, I notice that the variable trustworthiness of the Net has made people more sceptical about the information they get from all other media.

I notice that I now digest my knowledge as a patchwork drawn from a wider range of sources than I used to. I notice too that I am less inclined to look for joined-up finished narratives and more inclined to make my own collage from what I can find. I notice that I read books more cursorily — scanning them in the same way that I scan the Net — 'bookmarking' them.

I notice that the turn-of-the-century dream of Professor Darryl Macer to make a map of all the world's concepts is coming true autonomously — in the form of the Net.

I notice that I correspond with more people but at less depth. I notice that it is possible to have intimate relationships that exist only on the Net — that have little or no physical component. I notice that it is even possible to engage in complex social projects — such as making music — without ever meeting your collaborators. I am unconvinced of the value of these.

I notice that the idea of 'community' has changed — whereas that term used to connote some sort of physical and geographical connectedness between people, it can now mean 'the exercise of any shared interest'. I notice that I now belong to hundreds of communities — the community of people interested in active democracy, the community of people interested in synthesizers, in climate change, in Tommy Cooper jokes, in copyright law, in a cappella singing, in loudspeakers, in pragmatist philosophy, in evolution theory, and so on.

I notice that the desire for community is sufficiently strong for millions of people to belong to entirely fictional communities such as Second Life and World of Warcraft. I worry that this may be at the expense of First Life.

I notice that more of my time is spent in words and language — because that is the currency of the Net — than it was before. My notebooks take longer to fill. I notice that I mourn the passing of the fax machine, a more personal communication tool than email because it allowed the use of drawing and handwriting. I notice that my mind has reset to being primarily linguistic rather than, for example, visual.

I notice that the idea of 'expert' has changed. An expert used to be 'somebody with access to special information'. Now, since so much information is equally available to everyone, the idea of 'expert' becomes 'somebody with a better way of interpreting'. Judgement has replaced access.

I notice that I have become a slave to connectedness — that I check my email several times a day, that I worry about the heap of unsolicited and unanswered mail in my inbox. I notice that I find it hard to get a whole morning of uninterrupted thinking. I notice that I am expected to answer emails immediately, and that it is difficult not to. I notice that as a result I am more impulsive.

I notice that I more often give money in response to appeals made on the Net. I notice that 'memes' can now spread like virulent infections through the vector of the Net, and that this isn't always good.

I notice that I sometimes sign petitions about things I don't really understand because it is easy. I assume that this kind of irresponsibility is widespread.

I notice that everything the Net displaces reappears somewhere else in a modified form. For example, musicians used to tour to promote their records, but, since records stopped making much money due to illegal downloads, they now make records to promote their tours. Bookstores with staff who know about books and record stores with staff who know about music are becoming more common.

I notice that, as the Net provides free or cheap versions of things, 'the authentic experience' — the singular experience enjoyed without mediation — becomes more valuable. I notice that more attention is given by creators to the aspects of their work that can't be duplicated. The 'authentic' has replaced the reproducible.

I notice that hardly any of us have thought about the chaos that would ensue if the Net collapsed.

I notice that my daily life has been changed more by my mobile phone than by the Internet.

President, The Royal Society; Professor of Cosmology & Astrophysics; Master, Trinity College, University of Cambridge; Author, Our Final Century: The 50/50 Threat to Humanity's Survival


In 2002, three Indian mathematicians (Manindra Agrawal and his two students, Neeraj Kayal and Nitin Saxena) invented a fast algorithm for testing whether large numbers are prime — an advance with clear relevance to cryptography. They posted their results on the Web. Such was the interest that within just a day, 20,000 people had downloaded the work, which became the topic of hastily convened discussions in many centres of mathematical research around the world.

This episode — bringing instant global recognition to two young Indian students — offers a stark contrast with the struggles of a young Indian genius a hundred years ago. Srinivasa Ramanujan, a clerk in Madras, mailed long screeds of mathematical formulae to G. H. Hardy, a professor at Trinity College, Cambridge. Fortunately, Hardy had the percipience to recognise that Ramanujan was not the typical green-ink scribbler who finds numerical patterns in the Bible or the pyramids, but that his writings betrayed preternatural insight. Hardy arranged for Ramanujan to come to Cambridge, and did all he could to foster his genius — sadly, however, culture shock and poor health led him to an early death.

The Internet enables far wider participation in front-line science; it levels the playing field between researchers in major centres and those in relative isolation, hitherto handicapped by inefficient communication. It has transformed the way science is communicated and debated. More fundamentally, it changes how research is done, what might be discovered, and how students learn.

And it allows new styles of research. For example, in the old days, astronomical information, even if in principle publicly available, was stored on delicate photographic plates: these were not easily accessible, and tiresome to analyse. Now, such data (and, likewise, large datasets in genetics or particle physics) can be accessed and downloaded anywhere. Experiments, and natural events such as tropical storms or the impact of a comet on Jupiter, can be followed in real time by anyone who is interested. And the power of huge computing networks can be deployed on large data sets.

Indeed, scientific discoveries will increasingly be made by 'brute force' rather than by insight. IBM's 'Deep Blue' beat Kasparov not by thinking like him, but by exploiting its speed to explore a huge variety of options. There are some high-priority scientific quests — for instance, the recipe for a room-temperature superconductor, or the identification of key steps in the origin of life — which may yield most readily neither to insight nor to experiment, but to exhaustive computational searches.

Paul Ginsparg's arXiv.org archive transformed the literature of physics, establishing a new model for communication over the whole of science. Far fewer people today read traditional journals. These have so far survived as guarantors of quality. But even this role may soon be trumped by a more informal system of quality control, signaled by the approbation of discerning readers (by analogy with the grading of restaurants by gastronomic critics), by blogs, or by Amazon-style reviews.

Clustering of experts in actual institutions will continue, for the same reason that high-tech expertise congregates in Silicon Valley and elsewhere. But the actual progress of science will be driven by ever more immersive technology where propinquity is irrelevant. Traditional universities will survive insofar as they offer mentoring and personal contact to their students. But it's less clear that there will be a future for the 'mass university' where the students are offered little more than a passive role in lectures (generally of mediocre quality) with minimal feedback. Instead, the Internet will offer access to outstanding lectures — and in return will offer the star lecturers (and perhaps the best classroom teachers too) a potentially global reach.

And it's not just students, but those at the end of their careers, whose lives the Internet can transformatively enhance. We oldies, as we become less mobile, will be able to immerse ourselves — right up until the final switch-off, or until we lose our wits completely — in an ever more sophisticated cyber-world allowing virtual travel and continuing engagement with the world.

Editor, The Feuilleton (Arts and Essays), German Daily Newspaper Sueddeutsche Zeitung, Munich


I think faster now. The Internet has freed me somewhat — of some of the 20th century's burdens. The burden of commuting. The burden of coordinating communication. The burden of traditional literacy. I don't think the Internet would be of much use if I hadn't carried those burdens to excess all through my life. If continually speeding up thinking constitutes changing the way I think, though, the Internet has done a marvelous job.

I wasn't an early adopter, but the process started early. I didn't quite understand yet what would come upon us when Marvin Minsky told me one afternoon in 1989 at MIT's Media Lab that the most important trait of a computer wouldn't be its power, but what it would be connected to. A couple of years later I stumbled upon the cyberpunk scene in San Francisco. People were popping smart drugs (which didn't do anything), Timothy Leary declared virtual reality the next psychedelics (which never panned out), and Todd Rundgren warned of a coming overabundance of creative work without a parallel rise in great ideas (which is now reflected in the laments about the rise of the amateur). It was still the old underground running the new emerging culture. This new culture was driven by thought rather than art, though. It's also where I met Cliff Figallo, who ran a virtual community called The Well. He introduced me to John Perry Barlow, who had just started a foundation called the Electronic Frontier Foundation. The name said it all. There was a new frontier.

It would still take me a few more years to grasp it. One stifling evening in a rented apartment in downtown Dakar, my photographer and I disassembled a phone line and a modem to circumvent some incompatible jacks and get our laptop to dial up a node in Paris. It probably saved us a good week of research in the field. Now my thinking started to take on the speed I had sensed in Boston and San Francisco. Continually freeing me of the aforementioned burdens, the Internet has allowed me to focus even more on the tasks expected of me as a journalist — finding context and meaning, and a way to communicate complex topics in the simplest of ways.

One important development has allowed this to happen: possibly the greatest trait the Internet has developed over the past few years is that it has become inherently boring. Gone are the adventurous days of using a pocket knife to log onto Paris from Africa. Even in remote places of this planet, logging onto the Net means merely turning on your machine. This paradigm reigns all through the Web. Twitter is one of the simplest Internet applications ever developed. Still, it has sped up my thinking in ever more ways. Facebook in itself is dull, but it has created new networks not possible before. Integrating all media into a blog has become so easy that grammar school kids can do it, so that freeform forum has become a great place to test out new possibilities. I don't think about the Internet anymore. I just use it.

All this might not constitute a change in thinking, though. I haven't changed my mind or my convictions because of the Internet. I haven't had any epiphanies while sitting in front of a screen. The Internet so far has given me no memorable experiences, though it might have helped to usher some along. It has always been people, places and experiences that have changed the way I think and provided me with a wide variety of memorable experiences.

Editor-in-Chief, Nature


For better or worse, the Internet is changing when I think — night-time ideas can be instantly acted on. But much more importantly, the Internet has immeasurably broadened my scope of consideration and enhanced my speed of access to relevant stuff. Frustrations arise, above all, where these are constrained — and there's the rub.

We are in sight of technologies that can truly supersede paper, retaining the portability, convenience and format variety of that medium. Instant payment for added-value content will become easier and, indeed, will be taken for granted in many contexts.

But finding the stuff will remain a challenge. Brands, both publishers' and others', if deployed in a user-friendly way, will by their nature assist those seeking particular types of content. But content within established brands is far from an adequate representation of what matters, and that's why robust and inclusive indexing systems are so important.

I remain uneasy that biologists worldwide are so dependent on a literature-indexing system wholly funded by US taxpayers: PubMed. Nevertheless, it's extraordinarily valuable, and works in the interests not only of researchers but also of publishers, by making their work accessible without undermining their business models.

I emphasise that last point with good reason. One of the worst (i.e., self-defeatingly short-sighted) acts of 'my' industry occurred in the early 2000s. Congress, lobbied by publishers, and seemingly ignorant of the proven virtues of PubMed, rejected support for an equivalent search infrastructure, PubSCIENCE, established by the US Department of Energy as an index for physical sciences and energy research. The lobbyists argued, wrong-headedly, that it competed with private-sector databases. It was abandoned in 2002. Publishers have lost opportunities as a result, as has everyone else. Energy research, after all, has never been more urgent nor more in the US's and the world's public interest.

PubMed imposes overly conservative restrictions on what it will index, but is a beacon nevertheless. Anyone in the natural sciences who, like me, has taken an active interest in the social sciences knows how hopelessly unfindable by comparison is that literature, distributed as it is amongst books, reports and unindexed journals. Google Scholar is in some ways valuable, providing access also to some "grey" literatures, but its algorithms are a law unto themselves and, in my experience, miss some of the literature. And so often the books and reports are themselves difficult to obtain.

There are foundations and other funders potentially more enlightened than Congress when it comes to supporting literature digitization and indexing. And universities are developing online repositories of their outputs, though with limited success.

Whatever works! Those wishing to promote the visibility and, dare one say, usefulness of their own work and of their disciplines should hotly pursue online availability of all types of substantive texts and, crucially, inclusive indexing.

Communications Expert; Author, Smart Mobs


Digital media and networks can only empower the people who learn how to use them — and pose dangers to those who don't know what they are doing. Yes, it's easy to drift into distraction, fall for misinformation, allow attention to fragment rather than focus, but those mental temptations pose dangers only for the untrained mind. Learning the mental discipline to use thinking tools without losing focus is one of the prices I am glad to pay to gain what the Web has to offer.

Those people who do not gain fundamental literacies of attention, crap detection, participation, collaboration, and network awareness are in danger of all the pitfalls critics point out — shallowness, credulity, distraction, alienation, addiction. I worry about the billions of people who are gaining access to the Net without the slightest clue about how to find knowledge and verify it for accuracy, how to advocate and participate rather than passively consume, how to discipline and deploy attention in an always-on milieu, how and why to use those privacy protections that remain available in an increasingly intrusive environment.

I have concluded that the realities of my own life as a professional writer — if the words didn't go out, the money didn't come in — drove me to evolve a set of methods and disciplines. I know that others have mastered far beyond my own practice the mental habits that I've stumbled upon, and I suspect that learning these skills is less difficult than learning long division. I urge researchers and educators to look more systematically where I'm pointing.

When I started out as a freelance writer in the 1970s, my most important tools were a library card, a typewriter, a notebook, and a telephone. In the early 1980s, I became interested in the people at Xerox Palo Alto Research Center who were using computers to edit text without physically cutting, pasting, and retyping pages.

Through PARC I discovered Douglas Engelbart, who had spent the first decade of his career trying to convince somebody, anybody, that using computers to augment human intellect was not a crazy idea. Engelbart set out in the early 1960s to demonstrate that computers could be used to automate low-level cognitive support tasks like cutting, pasting, revising text, and also to enable intellectual tools like the hyperlink that weren't possible with Gutenberg-era technology.

He was convinced that this new way to use computers could lead to "increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble." Important caveats and unpredicted side-effects notwithstanding, Engelbart's forecasts have come to pass in ways that surprised him. What did not surprise him was the importance of both the know-how and how-to-know that unlock the opportunities afforded by augmentation technology.

From the beginning, Engelbart emphasized that the hardware and software created at his Stanford Research Institute laboratory, from the mouse to the hyperlink to the word processor, were part of a system that included "humans, language, artifacts, methodology and training." Long before the Web came along, Engelbart was frustrated that so much progress had been made in the capabilities of the artifacts, but so little study had been devoted to advancing the language, methodology and training — the literacies that necessarily accompany the technical capabilities.

Attention is the fundamental literacy. Every second I spend online, I make decisions about where to spend my attention. Should I devote any mindshare at all to this comment or that headline? — a question I need to answer each time an attractive link catches my eye. Simply becoming aware of the fact that life online requires this kind of decision-making was my first step in learning to tune a fundamental filter on what I allow into my head — a filter that is under my control only if I practice controlling it. The second level of decision-making is whether I want to open a tab on my browser because I decided that this item will be worth my time tomorrow. The third decision: do I bookmark this site because I am interested in the subject and might want to reference it at some unspecified future time? Online attention-taming begins with what meditators call "mindfulness" — the simple, self-influencing awareness of how attention wanders.

Life online is not solitary. It's social. When I tag and bookmark a Website, a video, an image, I make my decisions visible to others. I take advantage of similar knowledge curation undertaken by others when I start learning a topic by exploring bookmarks, find an image to communicate an idea by searching for a tag. Knowledge sharing and collective action involve collaborative literacies.

Crap detection — Hemingway's name for what digital librarians call credibility assessment — is another essential literacy. If all schoolchildren could learn one skill before they go online for the first time, I think it should be the ability to find the answer to any question and the skills necessary to determine whether the answer is accurate or not.

Network awareness, from the strength of weak ties and the nature of small-world networks to the power of publics and the how and why of changing Facebook privacy settings, would be the next literacy I would teach, after crap detection. Networks aren't magic, and knowing the principles by which they operate confers power on the knowledgeable. How could people NOT use the Internet in muddled, frazzled, fractured ways when hardly anybody instructs anybody else about how to use the Net salubriously? It is inevitable that people will use the Net in ways that influence how they think and what they think.

It is not inevitable that these influences will be destructive. The health of the online commons will depend on whether more than a tiny minority of Net users become literate Netizens.

Catalyst, Information Technology Startups, EDventure Holdings; Former Chairman, Electronic Frontier Foundation and ICANN; Author, Release 2.1


I love the Internet. It's a great tool precisely because it is so content- and value-free. Anyone can use it for his own purposes, good or bad, big or small, trivial or important. It impartially transmits all kinds of content, one-way or two-way or broadcast, public or private, text or video or sound or data.

But it does have one overwhelming feature: immediacy. (And when the immediacy is ruptured, its users gnash their teeth.) That immediacy is seductive: You can get instant answers, instant responses. If you're lonely, you can go online and find someone to chat with. If you want business, you can send out an e-mail blast and get at least a few responses — even a .002 response rate means 200 messages back (including some hate mail) from a list of 100,000. If you want to do good, there are thousands of good causes competing for your attention at the click of your mouse.

But sometimes I think much of what we get on the Internet is empty calories. It's sugar — short videos, pokes from friends, blog posts, Twitter posts (even blogs seem longwinded now), pop-ups and visualizations… Sugar is so much easier to digest, so enticing… and ultimately, it leaves us hungrier than before.

Worse than that, over a long period, many of us are genetically disposed to lose our capability to digest sugar if we consume too much of it. It makes us sick long-term, as well as giving us indigestion and hypoglycemic fits. Could that be true of information sugar as well? Will we become allergic to it even as we crave it? And what will serve as information insulin?

In the spirit of brevity if not immediacy, I leave it to the reader to ponder these questions.

Co-founder of Wikipedia and Citizendium


The instant availability of an ocean of information has been an epoch-making boon to humanity. But has the resulting information overload also deeply changed how we think? Has it changed the nature of the self? Has it even — as some have suggested — radically altered the relationship of the individual and society? These are important philosophical questions, but vague and slippery, and I hope to clarify them.

The Internet is changing how we think, it is suggested. But how is it, precisely? One central feature of the "new mind" is that it is spread too thin. But what does that mean?

In functional terms, being spread too thin means we have too many Websites to visit, we get too many messages, and too much is "happening" online and in other media that we feel compelled to take on board. Many of us lack effective strategies for organizing our time in the face of this onslaught. This makes us constantly distracted and unfocused, and less able to perform heavy intellectual tasks. Among other things, or so some have confessed, we cannot focus long enough to read whole books. We feel unmoored and we flow along helplessly wherever the fast-moving digital flood carries us.

We do? Well — some of us do, evidently.

Some observers speak of "where we are going," or of how "our minds" are being changed by information overload, apparently despite ourselves. Their discussions make erstwhile free agents mere subjects of powerful new forces, and the only question is where those forces are taking us. I don't share the assumption here. When I read the title of Nick Carr's essay, "Is Google Making Us Stupid?" I immediately thought, "Speak for yourself." It seems to me that in discussions like Carr's, it is assumed that intellectual control has already been ceded — but that strikes me as being a cause, not a symptom, of the problem Carr bemoans. After all, the exercise of freedom requires focus and attention, and the ur-event of the will is precisely focus itself. Carr unwittingly confessed for too many of us a moral failing, a vice; the old name for it is intemperance. (In the older, broader sense, contrasted with sophrosyne, moderation or self-control.) And, as with so much of vice, we want to blame it on anything but ourselves.

Is it really true that we no longer have any choice but to be intemperate in how we spend our time, in the face of the temptations and shrill demands of networked digital media? New media are not that powerful. We still retain free will, which is the ability to focus, deliberate, and act on the results of our own deliberations. If we want to spend hours reading books, we still possess that freedom. Only philosophical argument could establish that information overload has deprived us of our agency. The claim at root is philosophical, not empirical.

My interlocutors might cleverly reply that we now, in the age of Facebook and Wikipedia, do still deliberate, but collectively. In other words, for example, we vote stuff up or down on Digg, del.icio.us, and Slashdot, and then we might feel ourselves obligated — if we're participating as true believers — to pay special attention to the top-voted items. Similarly, we attempt to reach "consensus" on Wikipedia, and — again, if participating as true believers — endorse the end result as credible. To the extent that our time is thus directed by social networks, engaged in collective deliberation, then we are subjugated to a "collective will," something like Rousseau's notion of a general will. To the extent that we plug in, we become merely another part of the network. That, anyway, is how I would reconstruct the collectivist-determinist position that is opposed to my own individualist-libertarian one.

But we obviously have the freedom not to participate in such networks. And we have the freedom to consume the output of such networks selectively, and holding our noses — to participate, we needn't be true believers. So it is very hard for me to take the "woe is us, we're growing stupid and collectivized like sheep" narrative seriously. If you feel yourself growing ovine, bleat for yourself.

I get the sense that many writers on these issues aren't much bothered by the un-focusing, de-liberating effects of joining the Hive Mind. Don Tapscott has suggested that the instant availability of information means we don't have to "memorize" anything anymore — just consult Google and Wikipedia, the brains of the Hive Mind. Clay Shirky seems to believe that in the future we will be enculturated not by reading dusty old books but in something like online fora, plugged into the ephemera of a group mind, as it were. But surely, if we were to act as either of these college teachers recommend, we'd become a bunch of ignoramuses. Indeed, perhaps that's what social networks are turning too many kids into, as Mark Bauerlein argues cogently in The Dumbest Generation. (For the record, I've started homeschooling my own little boy.)

The issues here are much older than the Internet. They echo the debate between progressivism and traditionalism in the philosophy of education: should children be educated primarily so as to fit in well in society, or should the focus be on training minds for critical thinking and filling them with knowledge? For many decades before the advent of the Internet, educational progressivists insisted that, in our rapidly changing world, knowing mere facts is not what is important, because knowledge quickly becomes outdated; rather, what matters is being able to collaborate and solve problems together. Social networks have reinforced this ideology by seeming to make knowledge and judgment collective functions. But the progressivist position on the importance of learning facts and training individual judgment withers under scrutiny, and, pace Tapscott and Shirky, the events of the last decade have not made it more durable.

In sum, there are two basic issues here. Do we have any choice about ceding control of the self to an increasingly compelling "Hive Mind"? Yes. And should we cede such control, or instead strive, temperately, to develop our own minds very well and direct our own attention carefully? The answer, I think, is obvious.

Professor, Harvard University, Director, Personal Genome Project.


If time did permit, I'd begin with the "How" of "How is the Internet changing the way that we think?" Not "how much?" or "in what manner?", but "for what purpose?" "To be, that is the question."

Does the Internet pose an existential risk to all known intelligence in the universe or a path to survival? Yes; we see a sea change from I-Ching to e-Change.

Yes; it (IT) consumes 100 billion watts, but this is only 0.7% of human power consumption.

Yes; it might fragment the attention span of the Twitter generation. (For my world, congenitally shattered by narcolepsy and dyslexia, reading/chatting online in 1968 was no big deal). 

Before cuneiform, we revered the epic poet. Before Gutenberg, we exalted good handwriting. We still gasp at feats of linear memory, Lu Chao reciting 67,890 digits of π or Kim Peek's recall of 12,000 books (60 gigabytes), even though these feats are pathetic compared to the Internet's 10 exabytes (set to double within 5 years).

But the Internet isn't amazing for storage (or math); it is amazing for connections. Going from footnotes to hypertext to search engines dramatically opens doors for evidence-based thinking, modeling, and collaboration. It transforms itself from mere text to Goggles for places and Picasa for faces.

But still it can't do things that Einstein and Curie could. Primate brains changed dramatically from early apes at 400 cc to Habilis at 750 cc to Neanderthal at 1500 cc.

"How did THAT change the way that we think?" and "For what purpose?" How will we think to rebuild the ozone after the next nearby supernova? or nudge the next earth-targeted asteroid? Or contain a pandemic in our dense and well-mixed population? And how will we prepare for those rare events by solving today's fuel, food, psychological and poverty problems, which prevent 6.7 billion brains from achieving our potential? The answer is blowin' in the Internet wind.

Physicist, Harvard University; Author, Warped Passages


The plural of anecdotes is not data — but anecdotes are all I have. We don't yet understand how we think or what it means to change the way we think. Scientists are making inroads and ultimately hope to understand much more. But right now all my fellow contributors and I can do is make observations and generalize.

We don't even know if the Internet changes the way we read. It certainly changes how we read, as it changes how we do many aspects of our work. Maybe it ultimately changes how our brains process written information, but we don't yet know. Still, the question of how the Internet changes how we think is an enormous problem, one that anecdotes might help us understand. So I'll tell a couple (if I can focus long enough to do so).

Someone pointed out to me once that he, like me, never uses a bookmark in a book. I always attributed my negligence to disorganization and laziness; the few times I attempted to use a bookmark I promptly misplaced it. But what I realized after this was pointed out is that not using bookmarks was my choice. It doesn't make sense to find a place in a book that you technically have read but that is so far from your memory that you don't remember having read it. By not using a bookmark, I was guaranteed to return to the last continuous section of text that actually made a dent in my brain.

With the Internet we tend to absorb multiple pieces of information about whatever topic we decide we're interested in. Online, we search. In fact Marvin Minsky recently told me that he prefers reading on an electronic device in general because he values the search function. And I certainly often do too. In fact I tend to remember the answers to the pointed questions I ask on the Internet better than what I absorb when reading a long book. But there is also the danger that something valuable about reading in a linear fashion, absorbing information internally, and processing it as we go along is lost with the Internet or even electronic devices, where it is too easy to cheat by searching.

One aspect of reading a newspaper that I've already lost a lot of is the randomness that comes with reading in print rather than online. Today I read the articles that I know will interest me when I'm staring at a computer screen and have to click to get to the actual article. When I read print papers (something I do less and less), my eyes are sometimes drawn to an interesting piece, or even an advertisement, that I would never have chosen to look for. Despite its breadth, and the fact that I can be so readily distracted, I still use the Internet in a targeted fashion.

So why don't I stick to print media? The Internet is great for disorganized people like me who don't want to throw something away for fear of losing something valuable they missed. I love knowing everything is still on line and that I can find it. I hate newspapers piling up. I love not having to be in an office to check books. I can make progress at home, on a train, or on a plane (when there is enough room between rows to open my computer). Of course as a theoretical physicist I could do that before as well — it just meant carrying a lot more weight.

And I do often take advantage of the Internet's breadth, even if my use is a little more directed. A friend might send me to a Web site. Or I might just need or want to learn about some new topic. The Internet also allows me to be bolder. I can quickly get up to speed on a topic I previously knew nothing about. I can check facts and I can learn others' points of view on any subject I decide is interesting. I can write about subjects I wouldn't have dared to touch before, since I can quickly find the context in a way that was previously much more difficult.

Which brings me back to the idea of the quote "the plural of anecdotes is not data." I thought I should check who deserves the attribution. It's not entirely clear but it might go back to a pharmacologist named Frank Kotsonis, who was writing about the effects of aspartame. I find this particularly funny because I stopped consuming aspartame due to my personal anecdotal evidence that it made me focus less well. But I digress.

Here's the truly funny aspect of the quote I discovered with my Google search. The original quote from the Berkeley political scientist Raymond Wolfinger was exactly the opposite, "The plural of anecdotes is data." I'm guessing this depends on what kind of science you do.

The fact is that the Internet provides a wealth of information. It doesn't yet organize it all or process it or arrange for scientific conclusions. The Internet allows us (as a group) to believe both facts and their opposites; we'll all find supporting evidence or opinions.

But we can attend talks without being physically present and work with people we've never met in person. We have access to all physics papers as they are churned out but we still have to figure out which are interesting and process what they say.

I don't know how differently we think. But we certainly work differently and do so at a different pace. We can learn many anecdotes that aren't yet data.

Though all those distracting emails and Web sites can make it hard to focus!

Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings


When I came to the Center for Advanced Study in Palo Alto in the fall of 1989, I peered into my new cabin-like office. What struck me was the complete absence of technology. No telephone, e-mail, or other communication facilitators. Nothing could interrupt my thoughts. Technology could be accessed outside the offices whenever one wished, but it was not allowed to enter through the door of its own accord. This protective belt was deliberately designed to make sure that scholars had time to think, and to think deeply.

In the meantime, the Center, like other institutions, has surrendered to technology. Today, people's minds are in a state of constant alert, waiting for the next e-mail, the next SMS, as if these will deliver the final, earth-shattering insight. I find it surprising that scholars in the "thinking profession" would so easily let their attention be controlled from the outside, minute by minute, just like letting a cell phone interrupt a good conversation. Were messages to pop up on my screen every second, I would not be able to think straight. Maintaining the Center's spirit, I check my email only once a day, and keep my cell phone switched off unless I make a call. An hour or two without interruption is heaven for me.

But the Internet can be used in an active rather than a reactive way, that is, not letting it determine how long we can think and when we have to stop. The question is, does an active use of the Internet change our way of thinking? I believe so. The Internet shifts our cognitive functions from searching for information inside the mind towards searching outside the mind. It is not the first technology to do so.

Consider the invention that changed human mental life more than anything else: writing, and subsequently, the printing press. Writing made analysis possible; with writing, one can compare texts, which is difficult in an oral tradition. Writing also made exactitude possible, as in higher-order arithmetic — without any written form, these mental skills quickly meet their limits. But writing makes long-term memory less important than it once was, and schools have largely replaced the art of memorization by training in reading and writing.

Most of us can no longer memorize hour-long folktales and songs as in an oral tradition. The average modern mind has a poorly trained long-term memory, forgets rather quickly, and searches for information more in outside sources such as books than inside memory. The Internet has amplified this trend of shifting knowledge from the inside to the outside, and has taught us new strategies for finding what we want using search engines.

This is not to say that before writing, the printing press, and the Internet, our minds did not have the ability to retrieve information from outside sources. But these sources were other people, and the skills were social, such as the art of persuasion and conversation. To retrieve information from Wikipedia, in contrast, social skills are no longer needed.

The Internet is essentially a huge storage room of information, and we are in the process of outsourcing information storage and retrieval from mind to computer, just as many of us have already outsourced the ability to do mental arithmetic to the pocket calculator. We may lose some skills in this process, such as the ability to concentrate over an extended period of time and to store large amounts of information in long-term memory, but the Internet is also teaching us new skills for accessing information.

It is important to realize that mentality and technology are one extended system. The Internet is a kind of collective memory, to which our minds will adapt until a new technology eventually replaces it. Then we will begin outsourcing other cognitive abilities, and hopefully, learn new ones.
