
Gerontologist; Chief Science Officer, SENS Foundation; Author, Ending Aging


The Net changes the way I think in a bunch of ways that apply to more or less everyone, and especially to the group that has been asked to write these essays, but there's one impact it has on me that is probably rarer. And it's not a change, but an avoidance of a change.

Before I switched to biology, I was a computer scientist; I have been using email regularly since I was a student in the early 1980s. And I like email — a lot. Email lets you think before you speak, on those not-so-rare occasions when doing so would be a good idea. Email waits patiently for you to read it, until you feel like being interrupted — and the sender doesn't get offended if that isn't until a few hours, or even a day, after they sent it. Email lets you speak in real sentences when you want to and not when you don't feel the need.

What might it be that I'm thinking of, that so offensively lacks all these qualities? No, not face-to-face interaction: I am as gregarious as anyone. Not snail mail either, though I certainly admit that I use that medium far more rarely now than I used to a decade or two ago. No: the relevant object of my distaste is that greatest curse of the 21st century, the cell phone.

It would take more words than we have been allowed for these pieces to do full justice to my loathing of the cell phone, so I won't try. But you can probably guess that it doesn't stop at the irritation caused when someone's phone goes off in the middle of a lecture. A lot of it is the sheer rudeness that cell phones force their owners to commit in situations where no such problem would otherwise exist: either abruptly suspending a face-to-face conversation to take a call, or summarily telling someone to call back because the person you're talking to is more important. But most of it is the contrast with the civilised, relaxed, entirely adequate form of communication that I so prefer: email.

Yes, yes, you're going to give me the line that one can always turn one's phone off. That's nonsense. If there's one thing worse than being rung when you don't want, it's having someone ask you to ring them and doing so, and just getting their bleeping (excuse the pun) voicemail. Hello? If I wanted to tell you something without hearing your response at once, I'd have bleeping sent you a bleeping email as I originally wanted to! I have never been willing to put others in a situation I don't wish to be in myself, and I'm not going to start now.

So, what do I mean by "rescuing" in my title? Simply this: as the cell phone has become more and more ubiquitous, and lack of it more and more surprising — it seems close to overtaking the driving license in that regard in the USA, and in mainland Europe the addiction is even worse — I have come under more and more pressure to conform. And so far, I have resisted — and there is every sign that I shall continue to do so. How? Simply because I'm very, very well-behaved with email. With the few percent of emails I receive to which I want to take time to compose a reply, I take that time — but for the great majority, I'm fast. Really fast. It's the best of both worlds: negligible slowdown in communication, without the loss of that resource so rare and valuable to the busy high-achiever, occasional but reliable solitude. And also without the other drawbacks I've mentioned. Put simply: I'm easy enough to interact with using email that people have let me be. If it didn't exist, or if it were not so ubiquitous, I'd have been forced long ago to submit to the tyranny of the cell phone, and I would be an altogether less nice person to know as a result.

Editor, Infectious Greed; Senior Fellow, Kauffman Foundation


Three friends have told me recently that during their just-completed holidays they unplugged from the Internet and had big, deep thoughts. This worries me. First, three data points means it's a trend, so maybe I should be doing it. Second, I wonder if I could disconnect from the Internet long enough to have big, deep thoughts. Third, like most people I know, I worry that even if I disconnect long enough, my info-krill-addled brain is no longer capable of big, deep thoughts (which I will henceforth call BDTs).

Could I quit? At some level it seems a silly question, like asking how I feel about taking a breathing hiatus, or if on Tuesdays I would give up gravity. The Internet no longer feels voluntary when it comes to thinking. Instead, it feels more like the sort of thing that, when you make a conscious effort to stop doing it, causes bad things to happen. As a kid I once swore off gravity and jumped from a barn hay mow, resulting in a sprained ankle. Similarly, a good friend of mine sometimes asks fellow golfers before a swing whether they breathe in or breathe out. The next swing is inevitably horrible as the golfer sends a ball screaming into receptive underbrush.

Could I quit the Internet if it meant I would have more BDTs? Sure, I suppose I could, but I'm not convinced it would happen. First, the Internet is, for me, a kind of internal cognition combustion engine, something that dramatically accelerates my ability to travel vast intellectual landscapes. Without it, it would be much more difficult to compare, say, theories about complexity, cell phones and bee colony collapse disorder rather than writing an overdue paper, or to count hotel rooms in default in California versus Washington state. (In case you're curious, there are roughly twice as many defaulted hotel rooms in California as there are total hotel rooms in Seattle.)

In saying I could quit, but not quitting (even if quitting meant more BDTs), I could be accused of cynicism. I get to tell myself I could quit and have BDTs, without ever testing whether, if I did quit, I would actually have said thoughts. That has a great deal of appeal, not least because I get the frisson of contemplating BDTs without actually going to the trouble of a) giving up the Internet, and b) seeing if I actually have the aforementioned thoughts.

Because like most people I know, I worry noisily and loudly that the Internet has made me incapable of having BDTs. I feel sure that I used to have such things, but for some reason I no longer do. Maybe the Internet has damaged me — I've informed myself to death! — to the point that I don't know what big, deep thoughts are, or that the brain chemicals formerly responsible for their emergence are now doing something else. Then again, this smacks of historical romanticism, like remembering the skies as always being blue and summers as eternal when you were eight years old.

So, as much as I kind of want to believe people who say they have big, deep thoughts when they disconnect from the web, I don't trust them. It reminds me of a doctor declaring herself/himself Amish for the day, and then heading from New York to Boston by horse & carriage with a hemorrhaging patient. Granted, you could do it, and some patients might even survive, but it isn't prudent or necessary. It seems instead a kind of public exercise in macho symbolism, like Iggy Pop carving something in his chest, a way of bloodily demonstrating that you're different, or even a sign of outright crankishness. Look at me! I'm thinking! No Internet!

If we know anything about knowledge, about innovation, and therefore about coming up with BDTs, it is that it is cumulative, an accretive process of happening upon, connecting, and assembling, like an infinite erector set, not just a few pretty I-beams strewn about on a concrete floor. But if BDTs were just about connecting things then the Internet would only be mildly interesting in changing the way I think. Libraries connect things, people connect things, and connections can even happen, yes, while sitting disconnected from the Internet under an apple tree somewhere. Here is the difference: the Internet increases the speed and frequency of these connections and collisions, while dropping the cost of both to near zero.

It is that combination — cheap connections plus cheap collisions — that has done violence to the way I think. It is like having a private particle accelerator on my desktop, a way of throwing things into violent juxtaposition, and then the resulting collisions reordering my thinking. The result is new particles — ideas! — some of which are BDTs, and many of which are nonsense. But the democratization of connections, collisions and therefore thinking is historically unprecedented. We are the first generation to have the information equivalent of the Large Hadron Collider for ideas. And if that doesn't change the way you think, nothing will.

Psychologist and Neuroscientist, University of Maryland; Author, Laughter


At the end of my college lectures, students immediately flip open their cellphones, checking for calls and texts. In the cafeteria, I observe students standing in queues, texting, neglecting fellow students two feet away. Late one afternoon, I noticed six students wandering up and down a long hallway while using cellphones, somehow avoiding collision, like ships passing in the night, lost in a fog of conversation, or like creatures from Night of the Living Dead. A student reported emailing during a "computer date," not leaving her room on a Saturday night. Paradoxically, these students were both socially engaged and socially isolated.

My first encounter with people using unseen phone headsets was startling; they walked through an air terminal apparently engaging in soliloquies or responding to hallucinated voices. More is involved than the displacement of snail mail by email, a topic of past decades; face-to-face encounters are being displaced by relations with a remote, disembodied conversant somewhere in cyberspace. These experiences forced a rethinking of my views about communication, technological and biological, ancient and modern, and prompted research projects examining the emotional impact, novelty and evolution of social media.

The gold standard for interpersonal communication is face-to-face conversation in which you can both see and hear your conversant. In several studies, I contrasted this ancestral audiovisual medium with cellphone use in which you hear but do not see your conversant, and texting in which you neither see nor hear your conversant. Conversations between deaf signers provided a medium in which individuals see but do not hear their conversant.

The telephone, cell or land line, provides a purely auditory medium that transmits two-way vocal information, including the prosodic (affective) component of speech, but filters the visual signals of gestures, tears, smiles and other facial expressions. The purely auditory medium of the telephone is, itself, socially and emotionally potent, generating smiles and laughs in remote individuals, a point confirmed by observation of 1,000 solitary people in public places. Unless using a cellphone, isolated people are essentially smileless, laughless and speechless. (We confirmed the obvious because the obvious is sometimes wrong.) Constant, emotionally rewarding vocal contact with select, distant conversants is a significant contributor to the worldwide commercial success of cellphones. Radio comedy and drama further demonstrate the power of a purely auditory medium, even when directed one-way from performer to audience. While appreciating the inventions of the telephone and broadcasting, it occurred to me that the ability to contact unseen conversants is a basic property of the auditory sense; it's as old as our species and occurs every time that we speak with someone in the dark or not in our line of sight. Phones become important when people are beyond shouting distance.

The emotional communication between individuals who can see but not hear their conversant was explored in a study of deaf individuals with collaborator Karen Emmorey. We observed vocal laughter and associated social variables in conversations between deaf signers using American Sign Language. Despite their inability to hear their conversational partner, deaf signers laughed at the same places in the stream of signed speech, at similar material, and showed the same gender patterns of laughter as hearing individuals during vocal conversations. An emotionally rich dialogue can be, therefore, conducted with an exclusively visual medium that filters auditory signals and passes only visual ones. Less nuanced visual communication is ancient and used when communicating beyond vocal range via such signals as gestures, flags, lights, mirrors, or smoke.

Text messaging, whether meaty emails or telegraphic tweets, involves conversants who can neither see nor hear each other and are not interacting in real time. My research team examined emotional communication online by analyzing the placement of 1,000 emoticons in Website text messages. Emoticons resembled conversational laughter in their placement in the text-stream — they seldom interrupted phrases. For example, you may text, "You are going where on vacation? Lol," but not "You are — lol — going where on vacation?"
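A crude version of this placement analysis can be sketched in code. The mini-corpus and the boundary test below are my own invention for illustration; they are not the study's actual data or method:

```python
import re

# Hypothetical mini-corpus, invented for illustration; the study's
# actual 1,000-message sample is not reproduced here.
messages = [
    "You are going where on vacation? lol",
    "that was the worst movie ever lol, never again",
    "see you at 8 :)",
]

# Count how often a laugh token or emoticon sits at a phrase boundary
# (end of message, or just before punctuation) versus mid-phrase.
token = re.compile(r"(lol|:\))", re.IGNORECASE)
boundary = 0
total = 0
for msg in messages:
    for m in token.finditer(msg):
        total += 1
        tail = msg[m.end():].lstrip()
        if tail == "" or tail[0] in ",.!?":
            boundary += 1

print(boundary, total)  # in this toy corpus, every token falls at a boundary
```

On a real corpus, a high boundary-to-total ratio would mirror the punctuation-like placement of conversational laughter that the study reports.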

Technophiles writing about text messaging sometimes justify emoticon use as a response to the "narrowing of bandwidth" characteristic of text messaging, ignoring that text viewed on a computer monitor or cellphone is essentially identical to that of a printed page. I suspect that emoticon use is a symptom of the limited literary prowess of texters. Know what I mean? Lol. Readers seeking the literary subtleties of irony, paradox, sarcasm, or sweet sorrow are unlikely to find them in text messages. Although not providing immediate, long-distance contact, physically transported handwritten text messages have existed since clay tablets and papyrus, and could be faster than commonly thought. Unless checked frequently, electronic text messaging may not be faster than the postal service of 18th Century London, which had up to six deliveries per day and offered the possibility of a same-day receipt and response. A century later, telegraphy provided an even faster pre-Internet text option.

The basic cellphone has morphed into a powerful, mobile, multimedia communication device and computer terminal that is a major driver of Internet society. It gives immediate, constant contact with select, distant conversants, and can tell you where you are, where you should go next, how to get there, provide diversions while waiting, and document your journey with text, snaps and video images. For some, this is enhanced reality, but it comes at the price of the here-and-now. Whatever your opinion and level of engagement, the cellphone and related Internet devices are profound social prostheses — almost brain implants — that have changed our lives and culture.

Physics, University of Illinois at Urbana-Champaign


Although I used the Internet back when it was just Arpanet, and even earlier as a teenager using a teletype to log into a state-of-the-art Honeywell mainframe from my school, I don't believe my way of thinking was changed by the Internet until around 2000. Why not?

The answer, I suspect, is the fantastic benefit that comes from massive connectivity and the resulting emergent phenomena. Back in my school days, the Internet was linear, predictable, and boring. It never talked back. When I hacked into the computer at MIT running an early symbolic manipulator program, something that could do algebra in a painfully inadequate way, I just used the Internet as a perfectly predictable tool. In my day-to-day life as a scientist, I mostly still do.

Back in 1996, I co-founded a software company that built its products and operated almost entirely through the Internet; whether this was more efficient than a regular "bricks-and-mortar" company is debatable, but the fact was that through this medium, fabulously gifted individuals who would never have dreamed of relocating for work like this were able to participate in the experiment. But this was still a linear, predictable, and essentially uninteresting use of the Internet.

No, for me, the theoretical physicist geek from central casting, the Internet is changing the way I think, because its "whole is greater than the sum of its parts". When I was a child, they told us that we would be living on the moon, that we would have anti-gravity jet packs, and video phones. They lied about everything but the video phones. With private blogs, Skype and a $40 Webcam, I can collaborate with my colleagues, write equations on my blackboard, and build networks of thought that stagger me with their effectiveness. My students and I work together so effectively through the Internet that its always-on library dominates our discussions and helps us find the sharp questions that drive our research and thinking infinitely faster than before.

My day job is to make discoveries through thought, principally by exploiting analogies through acts of intellectual arbitrage. When we find two analogous questions in what were previously perceived to be unrelated fields, one field will invariably be more developed than the other, and so there is a scientific opportunity. This is how physicists go hunting. The Internet has become a better tool than the old paper scientific literature, because it responds in real time.

To see why this is a big deal for me, consider the following "homework hack". You want to become an instant expert in something that matters to you: maybe a homework assignment, maybe researching a life-threatening disease afflicting someone close to you. You can research it on the Internet using a search engine… but as you know, you can search, but you can't really find. Google gives you unstructured information, but for a young person in a hurry, that is simply not good enough. Search engines are linear, predictable and essentially an uninteresting way to use the Internet.

Instead, try the following hack. Step 1: Make a Wiki page on the topic. Step 2: fill it with complete nonsense. Step 3: Wait a few days. Step 4: Visit the Wiki page, and harvest the corrections, contributions and enhancements that generous and anonymous souls from — well, who cares where they are from or who they are? — have made in, one presumes, fits of righteous indignation. It really works. I know, because I have seen both sides of this transaction. There you have it: the emergence of a truly global, collective entity, something that has arisen from humans + Internet. It talks back.

This "homework hack" is, in reality, little more than the usual pattern of academic discourse, but carried out, in William Gibson's memorable phrase, with "one thumb permanently on the fast-forward button". Speed matters, because life is short. The next generation of professional thinkers already have all the right instincts about the infinite library that is their external mind, accessible in real time, and capable of accelerating the already Lamarckian process of evolution in thought and knowledge on timescales that really matter. I'm starting to get it too.

Roughly three billion years ago, microbial life invented the Internet and Lamarckian evolution. For them, the information is stored in molecules, and is recorded in genes that are transmitted between consenting microbes by a variety of mechanisms that we are still uncovering. Want to know how to become a more virulent microbial pathogen? Download the gene! Want to know how to hotwire a motorcycle? Go to the Website! So much quicker than random trial-and-error evolution, and it works … right now! And your children's always-on community of friends, texting "lol"s and other quick messages that really say "I'm here, I'm your friend, let's have a party" is no different than the quorum sensing of microbes, counting their numbers so that they can do something collectively, such as invade a host or grow a fruiting body from a biofilm.

I'm starting to think like the Internet, starting to think like biology. My thinking is better, faster, cheaper and more evolvable because of the Internet. And so is yours. You just don't know it yet.

Neuroscientist; Professor, University of Washington; Author, Global Fever


"The way you think" is nicely ambiguous. It could be a worldview: the way I think about climate change has certainly been changed by the access to knowledge and ideas afforded by the Internet. There is no way that I could have gotten up to speed in climate science without the Web. It has literally changed my view of the world and its future prospects.

But being a physiologist, I first assumed that "The way you think" was asking about process (changing one sort of stuff into another) and how my thought process has been changed by the Internet. And as it happens, I can sketch out how that might work.

A thinking process can pop up new ideas or make surprising new connections between old thoughts. So in order to explore how the Internet changes the thinking process, consider for a moment how thought normally works.

Assembling a new combination ("associations") may be relatively easy. The problem is whether the parts hang together, whether they cohere. We get a nightly reminder of an incoherent thought process from our dreams, which are full of people, places, and occasions that do not hang together very well. Awake, an incoherent collection is what we often start with, with the mind's back office shaping it up into the coherent version that we finally become aware of — and occasionally speak aloud. Without such intellectual constructs, there is, William James said a century ago, only "a bloomin' buzzin' confusion."

To keep a half-dozen concepts from blending together like a summer drink, you need some mental structuring. In saying "I think I saw him leave to go home" with its four verbs, you are nesting three sentences inside a fourth. We also structure plans (not just anticipation but with contingencies), play games (not just a romp but with arbitrary rules), create structured music (not just rhythm but with harmony and recursion), and employ logic (in long chains).

And atop this structured capability, we have a fascination with discovering how things hang together, as seen when we seek hidden patterns within seeming chaos — say, doing crossword and jigsaw puzzles, doing history, doing science, and trying to appreciate a joke. Our long train of connected thoughts is why our consciousness is so different from what came before. Structuring with quality control made it possible for us to think about the past, and to speculate about the future, in far more depth than if we were ruled by instinct and memory alone.

I'll use creating a novel sentence for my examples but it's much the same for new thoughts and action plans. Quality is a matter of the degree of coherence, both within a sentence and within an enlarged context. Quality control without a supervising intelligence occurs in nature.

On a millennial time scale, we see a new species evolving to better fit an ecological niche. It's a copying competition biased by the environment, making some variants reproduce better than others.

On the time scale of the days to weeks after our autumn flu shot, we see the immune response shaping up a better and better antibody to fit the invading molecule. Again, this is a Darwinian copying competition improving quality.

My favorite creative process, operating in milliseconds to minutes, can create a new thought that is spot on, first time out.

All are examples of the universal Darwinian process. Though often summarized by Darwin's phrase, "natural selection," it is really a process with six essential ingredients. So far as I can tell, you need:

1. a characteristic pattern (A, the stand-in for the long form — something like a bar code) that can
2. be copied, with
3. occasional variations (A') or compounding, where
4. populations of A and A' clones compete for a limited territory, their relative success biased by
5. a multifaceted environment of, say, memories and instincts under which some variants do better than others (Darwin's natural selection), and where
6. the next round of variants is primarily based on the more successful of the current generation (Darwin's inheritance principle).

Such recursion is how you bootstrap quality, why we can start with subconscious thoughts as jumbled as our night time dreams and still end up with a sentence of quality or a chain of logic — or anticipate the punch line of a joke.

You need a quality bootstrapping mechanism in order to figure out what to do with leftovers in the refrigerator; with successive attempts running through your head as you stand there with the door open, you can often find a "quality" scheme (that is, one that doesn't require another trip to the grocery store).
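For readers who think in code, the six ingredients can be compressed into a toy simulation. The string-matching "environment" below is an illustrative stand-in of my own, not Calvin's model of memory and instinct:

```python
import random

random.seed(0)  # reproducible toy run

def darwinian_round(population, mutate, fitness, size):
    """One generation of the six-ingredient process: patterns are
    copied (2) with occasional variation (3), the clones compete for
    limited territory (4) under an environmental bias (5), and the
    next round is seeded from the more successful variants (6)."""
    variants = [mutate(p) if random.random() < 0.3 else p for p in population]
    # Competition for a limited number of slots, biased by fitness.
    return sorted(population + variants, key=fitness, reverse=True)[:size]

# Toy environment: patterns (1) are strings scored against a target.
target = "quality"

def fitness(s):
    return sum(a == b for a, b in zip(s, target))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + s[i + 1:]

pop = ["xxxxxxx"] * 8
for _ in range(200):
    pop = darwinian_round(pop, mutate, fitness, size=8)
```

Because the better of each generation seeds the next, quality ratchets upward with no supervising intelligence anywhere in the loop, which is Calvin's point about bootstrapping.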

So how has the Internet's connectedness changed the Darwinian creative process? For the data-gathering stage, it affords us more variants that others have already checked for quality. Search engine speed provides them faster, so that a number can be gathered within the time constraints of working memory — say, ten minutes. When we think we have a good-enough assembly, we can do a quick search to see what others have said about near-fits to our candidate. Typically, we will be forced to conclude that our candidate isn't quite right, and further Internet searches will guide us in creating new variant formulations.

We can do all of this without the Internet, but it takes time — often much longer than the time span of working memory. To then think about the modified situation requires refreshing working memory with the old stuff. The sheer speed of checking out possibilities can minimize the need for that. Even if one is working from a library carrel in the stacks, getting a PDF of an article by Wi-Fi is a lot faster than chasing around the stacks.

I recall how envious I was when the Berkeley astronomer Rich Muller described how they worked out the comet problem for explaining the timing of mass extinctions. He said that it wasn't a good week if they couldn't kill off one or two possibilities for how comets from the Oort Cloud might achieve orbits sufficient to strike the Earth. A candidate would either turn out to be physically impossible or to make predictions that conflicted with observations. Nothing in brain research can possibly work that fast. It takes us decades to discover better explanations and move on. They could do it in a week.

And that's how I have been feeling about the Internet's expansion of quick access to knowledge and ideas. You can stand on the shoulders of a lot more giants at the same time.

Ophthalmologist and Neurobiologist, University of California, Davis


The Internet is the greatest detriment to serious thinking since the invention of television. It can devour time in all sorts of frivolous ways, from chat rooms to video games. And what better way to interrupt one's thought processes than by an intermittent stream of incoming email messages? Moreover, the Internet has made interpersonal communication much more circumspect than in the pre-Internet era. What you write today may come back to haunt you tomorrow. The recent brouhaha following the revelations of the climate scientists' emails is an excellent case in point.

So while the Internet provides a means for rapidly communicating with colleagues globally, the sophisticated user will rarely reveal true thoughts and feelings in such messages. Serious thinking requires honest and open communication, and that is simply untenable on the Internet for those who value their professional reputation.

The one area where the Internet could be considered an aid to thinking is the rapid procurement of new information. But even here the benefit is more illusory than real. Yes, the simple act of typing a few words into a search engine will virtually instantaneously produce links related to the topic at hand. But vetting the accuracy of information obtained in this manner is not a simple matter. What one often gets is no more than abstract summaries of lengthy articles. As a consequence, I suspect that the number of downloads of any given scientific paper has little relevance to the number of times that the entire article has been read from beginning to end. My advice is that if you want to do some serious thinking, then you had better disconnect the Internet, phone and television set and try spending 24 hours in absolute solitude, as was suggested in my 2006 Edge Annual Question response.

Professor of Evolutionary Biology, Reading University, England and The Santa Fe Institute


The Internet isn't changing the way I or anybody else thinks. We know this because we can still visit some people on Earth who don't have the Internet, and they think the same way that we do. My general-purpose thinking circuits are hard-wired into my brain from genetic instructions honed over millions of years of natural selection. True, the brain is plastic; it responds to the way it is brought up by its user, or to the language it has been taught to speak, but its fundamental structure is not changed this way, except perhaps in extremis, after, say, eight hours per day of computer games.

But the Internet does take advantage of our appetites, and this changes our thoughts, if not the way we think. Our brains have appetites for thinking, learning, feeling, hearing and seeing. They like to be used. It is why we do crossword puzzles and brain-teasers, read books and visit art galleries, watch films, and play or listen to music. Our brain appetites act as spurs to action, in much the same way that our emotions do; or much the same way that our other appetites — for food and sex — do. Those of us throughout history who have acted on our world — even if just to wonder why fires start, why the wind blows out of the southwest, or what would happen if we combined heat with clay — will have been more successful than those of us who sat around waiting for things to happen.

So, the Internet is brain candy to me and, I suspect, to most of us — it slakes our appetite to keep our brain occupied. That moment when a search engine pops up its 1,278,000 search results to my query is a moment of pure injection of glucose into my brain. It loves it. It is why so many of us keep going back for more. Some think that this is why the Internet is going to make us lazy, less-literate, and less-numerate, that we will forget what lovely things books are, and so on. But even as brain candy I think the Internet's influence on these sorts of capabilities and pleasures is probably not as serious as the curmudgeons and troglodytes would have you believe. They will be the same people who grumbled about the telegraph, trains, the motorcar, the wireless, and television.

There are far more interesting ways that the Internet changes our thoughts, and especially the conclusions we draw, and it does this also by acting on our appetites. I speak of contagion, false beliefs, neuroses — especially medical and psychological — conspiracy theories, and narcissism. The technical point is this: the Internet tricks us into doing bad mathematics; it gets us to do a mathematical integration inside our brains that we don't know how to do. What? In mathematics, integration is a way of summing an infinite number of things. It is used to calculate quantities like volumes, areas, rates, and averages. Our brains evolved to judge risks, to assess likelihood or probabilities, to defend our minds against undue worry, and to infer what others are thinking, by sampling and summing or averaging across small groups of people, most probably the people in my tribe. They do this automatically, and normally without us even knowing about it.

In the past my assessment of the risk of being blown up by a terrorist, or of getting swine flu, or of my child being snatched by a pedophile on the way to school, was calculated from the steady input of information I would have received mainly from my small local group, because these were the people I spoke to or heard from and these were the people whose actions affected me.

What the Internet does, and what mass communication does more generally, is to sample those inputs from the 6.8 billion people on Earth. But my brain still assumes that the inputs arose from my local community, because that is the case its assessment circuits were built for. That is what I mean by bad mathematics: my brain assumes a small denominator (the bottom number in a fraction), with the result that its answer to the question of how likely something is to happen comes out too big.

So, when I hear every day of children being snatched, my brain gives me the wrong answer to the question of risk: it has divided a big number (the children snatched all over the world) by a small number (the tribe). Call this the 'Madeleine McCann' effect. We all heard months of coverage of this sad, still unresolved case of kidnapping, and although our anxiety is trivial compared to what the McCanns suffered, it is undue worry nonetheless.
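The arithmetic of the wrong denominator can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only to show the shape of the error: the same count of reported events, divided by the population that actually produced them versus by a tribe-sized group, differs by many orders of magnitude.

```python
# Toy numbers (hypothetical, for illustration only).
reports_heard = 1000            # abduction reports reaching us in a year
world_children = 1_000_000_000  # rough pool the reports are drawn from
tribe = 150                     # ancestral-sized group our risk circuits assume

actual_risk = reports_heard / world_children  # correct denominator
felt_risk = reports_heard / tribe             # the brain's wrong denominator

# The inflation factor is just the ratio of the two denominators.
overestimate = felt_risk / actual_risk
print(f"felt risk inflated by a factor of about {overestimate:,.0f}")
```

With these made-up figures the felt risk is inflated by a factor of several million; the numerator never changes, only the denominator the brain silently supplies.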

The effects of the bad mathematics don't stop with judging risks. Doing the integration wrong means that contagion can leap across the Internet. Contagion is a form of risk assessment with an acutely worrying conclusion, and once it starts on the Internet, everyone's bad mathematics makes it explode. So do conspiracy theories: if it seems everyone is talking about something, it must be true! But this is just the wrong denominator again. Neuroses and false beliefs are buttressed: we all worry about our health, and in the past we would look around us and find that no one else is worrying or ill. But consult the Internet and 1,278,000 people (at least!) are worrying, and they've even developed Websites to talk about their worry. The 2009 swine flu pandemic has been a damp squib, but you wouldn't have known that from the frenzy.

The bad mathematics can also give us a sense that we have something useful to say. We'd all like to be taken seriously, and evolution has probably equipped us to think we are more effective than we really are; it seeds us with just that little bit of narcissism. A false belief, perhaps, but better for evolution to err on the side of getting us to believe in ourselves than not to. So we go onto the Internet and make Websites, create Facebook pages, contribute to YouTube and write Web logs and, surprise, it appears that everyone is looking at or reading them, because look at how many people are leaving comments! Another case of the wrong denominator.

The maddening side of all this is that neither I nor most others can convince ourselves to ignore these worries, neuroses, narcissistic beliefs and poor assessments of risk — to ignore our wrong thoughts — precisely because the Internet has not changed the way we think.

Technology Forecaster; Consulting Associate Professor, Stanford University


Back in the mid-1700s, Samuel Johnson observed that there were two kinds of knowledge: that which you know, and that which you know where to get. It was a moment when cheap and abundant print coupled with reliable postal networks triggered an information explosion that dramatically changed the way people thought. Johnson's insight was crucial because until then scholars relied heavily on the first kind of knowledge, the ability to know and recall scarce information. Abundant print usurped this task and in the process created the need for a new skill — Johnson's knowing "where to get it."

Print offloaded knowing from memory to paper and in the process triggered a revolution focused on making knowledge easier to get. Johnson's great Dictionary of the English Language — the first modern dictionary — was an exemplar of this effort, followed in the next century by innovations from Roget's thesaurus to catalogs, index cards, and file cabinets. As the store of paper-based knowledge grew, the new skill of research displaced the old skill of recall. A scholar could no longer get by on memory alone — one had to know where and how to get knowledge.

Now the Internet is changing how we think again. Just as print took over the once-human task of knowing, cyberspace is assuming the task of knowing where to get what we seek. A single click now accomplishes what once required days in a research library. A well-phrased search query is vastly more effective than resort to a card catalogue, and one no longer needs to master a thesaurus just to find a synonym. Knowing where to get is now the domain of machines, not humans.

Make something easy to do and skills once reserved to elites will become tools of the masses. Electronic calculators were not mere slide rule substitutes; they made computation convenient and accessible to everyone. The Internet is changing our thinking by giving the tremendous power of search to the most casual of users. We have democratized knowledge-finding in the same way 18th century publishing democratized knowledge access.

Computers have become intellectual bulldozers for the curious, but the result falls short of the utopian knowledge future hoped for at the dawn of the Internet. Back in Johnson's time the public reveled in their newfound access, buying up books, consuming newspapers and sending endless streams of letters to friends. It must have been exhilarating, but much of it was to utterly no purpose. Now we revel in search, but most of what we search for isn't worth seeking, as the top search lists on Google, Yahoo and Bing make clear. Couch potatoes who once channel-surfed their way through TV's vast wasteland have morphed into mouse potatoes Google-surfing the vaster wasteland of Cyberspace. They are wasting their time more interactively, but they are still wasting their time.

The Internet has changed our thinking, but if it is to be a change for the better, we must add a third kind of knowledge to Johnson's list — the knowledge of what matters. Two centuries ago the explosion of print demanded a new discipline of knowing where to find knowledge. When looking up was hard, one's searches inevitably tended towards seeking only what really mattered. Now that finding is easy, chasing down info-fluff is as seductive for us as wallowing in purposeless books was for an 18th-century Londoner. Without a discipline of knowing what matters, we will merely amuse ourselves to death.

Knowing what matters is more than mere relevance. It is the skill of asking questions that have purpose, that lead to larger understandings. Formalizing this skill seems as strange to us today as a dictionary must have seemed in 1780, but I'll bet it emerges just as surely as print abundance led to whole new disciplines devoted to organizing information for easy access. The need to determine what matters will inspire new modes of cyber-discrimination and perhaps even a formal science of determining what matters. Social media hold great promise as discrimination tools, and AI hints at the possibility of cyber-Cicerones who would gently keep us on track as we traverse the vastness of cyberspace in our enquiries. Perhaps the 21st century equivalent of the Great Dictionary will be assembled by a wise machine that knows what matters most.

Science Writer; Founding chairman of the International Centre for Life; Author, Francis Crick: Discoverer of the Genetic Code.


The Internet is the ultimate mating ground for ideas, the supreme lekking arena for memes. Cultural and intellectual evolution depends on sex just as much as biological evolution does; without it, transmission remains merely vertical. Sex allows creatures to draw upon mutations that happen anywhere in their species. The Internet allows people to draw upon ideas that occur to anybody in the world. Radio and printing did this too, and so did writing, and before that language, but the Internet has made it fast and furious.

Exchange and specialization are what make cultural evolution happen, and the Internet's capacity for encouraging exchange encourages specialization too. Somebody somewhere knows the answer to any question I care to ask, and the Internet makes it much easier to find him or her. Often it is an amateur, outside journalism or academia, who just happens to have the relevant piece of knowledge to hand. An example: suspicious of the claim that warm seas (as opposed to rapidly warming seas) would kill off coral reefs, I surfed the Net till I found the answer to the following question: is there any part of the oceans that is too hot for corals to grow? One answer lay in a blog comment from a diver just back from the Iranian side of the Persian Gulf, where he had seen diverse and flourishing coral reefs in 35°C water (ten degrees warmer than much of the Great Barrier Reef).

This has changed the way I think about human intelligence. I've never had much time for the academic obsession with intelligence. Highly intelligent people are sometimes remarkably stupid; stupid people sometimes make better leaders than clever ones. And so on. The reason, I realize, is that human intelligence is a collective phenomenon. If they exchange and specialize, a group of 50 dull-witted people can have a far higher collective intelligence than 50 brilliant people who don't. So that's why it is utterly irrelevant if one race turns out to have higher IQ than another, or one company hires people with higher IQs than another. I would rather be marooned on a desert island with a diverse group of mediocre people who know how to communicate, from a singer to a plumber, say, than with a bunch of geniuses.

The Internet is the latest and best expression of the collective nature of human intelligence.

Professor of Mathematical Physics, Tulane University; Coauthor, The Anthropic Cosmological Principle; Author, The Physics of Immortality


The Internet first appeared long after I had received my Ph.D. in physics, and I was slow to use it. I had been trained in physical library search techniques: look up the subject in Science Abstracts (a journal itself now made defunct by the Internet), then go to the archived full article in the physical journal shelved nearby. Now I simply search the topics in the Science Citation Index (SCI), and then go to the journal article available online. I no longer have to go to the library; I can access the SCI and the online journals via the Internet.

These Internet versions of journals and abstracts have one disadvantage at present: my university can afford only a limited window for the search. I can use the SCI only back ten years, and most e-journals have not yet converted their older volumes to online format, or, if they have, my university often cannot afford to pay for access to these older volumes.

So the Internet causes scientific knowledge to become obsolete faster than was the case with the older print media. A scientist trained in the print media tradition is aware that there is knowledge stored in the print journals, but I wonder if the new generation of scientists, who grow up with the Internet, are aware of this. Also, print journals were forever. They may have merely gathered dust for decades, but they could still be read by any later generation. I can no longer read my own articles stored on the floppy discs of the 1980s. Computer technology has changed too much. Will information stored on the Internet become unreadable to later generations because of data storage changes, and the knowledge lost?

At the moment the data is accessible. More importantly, the raw experimental data is becoming available to theorists like myself via the Internet. It is well known from the history of science that experimentalists quite often do not appreciate the full significance of their own observations. "A new phenomenon is first seen by someone who did not discover it," is one way of expressing this fact. Now that the Internet allows the experimenter to post her data, we theorists can individually analyze it.

Let me give an example from my own work. Standard quantum mechanics asserts that an interference pattern of electrons passing through a double slit must have a certain distribution as the number of electrons approaches infinity. However, this same standard quantum mechanics does not give an exact description of the rate at which the final distribution will be approached. Many-Worlds quantum mechanics, in contrast, gives us a precise formula for this rate of approach, since according to Many-Worlds quantum mechanics, physical reality is not probabilistic at all, but more deterministic than the universe of classical mechanics. (According to Many-Worlds quantum mechanics, the wave function measures the density of Worlds in the Multiverse rather than a probability.)

Experimenters — indeed, undergraduate students in physics — have observed the approach to the final distribution, but they have never tried to compare their observations with any rate of approach formula, since according to standard quantum mechanics there is no rate of approach formula. Using the Internet, I was able to find raw data on electron interference that I used to test the Many-Worlds formula. Most theorists can tell a similar story.
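What "rate of approach" means can be illustrated with a small simulation. This sketch is a textbook statistical illustration, not the Many-Worlds formula discussed above: it draws simulated electron arrivals from an idealised cos² fringe pattern (a stand-in for the true two-slit intensity) and measures how far the empirical distribution still sits from the limiting one as the number of detected electrons grows.

```python
import bisect
import math
import random

def intensity(x):
    # Idealised two-slit fringe pattern on the screen interval [0, 1]
    return math.cos(math.pi * (x - 0.5)) ** 2

def limit_cdf(x):
    # Closed-form CDF of the normalised cos^2 density:
    # F(x) = x + sin(2*pi*(x - 0.5)) / (2*pi), with F(0) = 0 and F(1) = 1
    return x + math.sin(2 * math.pi * (x - 0.5)) / (2 * math.pi)

def arrivals(n, rng):
    # Rejection-sample n detector hits; peak intensity is 1 at x = 0.5
    hits = []
    while len(hits) < n:
        x = rng.random()
        if rng.random() < intensity(x):
            hits.append(x)
    return sorted(hits)

def sup_gap(hits):
    # Kolmogorov-style sup distance between the empirical distribution
    # of the hits and the limiting distribution, probed on a fine grid
    n = len(hits)
    grid = [i / 200 for i in range(201)]
    return max(abs(bisect.bisect_right(hits, g) / n - limit_cdf(g))
               for g in grid)

rng = random.Random(0)          # fixed seed for a reproducible run
gap_small = sup_gap(arrivals(200, rng))
gap_large = sup_gap(arrivals(5000, rng))
# Ordinary sampling statistics says this gap shrinks roughly like
# 1/sqrt(N); any proposed rate formula would be tested against data
# in just this way.
```

The point of the sketch is only that a rate of approach is a measurable quantity: record the arrivals, compare the running empirical distribution with the predicted limit, and see how the discrepancy falls as counts accumulate.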

But I sometimes wonder if later generations of theorists will be able to tell a similar story. Discoveries can be made by analyzing raw data posted online today, but will this always be true? The great physicist Richard Feynman often claimed, "there will be no more great physicists." Feynman believed that great physicists were those scientists who looked at reality from a different point of view than other scientists. Feynman argued in Surely You're Joking, Mr. Feynman! that all of his own achievements were due not to his higher-than-other-physicists I.Q., but to his having a "different bag of tricks." Feynman thought that future generations of physicists would all have the same "bag of tricks," and consequently be unable to move beyond the consensus view. Everyone would think the same way.

The Internet is currently the great leveler: it allows everyone to have access to exactly the same information. Will this ultimately destroy diversity of thought? Or will the tendency of people to form isolated groups on the Internet preserve that all-important diversity of thought, so that although scientists all have equal access in principle, there are still those who look at the raw data in a different way from the consensus?

Associate Professor of Psychology and Neuroscience; Stanford University


Like it or not, I have to admit that the Internet has changed both what and how I think.

Consider the obvious yet still remarkable fact that I spend at least 50% of my waking hours on the Internet, compared to 0% of my time 25 years ago. In terms of what I think, almost all of my information (e.g., news, background checks, product pricing and reviews, reference material, general "reality" testing, etc.) now comes from the web. Although I work at a research institution, my students often look genuinely pained if I ask them to physically go to the library to check a reference, or (god forbid!) dig up something that is not online. In fact, I felt the same pain just recently when I had to traipse to the medical library (for the first time in three years) to locate some untranslated turn-of-the-century psychology by Wilhelm Wundt. Given the ubiquity and availability of Web content, how could one resist its influence? Although this content probably gets watered down as a function of distance from the source, consensual validation might offset the degradation. Plus, the Internet makes it easier to poll the opinions of trusted experts. So overall, the convenience and breadth of information on the Internet probably helps more than hurts me.

In terms of how I think, I fear that the Internet is less helpful. Although I can find information faster, that information is not always the most relevant, and is often tangential. More often than I'd like to admit, I sit down to do something and then get up bleary-eyed hours later, only to realize my task remains undone (or I can't even remember the starting point). The sensation is not unlike walking into a room, stopping, and asking "now, what was I here for?" — except that you've just wandered through a mansion and can't even remember what the entrance looked like.

This frightening "face-sucking" potential of the Web reminds me of conflicts between present and future selves first noted by ancient Greeks and Buddhists, and poignantly elaborated by philosopher Derek Parfit. Counterintuitively, Parfit considers present and future selves as different people. By implication, with respect to the present self, the future self deserves no more special treatment than anyone else.

Thus, if the present self doesn't feel a connection with the future self, then why forgo present gratification for someone else's future kicks? Even assuming that the present self does feel connected to the future self, the only way to sacrifice something good now (e.g., reading celebrity gossip) for something better later (e.g., finishing that term paper) is to slow down enough to appreciate that connection, consider the conflict between present and future rewards, weigh the options, and decide in favor of the best overall course of action. The very speed of the Internet and the convenience of Web content accelerate information search to a rate that crowds out reflection, which may bias me towards gratifying the salient but fleeting desires of my present self. Small biases, repeated over time, can have large consequences. For instance, those who report feeling less connected to their future self also have less in their bank accounts.

I suspect I am not the sole victim of Internet-induced "present self bias." Indeed, Web-based future self prostheses have begun to emerge, including software that tracks time off task and intervenes (ranging from reminders to blocking access to shutting programs down). Watching my own and others' present versus future self struggles, I worry that the Internet may impose a "survival of the focused," in which individuals gifted with some natural capacity to stay on target or who are hopped up on enough stimulants forge ahead, while the rest of us flail helplessly in some web-based attentional vortex. All of this makes me wonder whether I can trust my selves on the Internet. Or do I need to take more draconian measures — for instance, leave my computer at home, chain myself to a coffeehouse table, and draft longhand? At least in the case of this confessional, the future self's forceful but unsubtle tactics prevailed.
