Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute


I have a love-hate relationship with the Internet. With procrastination just a click away, and a seductive Siren song in the form of new-mail pings, I find it challenging to stay focused on a single subject long enough to have real impact. Maintaining the Zen-like focus that is so crucial for doing science was easier back when the newspaper and the mail came only once per day. Indeed, as a part of an abstinence-based rehab program, I now try to disconnect completely from the Internet while thinking, closing my mail program and Web browser for hours, much to the chagrin of colleagues and friends who expect instant response. To get fresh and original ideas, I typically need to go even further, and completely turn off my computer.

On the other hand, the Internet gives me more time for such Internet-free thinking by eliminating second-millennium-style visits to libraries and stores. The Internet also lets me focus my thinking on the research frontier rather than on reinventing the wheel. Had the Internet existed in 1922 when Alexander Friedmann discovered the expanding universe model, Georges Lemaître wouldn't have had to rediscover it five years later.

The Internet gives me not only traditionally available information faster (and sometimes faster than I can retrieve it from memory), but also previously unavailable information. With some notable exceptions, I find that "the truth, nothing but the truth, but maybe not the whole truth" provides a useful rule of thumb for news reporting, and I usually find it both easy and amusing to piece together what actually happened by pretending that I just arrived from Mars, and comparing a spectrum of Web sites from Fox News to Al Jazeera.

The Internet also affects my thinking by leaving me thinking about the Internet. What will it do to us? On the flip side, as the master of distraction, it seems to be further reducing our collective attention span from the depths to which television had brought it. Important issues fade from focus fast, and while many of humanity's challenges get more complicated, society's ability to pay attention to complex arguments dwindles. Sound bites and attack ads work well when the world has attention deficit disorder.

On the other hand, the ubiquity of information is clearly having positive impact in areas ranging from science and education to economic development. I think the essence of science is to think for oneself and question authority. I therefore delight in the fact that the Internet makes it harder to restrict information and block the truth. Once the cat is out of the bag and in the cloud, that's it. Today it's hard even for Iran and China to prevent information dissemination. Soviet-style restrictions on copying machines sound quaint today, and the only currently reliable censorship is not to allow the Internet at all, like in North Korea.

Love it or hate it, free information will transform the world. Oft-discussed examples range from third-world education to terrorist technology. As another example, suppose someone discovers and posts online a safe low-tech chemical process for mass-producing all-synthetic cocaine, THC or heroin from cheap and readily available chemicals, much like methamphetamine manufacturing today except safer and cheaper. This would trigger domestic drug production in industrialized countries that no government could stop, in turn slashing prices and potentially devastating both the revenue and the power of Colombian and Mexican drug cartels as well as the Taliban.

Psychologist & Computer Scientist; Engines for Education Inc.; Author, Making Minds Less Well Educated Than Our Own


The Internet has not changed the way I think nor has it changed the way anyone else thinks. Thinking has always been the same. To simplify: the thinking process starts with an expectation or hypothesis; thinking requires one to find (or make up) evidence that explains where that expectation went wrong; and thinking involves deciding upon explanations of one's initial misunderstanding. Thinking is about attempting to understand how an aspect of the world works, and the process hasn't changed since caveman times. The important questions in this process are these: What constitutes evidence? How do you find it? How do you know if what you found is true? We construct explanations based on the evidence we have found.

This process was in place long before the Internet existed. Thinking hasn't changed. What has changed is how we find evidence, how we interpret the evidence we have found, and how we find available explanations from which to choose.

I went into AI to deal with exactly this issue. I was irritated that people would argue about what was true. They would get into fights about Babe Ruth's lifetime batting average. That doesn't happen much any more. Someone can quickly find it. Argument over.

Finding evidence and interpreting evidence have not, unfortunately, changed that much either. At first glance, we might think that the Internet has radically changed the way we look for and accept evidence. And I am sure this is true for the intellectuals who write Edge response essays. I am able to find evidence more quickly, to find explanations that others have offered more easily. I can think about a complex issue with more information and with the help of others who have thought about that issue before. Of course, I could always do this in a university environment, but now I can do it while sitting at home, and I can do it more quickly. This is nice, but less important than people realize.

Throughout human history, evidence to help thinking has been gathered by consulting others, typically the village elder who might very well have gotten his knowledge by talking to a puff of smoke. Today, people make decisions based on evidence that they get from the Internet all right, but that evidence often is no better than the evidence the village elder may have supplied. In fact, that evidence may well have been posted by the modern day version of the village elder.

The intelligentsia may well be getting smarter because they have easy access to a wider range of good thinking, but the rest of the world may easily be getting dumber because they have easy access to nonsense.

I don't believe the Internet has changed the way I or anyone else thinks. It has changed the arbiters of truth, however. Now everyone is an expert.

Archaeologist, University of Bradford; Author, The Buried Soul


The first bit is wholly unsurprising: the Internet was designed for people like me, by people like me, most of them English speakers. Fundamentally reflecting western, rationalist, objective, data-organizing drives, the Internet simply enhances my ability to think in familiar ways, letting me work longer, more often, with better focus, free from the social tyranny of the library and the uncertainty of postmen. The Internet has changed what I think, however — most notably about where the human race is now headed. From a prehistorian's perspective, I judge that we have been returned to a point last occupied at the time of our evolutionary origin. This is what I mean:

When the first stone tool was chipped, over two million years ago, it signalled a new way of being. The ancestral community learned to make flint axes, and those first artificial objects, in turn, critically framed a shared, reflective consciousness that began to express itself in language. An axe could be both made and said, used and asked for. The invention of technology brought the earliest unitary template for human thought into being. It can even be argued that it essentially created us as characteristically human.

What happened next is well known: technology accelerated adaptation. The original ancestral human culture spread out across continents and morphed into cultures, plural — myriad ways of being. While isolated groups drifted into ever greater idiosyncrasy, those who found themselves in competition for the same resources consciously strove to differentiate themselves from their neighbours. This ever deepening cultural specificity facilitated the dehumanization of enemies that successful warfare, driven by jealously guarded technological innovation, required.

Then reunification began, starting five thousand years ago, with the development of writing — a technology that allowed the transcription of difference. War was not over, but alien thoughts did begin to be translated, at first very approximately, across the boundaries of local incomprehension. The mature Internet marks the completion of this process, and thus the reemergence of a fully contiguous human cultural landscape. We now have the same capacity for being united under a common language and shared technology that our earliest human ancestors had.

So, in a crucial sense, we are back at the beginning, returned into the presence of a shared template for human thought. From now on, there are vanishingly few excuses for remaining ignorant of objective scientific facts, and ever thinner grounds for cultivating hatred through willful failure to recognize our shared humanity. Respecting difference has its limits, however: the fact of our knowing that there is a humanity to share means we must increasingly work towards agreeing common moral standards. The Internet means that there is nowhere to hide and no way to shirk responsibility when the whole tribe makes informed decisions (as it now must) about its shared future.

Physicist, Director, MIT's Center for Bits and Atoms; Author, FAB


The Internet is many things: good and bad (and worse) business models, techno-libertarian governance and state censors, information and misinformation, empowerment and addiction. But at heart it is the machine with the most parts ever created. What I've learned from the Internet comes not from Web 2.0 or anything-else.0 but from the original insights of the pioneers that made its spectacular growth possible.

One is interoperability. While this sounds like technological motherhood and apple pie, it means that the Internet protocols are not the best choice for any particular purpose. They are, however, just good enough for most of them, and by sacrificing optimality the result has been a world of unplanned synergies.

A second is scalability. The Internet protocols don't contain performance numbers that impose assumptions about how they will be used, which has allowed their performance to be scaled over six orders of magnitude, far beyond anything initially anticipated. The only real exception to this was the address size, which is the one thing that has needed to be fixed.
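The scale of that one exception is easy to quantify. As a minimal sketch (plain Python arithmetic, not any real protocol code), here is the gap between IPv4's fixed 32-bit address field and the 128-bit field that IPv6 introduced to fix it:

```python
# IPv4 fixed its address field at 32 bits; IPv6 widens it to 128 bits.
# The comparison shows why address size was the one scaling assumption
# baked into the protocols that eventually had to be revised.
IPV4_BITS = 32
IPV6_BITS = 128

ipv4_space = 2 ** IPV4_BITS   # 4,294,967,296 addresses (~4.3 billion)
ipv6_space = 2 ** IPV6_BITS   # roughly 3.4e38 addresses

print(f"IPv4: {ipv4_space:,} addresses")
print(f"IPv6: {ipv6_space:.2e} addresses")
print(f"IPv6 is 2**{IPV6_BITS - IPV4_BITS} times larger")
```

Four billion addresses comfortably exceeded anything anticipated in the 1970s, yet it was still a hard-coded performance number of exactly the kind the other protocol parameters avoided.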

Third is the end-to-end principle: the functions of the Internet are defined by what is connected to it, not by how it is constructed. New applications can be created without requiring anyone's approval, and can be implemented where information is created and consumed rather than centrally controlled.

And a fourth is open standards. The Internet's standards were a way to create playing fields, not score goals; from VHS vs Betamax to HD-DVD vs Blu-Ray, the only thing that's changed in standards wars has been who's sitting on which side of the table.

These simple-sounding ideas matter more than ever, because the Internet is now needed more than ever, but in places it's never been. Three-quarters of electricity is used by building infrastructure, which wastes about a third of that, yet many of the attempts to make it intelligent hark back to the world of central-office switches and dumb telephones. Some of the poorest people on the planet are "served" by some of the greediest telcos, while it's now possible to build communications infrastructure from the bottom up rather than the top down. In these and many more areas, four decades of Internet development are colliding with practices brought to us by (presumably) well-meaning but ill-informed engineers who don't study history as part of an engineering education, and thereby doom everyone else to repeat it. I'd argue that we already know the most important lessons of the Internet; what matters now is not finding them, but making sure we don't need to keep re-finding them.

Daniel L. Everett
Chair of Languages, Literatures, & Cultures, Professor of Linguistics and Anthropology, Illinois State University; Author, Don't Sleep, There Are Snakes


I cannot use the Internet without thinking about the primitive research conditions I labored under during the late 1970s and early 1980s in the Brazilian Amazon, when I spent months at a time in complete isolation with the Pirahã people. My only connection with the wider world was a large and clunky Philips short-wave radio I bought in São Paulo. In the darkness of many Amazonian nights, I turned the volume low and listened, when all the Pirahãs and my family were asleep, to music shows like 'Rock Salad', to individual artists such as Joan Baez and Bob Dylan, and to news events like the Soviet invasion of Afghanistan and the election of Ronald Reagan. As much as I enjoyed my radio, though, I wanted to do more than just listen passively. I wanted to talk! I would lie awake after discovering some difficult grammatical or cultural fact and feel lost at times. I could barely wait to ask people questions about the data I was collecting in the village and my ideas about them. I couldn't, though. Too isolated. So I put thoughts of collaboration and consultation out of my head. Now this wasn't a completely horrible outcome. Isolation taught me to think independently. But there were times when I would have liked to have had a helping hand.

All that changed in 1999. I purchased a satellite phone with Internet capability. I could email from the Amazon! (And the US taxpayer would even foot the bill — I added the costs of connection time to my National Science Foundation budgets.)

Now I could read an article or a book in the Pirahã village and immediately contact the author. I learned that if you begin your email with, "Hi, I am writing to you from the banks of the Maici river in the Amazon jungle" you almost always get a response. I would send out half-baked ideas to colleagues and people I didn't even know around the world and get responses back quickly — sometimes while I was floating down the Maici river in my boat, drinking a beer, and relaxing from the demands of being the main entertainment for a village of practical-joking Pirahãs. After reading these responses I would discard some of my ideas, further develop others, and, most importantly, get brand new ones. I could not have telephoned all of my interlocutors. Most were too busy to take random phone calls from conversation-hungry Amazonianists. And I didn't know most of them all that well. Sending a regular letter was not possible from the Pirahã village. My thinking about language and culture was altered profoundly by access to fresh intellectual energy.

In the city from where I now do most of my work, the Internet has become an extension of my memory — it combats the occasional "senior moment", helping me to find names, facts, and places instantly (or so it seems). It gives me a second, bigger brain. The Internet has allowed me to learn from people I have never met. It placed me in a university that profoundly affected my career, my research, and my worldview.

I rarely connect to the Internet from the Amazon these days. I am not there as long or as frequently as in the past and so most of the time, I simply want to enjoy being with the people I am visiting. I have learned that the Internet is just a tool. It doesn't fit every job. I avoid using the Internet for tasks that require a more personal connection, such as administering my university department or talking to my children. But if it is just a tool, it is a wondrous tool. It changed my thinking (and my approach to thinking) like the first chainsaw must have affected loggers. The Internet gave me access to as much information (for good or ill) as any researcher in the world, even from the rain forest.


Marc D. Hauser
Psychologist and Biologist, Harvard University; Author, Moral Minds


Let me answer this question by recounting a personal story that took place 25 years ago in Kenya.

I was in Amboseli National Park, Kenya, to complete my PhD thesis on the development of vervet monkey behavior. I had never travelled to Africa. Kenya was my first exposure to the continent. I gradually learned Kiswahili, the local language. I learned it while playing on the local soccer team. I also learned another custom, one that started out as a shock to my male-ness, but soon became a lovely manner of interaction: holding hands while talking to good male friends. When I returned to the United States and reached out to hold the hand of a good buddy, I received a dirty look, followed by some lovely expletives. I tried to explain that it was a way of connecting, and was not what he thought. Physical contact is good for us. I tucked this story away for years. It was resuscitated in Australia.

When we contact another human being — holding hands, touching a cheek — we are doing something that is evolutionarily ancient. Our primate ancestors did it all the time, and do it today: they groom. Yes, grooming removes bugs, but it has a massive social effect. It jazzes up the feel-good chemistry of the brain, the endorphins. Travel to a hunter-gatherer society, or watch National Geographic, and you will witness people in contact. To contact is to connect.

Today, most of our connections are through the Internet. The closest haptic experience we have is with our keyboards or the magical glass of an iPhone. We Twitter, Facebook, Chat, IM, Google-Talk, and Skype. And there is even chatiquette to make sure we do it with, you know, appropriate decorum! As remarkable as these technologies are, and as wonderful as they are in enabling us to stay in touch with friends and family who live in other countries or even other states, they have caused a fundamental decline in our capacity for normal, face-to-face interaction. They have, in a word, enabled us to be mindblind, insensitive to others' body language, to the way they hold themselves, and express feelings in an eyebrow or curled nose. Our capacity to connect through the Internet may be breeding a generation of social degenerates.

And online chatting is only one source of disconnect, of breaking the human physical bond. We now kill without seeing our enemies, running the show, as first witnessed in Desert Storm, by remote control, coordinated by private Internet links. The days of looking your enemy in the eye, and driving a knife into his body, are over! So too are we witnessing the decline of the hands-on doctor, the medical man of compassion. Surgeries are being handed over to robots. Of course, doctors control them today. But they no longer have to touch the patient. In fact, because of the Internet, a gifted surgeon in Boston can guide a beginner in Bangkok, without even meeting the patient, let alone touching his body.

Lest I be misunderstood, I do not have Webophobia, greatly profit from the Internet as a consummate informavore, and am a passionate one-click Amazonian. But our capacity to connect is causing a disconnect. Perhaps Web 3.0 will enable a function to virtually hold hands with our Twitter friends.


Author, Does IT Matter?; The Big Switch


As the school year began last September, Cushing Academy, an elite Massachusetts prep school that's been around since Civil War days, announced that it was emptying its library of books. In place of the thousands of volumes that had once crowded the building's shelves, the school was installing, it said, "state-of-the-art computers with high-definition screens for research and reading" as well as "monitors that provide students with real-time interactive data and news feeds from around the world." Cushing's bookless library would become, boasted headmaster James Tracy, "a model for the 21st-century school."

The story gained little traction in the press — it came and went as quickly as a tweet — but to me it felt like a cultural milestone. A library without books would have seemed unthinkable just twenty years ago. Today, the news almost seems overdue. I've made scores of visits to libraries over the last couple of years. Every time, I've seen more people peering into computer screens than thumbing through pages. The primary role played by libraries today seems to have already shifted from providing access to printed works to providing access to the Internet. There's every reason to believe that trend will only accelerate.

"When I look at books, I see an outdated technology," Mr. Tracy told a reporter from the Boston Globe. His charges would seem to agree. A 16-year-old student at the school took the disappearance of the library books in stride. "When you hear the word 'library,' you think of books," she said. "But very few students actually read them."

What makes it easy for an educational institution like Cushing to jettison its books is the assumption that the words in books are the same whether they're printed on paper or formed of pixels or E Ink on a screen. A word is a word is a word. "If I look outside my window and I see my student reading Chaucer under a tree," said Mr. Tracy, giving voice to this common view, "it is utterly immaterial to me whether they're doing so by way of a Kindle or by way of a paperback." The medium, in other words, doesn't matter.

But Mr. Tracy is wrong. The medium does matter. It matters greatly. The experience of reading words on a networked computer, whether it's a PC, an iPhone, or a Kindle, is very different from the experience of reading those same words in a book. As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It's designed to scatter our attention. It doesn't shield us from environmental distractions; it adds to them. The words on a computer screen exist in a welter of contending stimuli.

The human brain, science tells us, adapts readily to its environment. The adaptation occurs at a deep biological level, in the way our nerve cells, or neurons, connect. The technologies we think with, including the media we use to gather, store, and share information, are critical elements of our intellectual environment and they play important roles in shaping our modes of thought. That fact has not only been proven in the laboratory; it's evident from even a cursory glance at the course of intellectual history. It may be immaterial to Mr. Tracy whether a student reads from a book or a screen, but it is not immaterial to that student's mind.

My own reading and thinking habits have shifted dramatically since I first logged onto the Web fifteen or so years ago. I now do the bulk of my reading and researching online. And my brain has changed as a result. Even as I've become more adept at navigating the rapids of the Net, I have experienced a steady decay in my ability to sustain my attention. As I explained in 2008, "what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles." Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it's hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.

There are as many human brains as there are human beings. I expect, therefore, that reactions to the Net's influence, and hence to this year's Edge question, will span many points of view. Some people will find in the busy interactivity of the networked screen an intellectual environment ideally suited to their mental proclivities. Others will see a catastrophic erosion in the ability of human beings to engage in calmer, more meditative modes of thought. A great many will likely be somewhere between the extremes, thankful for the Net's riches but worried about its long-term effects on the depth of individual intellect and collective culture.

My own experience leads me to believe that what we stand to lose will be at least as great as what we stand to gain. I feel sorry for the kids at Cushing Academy.

Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Mirror Worlds


The Internet is virtualizing the universe, which changes the way I act and think. "Virtualization" (a basic historical transition, like "industrialization") means that I spend more & more of my time acting-within and thinking about the mirror-reflection of some external system or institution in the (smooth, pond-like) surface of the Internet. But the continuum of the Cybersphere will emerge from today's bumpy cob-Web when Virtualization reaches the point at which the Internet develops its own emergent properties and systems: when we stop looking at the pixels (the many separate sites and services that make up the Web) and look at the picture. (It's the picture, not the pixels! Eventually top-down thinking will replace bottom-up engineering in the software world—which will entail roughly a 99.9% turnover in the current population of technologists.)

Conversation spaces, for example, will be simple emergent systems in the Cybersphere, where I talk and listen (or read and write) in a space containing people with whom I like to converse, with no preliminary set-up (so long as there's a computer nearby), as if I were in a room with friends. If I want someone's attention I say his name or look at him; if I speak a little louder, I'm seeking a general discussion. If I say "Let's talk about Jasper Johns," the appropriate group of people materializes. If one of them is busy, I can speak now & he can speak back to me later, & I can respond later still. (Some people claim to be good at multi-tasking; we'll see how many slow-motion conversations they can keep going simultaneously.)

Today there are many universities & courses online; eventually, as Virtualization progresses, we'll see many or most absorbed into a world-university where you can walk the halls, read the bulletin boards & peek into classrooms within a unified space — without caring which conventional university or Web site contributed what. We'll see new types of institutions and objects emerge, too; virtual objects and institutions will absorb their own histories (like cloth absorbing the fragrance of flowers), so I can visit Virtual Manhattan now or roll it backwards in time; a large subset of all the knowledge that exists about (say) Wells Cathedral is absorbed into the virtual or emergent Wells Cathedral. At Virtual Wells, I can dive deeper for detail about any aspect of the place, or roll the building (& its associated ideas and institutions) backwards in time until they vanish "into the mists of history"; or, for that matter, tentatively push Virtual Wells forward in time (which is not so easy — like pushing something uphill), & see what can be calculated, forecast or guessed about the cathedral's future a day, a week or a thousand years from now.

Virtualization has the important intellectual side-effect of leading us towards a better understanding of the relation between emergent properties & virtual machines or systems. Thus "I" am an emergent property of my body & mind; "I" (my subjective experience of the world & myself) am a virtual machine, of sorts; but "I" (or "consciousness") am just as real (despite being virtual) as the pull-down menu built of software — or the picture that emerges from the pixels. Like industrialization, virtualization is an intellectual as well as a technological & economic transition; like industrialization, it's a change in the texture of time.

Panasonic Professor of Robotics, MIT Computer Science and Artificial Intelligence Lab; Author, Flesh and Machines


When a companion heads to the bathroom during dinner I surreptitiously pull out my iPhone to check my email and for incoming SMS. When I am writing computer code I have my email inbox visible at the corner so that I can see if new messages arrive — even though I know that most that do arrive will be junk that has escaped my spam filters. When I am writing a paper, or letter, or anything else serious I flip back and forth scanning my favorite news sites for new gems, or during weekdays I check on stock prices — they might be different than they were five minutes ago.

I recently realized why I enjoy doing a mindless but timed Sudoku puzzle so much — the clock stops me from breaking off to go graze on the endless variety of intellectual stimulations that the Web can bring to me. Tragically, Sudoku is my one refuge from information-provoked attention deficit disorder.

The Internet is stealing our attention. It competes for it with everything else we do. A lot of what it offers is high-quality competition. But unfortunately a lot of what it offers is merely good at capturing our attention, and provides us with little of long-term import — sugar-filled carbonated sodas for our mind.

We, or at least I, need tools that will provide us with the Diet-Internet, the version that gives us the intellectual caffeine that lets us achieve what we aspire to, but which doesn't turn us into hyper-active intellectual junkies.

Recently, as reported in Nature, an open group of people interested in mathematics (including some of the best currently active mathematicians in the world) used wikis and blogs to come up with a new and elegant proof of the Hales-Jewett theorem in 37 days. The Internet provided a new forum for geographically disparate people to collaborate and to contribute new insights, each small and incremental, enabling a result that at best might have taken the brightest of them many months or years to achieve individually.

We can now find just about any scientific paper we want online — I've found some old ones of mine that I had no idea were digitized — I was a smart young thing once, I must say. Soon just about everything ever written or recorded will be available in some form on the Internet, immediately.

The two promises, ease of collaboration and instant access to any and all information, do indeed change the way we work. Just as Arabic numerals empowered our computation abilities, and just as mass-produced books empowered many more people to have a reference library, and just as the tape recorder and camera empowered us to record data better for careful analysis, and just as calculators and computers empowered us to simulate physical systems without a direct physical analog, the Internet has empowered us to do new and grander things and more quickly than were previously possible.

But there are kinks yet to be worked out, beyond the theft of our attention. There is stability of pointers (on our desktop machines our files move around on the disk, but the pointers to them automagically update to the new location), there is stability of format (so that old movies or documents are still readable), there is the issue of being able to aggregate digital media into manipulable containers (I used to use cardboard portfolio file cases to organize multiple media for each of my current projects), and then there is that pesky problem of business models, so that people have a way of getting paid for things that they do which we all use.

We're still in the middle of it. We operate in new ways, but those ways have not yet stabilized. Ultimately they will, at least for some of us. I'm hoping that I will find my way into that group!

Psychologist, Yale University; Author, Descartes' Baby


When I was a boy, I loved the science-fiction idea of a machine that could answer any factual question. It might be a friendly robot, or a metal box you keep in your house, or one of the components of a starship. You just ask "Computer: How far away is Mars?" or "Computer: List the American presidents in order of height," and a toneless voice would immediately respond.

I own several such machines right now, including an iPhone that fits in my pocket, all of which access information on the Internet. (Disappointingly, I can't actually talk to any of them — the science-fiction writers were optimistic in this regard.) But the big surprise is that much of this information is not compiled by corporations, governments, or universities. It comes from volunteers. Wikipedia is the best-known example, with millions of articles created by millions of volunteer editors, but there are also popular sites such as amazon.com and tripadvisor.com which contain countless unpaid and anonymous reviews.

People have wondered whether this information is accurate (answer: mostly yes), but I'm more interested in its very existence. I am not surprised by the scammers, the self-promoters and the haters. But why do people devote their time and energy to anonymously donating accurate and useful information? We don't put twenty-dollar bills in strangers' mailboxes; why are we giving them our time and expertise? Comments on blogs pose a similar puzzle, something nicely summarized in the classic xkcd cartoon where someone is typing frantically on the computer; when asked to come to bed, the person says, "I can't. This is important ... Someone is wrong on the Internet."

Apparently the Internet evokes the same social impulses that arise in face-to-face interactions. If someone is lost and asks you for directions, you are unlikely to refuse or to lie. It is natural, in most real-world social contexts, to offer an opinion about a book or movie you like; or to speak up when the topic is something you know a lot about. The proffering of information on the Internet is the extension of this everyday altruism. It illustrates the extent of human generosity in our everyday lives and also shows how technology can enhance and expand this positive human trait, with real beneficial results. People have long said that the Web makes us smarter; it might make us nicer as well.
