
Film-Maker; Founder, free-form.tv; Lybba.org


The promise of the web, when it was first kicked around at CERN and DARPA, was to create a decentralized exchange of information. The grand power of that idea is that insight can come from literally anywhere. People with differing ideas and backgrounds can test their theories against the world, and at the end of it all, may the best idea win. That's powerful. The fact that the information can be examined by so many different kinds of people from anywhere on Earth is the Internet's true power, and it's the source of my fascination with it. Right now a little kid can browse the raw data coming from the Large Hadron Collider, or search the stars for signals of alien life with the SETI project. Anyone can discover the next world-changing breakthrough. That's the point of the Internet.

Also, I think the contribution of search engines in simplifying the research process can't be overstated. They give me, and everybody else, the ability to conduct research instantly, on our own terms. It's a tremendous leap from what I had to do 10 years ago to find anything out, from knowing who my interview subjects are to where I can get the best BLT in Hollywood, and still, I think the web is in its infancy. The great hubs of information we've constructed, and the tools to traverse them, like Google, Wikipedia, and Facebook, are only going to get deeper and more resonant as we learn how to communicate over them more effectively. When our collective sources of knowledge improve, we will be better for it and our lives will be more meaningful. Just think about what we can do when these tools are applied to the worlds of medicine, science, and art. I can't wait to see what a world full of instant knowledge and open inquiry will bring.

Today, the Internet permeates pretty much all of my thoughts and actions. I access it with my phone, my computer, at home, at work. It gives me untold quantities of new knowledge, inspiration, the ability to connect. I interact with people all over the world from different fields and walks of life, and I see myself and others becoming interconnected hubs of information that the full range of human experience passes through. With the Internet, I feel like I am never truly alone, with the very ends of the Earth a few clicks away.

I was talking with George Whitesides not long ago about the way to approach innovation. Almost as an aside, he said that the only way to make advances was to have five different strategies in the hopes that one would work out. Well, the Internet is a place where I can pick from the sum of all strategies people have tried out beforehand, and if I think of something new, I can put it up there to share with the world.

I was at the Mayo Clinic doing a film project on a rare condition called NMO, and I heard the story of how the diagnostic test for this condition was discovered by accident. An MS doctor was speaking at a symposium, and a cancer researcher heard his results. That chance encounter led to the creation of the test. To me that's not an accident at all. It happened because someone, maybe the Mayo brothers themselves, put in place a system: making the symposium an event that disparate researchers and physicians would attend. The insight came because the platform made it possible for these people and ideas to come together, and that made possible a better level of understanding, and so on and so forth.

When I was a child I learned from looking at the world and reading books. The knowledge I craved was hidden away. Much was secret and unavailable. In my youth, you had to dig deep and explore to find what you were looking for, and often what you wanted was locked up and out of reach. To get from Jack Kerouac to Hank Williams to the pentatonic scale used to be quite a journey. Now, it can happen in an instant. Some people would say that the old way was a good thing. I disagree.

Richard Clarke Cabot Professor of Social Ethics, Department of Psychology, Harvard University


My first encounter with the information highway came in the form of a love letter in 1982. My boyfriend had studied artificial intelligence at Carnegie Mellon in the mid-1970s, and worked at an IBM lab on the East Coast while I was in graduate school in the Midwest. He had pestered me to get an account on something called BITNET. After procrastinating, because I didn't see the point of it, there I was, connected to him without paying AT&T a penny. So that's what the net was good for, I thought, and recommended it wholeheartedly to every couple struggling to manage a long-distance relationship.

Almost 30 years later, I cannot say that the Internet has changed, even an iota, how I think. How I think is something I get from the millions of years of the evolution of my species. The way I think is something I get from the remarkable "thinkers" in my environment. But what the Internet has surely done is to change what I think about, what I know, and what I do. It has done so in stupendous ways, and I mention the single most significant of such encounters.

In the mid-1990s, I came to work on a method for gaining access to the way in which the mind works automatically, unreflectively, less consciously. My students and I studied how thoughts and feelings about social groups (race, gender, class, age, etc.) that we might consider unacceptable nevertheless came to have a presence in our minds. This situation, we recognized, didn't result from any simple obtuseness on the part of human beings themselves; it was the mind's nature that made it so, that blocked access. Remarkably, I could test myself and I learned that my own mind contained thoughts and feelings of which I was not aware; that those thoughts and feelings weren't ones I wanted to possess or was proud of; yet, much as I might deny them, they were a part of who I was.

In 1998, my collaborator Tony Greenwald and I decided that it was time to develop a version of the test, called the Implicit Association Test or IAT, for the web. There were no models for doing this; there were no such experiments by behavioral scientists at the time. But we had talent and grit in the person of Brian Nosek (a graduate student at Yale at the time), a visionary in Phil Long (Yale's main IT overseer), and a scrupulous and effective Institutional Review Board that worked through the ethical details of such a presence on the web.

We went live on September 29th, 1998, agreeing that our main purpose for placing the IAT on the Internet was not research as much as it was education. We believed that the method we had developed could provide a moment of self-reflection and learning. That if we did it right, we could engage thousands, even millions, in the task of asking where the stuff in their heads comes from, in what form it sits there, and what they may want to do about it if they themselves did not approve of it.

In the very first days, a large news network placed a link to our site, and there was no looking back. Hundreds of people visited, sampled the IAT, and fired off their responses at us. Interactions with them about technical issues but even more so about the reactions to the experience forced us to write new language and modify our own presentation. By the end of the first month, we were the stunned recipients of 40,000 completed IATs. We couldn't have learned what we did in that month in half a lifetime had we stayed with the traditional platform for research.

This primarily educational site did change the research enterprise itself. A research question involving an alternative hypothesis posed on day 1 could be answered by day 2, because of the amount of data that flowed in daily. The very nature of research changed: in the collaborations that mushroomed, in the diversity of the people who participated, and in the sheer amount we were able to learn and know at high speed.

The Internet changed the quality of what we know and how confident we can be in our assessments of what we know. It changed our notion of what it means to be in constant public dialog about our science. It changed our relationships with our participants, with whom there can be a real discussion, sometimes many months after an initial interaction. It also changed our relationship with the media, whose members themselves became research subjects before communicating about the work. Most surprising was the discovery that the vast majority of visitors were willing to entertain the notion that they may not know themselves. Without the Internet we might have believed that such openness was the limited privilege of the intellectual elite. Now we know better.

Of course, this science will always require other forms of gathering data besides the Internet. Of course, there are serious limits to what can be done to understand the human mind using the vehicle of the Internet. But it is safe to say that the Internet allowed us to perform the first large-scale study of an aspect of social cognition. Today, we have more than 11 million pieces of IAT data from implicit.harvard.edu and its predecessor site. The topics cover what the site is best known for (automatic attitudes toward age, race/ethnicity, sexuality, skin color, religion; automatic stereotypes of foreignness, math/science, career-home), but also political attitudes in the last three presidential elections, and dozens of research projects on matters concerning health, mental health, consumer behavior, politics, medical practice, business practice, legal matters, and educational interests. Any person with access to the net and a desire to spend a few minutes locked in battle with the IAT is a potential participant in the project. Teachers and professors, corporations and nonprofits all over the world use the site for their own educational purposes.

The site yields 20,000 completed IATs per week, and involves specialized sites for 33 countries in 22 languages. There are no advertisements. Somehow, people find it, and stay, we think for the simple reason that they want to understand themselves better.

Founder and CEO of O'Reilly Media, Inc.


Many years ago, I began my career in technology as a technical writer, landing my first job writing a computer manual on the same day that I saw my first computer. The one skill I had to rely on was one I had honed in my years as a reader, and in my university training in Greek and Latin classics: the ability to follow the breadcrumb trail of words back to their meaning.

Unfamiliar with the technology I was asked to document, I had to recognize landmarks and to connect the dots, to say "these things go together." I would read a specification written by an engineer, over and over, until I could read it like a map, and put the concepts in the right order, even if I didn't fully understand them yet. That understanding would only come when I followed the map to its destination.

Over the years, I honed this skill, and when I launched my publishing business, the skill that I developed as an editor was the skill of seeing patterns. "Something is missing here." "These two things are really the same thing seen from different points of view." "These steps are in the wrong order." "In order for x to make sense, you first have to understand y." Paula Ferguson, one of the editors I hired, once wrote that "all editing is pattern matching." You study a document, and you study what the document is talking about, and you work on the document until the map matches the territory.

In those early years of trying to understand the industry I'd been thrust into, I read voraciously, and it was precisely because I didn't understand everything that I read that I honed my ability to recognize patterns. I learned not as you are taught in school, with a curriculum and a syllabus, but with the explorations of a child, who composites a world-view bit by bit out of the stuff of everyday life.

When you learn in this way, you tell your own story and draw your own map. When my co-worker Dale Dougherty created GNN, the Global Network Navigator, the first commercial web portal, in 1993, he named it after The Navigator, a 19th-century handbook that documented the shifting sandbars of the Mississippi River.

Over the years, my company has been a map-maker in the world of technology, spotting trends, documenting them, and telling stories about where the sandbars lie, the portages that cut miles off the journey, as well as the romance of travel and the glories of the destination. In telling stories to explain what we've learned and encourage others to follow us into the West, we've become not just mapmakers but meme makers. Open Source, Web 2.0, the Maker movement, Government as a Platform are all stories we've had a role in telling.

It used to be the case that there was a canon, a body of knowledge shared by all educated men and women. Now, we need the skills of a scout, the ability to learn, to follow a trail, to make sense out of faint clues, and to recognize the way forward through confused thickets. We need a sense of direction that carries us onward through the wood despite our twists and turns. We need "soft eyes" that take in everything we see, not just what we are looking for.

The information river rushes by. Usenet, email, the World Wide Web, RSS, Twitter: each generation carrying us faster than the one before.

But patterns remain. You can map a river as well as you can map a mountain or a wood. You just need to remember that the sandbars may have moved the next time you come by.

Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, The Lightness of Being


(Apology: The question "How has the Internet changed the way you think?" is a difficult one for me to answer in an interesting way; the truth is, I use the Internet as an appliance, and it hasn't profoundly changed the way I think, at least not yet. So I've taken the liberty of interpreting the question more broadly, in the form "How should the Internet, or its descendants, affect how people like me think?")

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, to sit down to the slates, and to say to each other (with a friend as witness, if they liked): "Let us calculate." — Leibniz (1685)

Clearly Leibniz was wrong here, for without disputation philosophers would cease to be philosophers. And it is difficult to see how any amount of calculation could settle, for example, the question of free will. But if, in Leibniz' visionary program, we substitute "sculptors of material reality" for "philosophers", then we arrive at an accurate description of an awesome opportunity — and an unanswered challenge — that faces us today. This opportunity began to take shape roughly eighty years ago, as the equations of quantum theory reached maturity.

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. — P. A. M. Dirac (1929)

Much has happened in physics since Dirac's 1929 declaration. Physicists have found new equations that reach into the heart of atomic nuclei. High-energy accelerators have exposed new worlds of unexpected phenomena and tantalizing hints of Nature's ultimate beauty and symmetry. Thanks to that new fundamental understanding, we know how stars work, and how a profoundly simple but profoundly alien fireball evolved into the universe we inhabit today. Yet Dirac's bold claim holds up; while the new developments provide reliable equations for smaller objects and more extreme conditions than we could handle before, they haven't changed the rules of the game for ordinary matter under ordinary conditions. On the contrary, the triumphant march of quantum theory far beyond its original borders strengthens our faith in its soundness.

What even Dirac probably did not foresee, and what transforms his philosophical reflection of 1929 into a call to arms today, is that the limitation of being "much too complicated to be soluble" could be challenged. With today's chips and architectures, we can start to solve the equations for chemistry and materials science. By orchestrating the power of billions of tomorrow's chips, linked through the Internet or its successors, we should be able to construct virtual laboratories of unprecedented flexibility and power.

Instead of mining for rare ingredients, refining, cooking, and trying various combinations scattershot, we will explore for useful materials more easily and systematically, by feeding multitudes of possibilities, each defined by a few lines of code, into a world-spanning grid of linked computers.

What might such a world-grid discover? Some not unrealistic possibilities: friendlier high-temperature superconductors, which would enable lossless power transmission, levitated supertrains, and computers that aren't limited by the heat they generate; super-efficient photovoltaics and batteries, which would enable cheap capture and flexible use of solar energy, and wean us off carbon burning; super-strong materials, which could support elevators running directly from Earth to space.

The prospects we can presently foresee, exciting as they are, could be overmatched by discoveries not yet imagined. Beyond technological targets, we can aspire to a comprehensive survey of physical reality's potential. In 1964, Feynman posed this challenge:

Today, we cannot see whether Schrödinger's equation contains frogs, musical composers, or morality — or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way. — R. P. Feynman (1964)

How far can we see today? Not all the way to frogs or to musical composers (at least not good ones), for sure. In fact only very recently did physicists succeed in solving the equations of quantum chromodynamics (QCD) to calculate a convincing proton, by using the fastest chips, big networks, and tricky algorithms. That might sound like a paltry beginning, but it's actually an encouraging show of strength, because the equations of QCD are much more complicated than the equations of quantum chemistry. And we've already been able to solve those more tractable equations well enough to guide several revolutions in the material foundations of microelectronics, laser technology, and magnetic imaging. But all these computational adventures, while impressive, are clearly warm-up exercises. To make a definitive leap into artificial reality, we'll need both more ingenuity and more computational power.

Fortunately, both could be at hand. The SETI@home project has enabled people around the world to donate their idle computer time to sift radio waves from space, advancing the search for extraterrestrial intelligence. In connection with the Large Hadron Collider (LHC) project, CERN laboratory — where, earlier, the World Wide Web was born — is pioneering the GRID computer project, a sort of Internet on steroids, that will allow many thousands of remote computers and their users to share data and allocate tasks dynamically, functioning in essence as one giant brain. Only thus can we cope — barely! — with the gush of information that collisions at the LHC will generate. Projects like these are the shape of things to come.

Computers began to play chess by pure calculation in 1958, and rapidly became more capable, beating masters (1978), grandmasters (1988), and world champions (1997). In the later steps, a transition to "massively" parallel computers played a crucial role. Those special-purpose creations are mini-Internets (actually mini-GRIDs), networking dozens or a few hundred ordinary computers. It would be an instructive project, today, to set up a Chess@home network, or a GRID client, that could beat the best standalones. Players of this kind, once created, would scale up smoothly to overwhelming strength, simply by tapping into ever larger resources.

In the more difficult game of calculating quantum reality we, with the help of our silicon friends, presently play like weak masters. We know the rules, and make some good moves, but we often substitute guesswork for calculation, we miss inspired possibilities, and we take too long doing it. To do much better we'll need to make the dream of a world-GRID into a working reality. We'll need to find better ways of parceling out subtasks in ways that don't require intense communication, better ways of exploiting the locality of the underlying equations, and better ways of building in physical insight, to prune the solution space. These issues have not received the attention they deserve, in my opinion. Many people with the requisite training and talent feel it's worthier to discover new equations, however esoteric, than to solve equations we already have, however important their application.

People respond to the rush of competition and the joy of the hunt. Some well-designed prizes for milestone achievements in the simulation of matter could have a big impact, by focusing attention and a bit of glamour toward this tough but potentially glorious endeavor. How about, for example, a prize for calculating virtual water that boils at the right temperature?


As the Web becomes more comprehensive and searchable, it helps us see what's missing in the world. The emergence of more effective ways to detect the absence of a piece of knowledge is a subtle and slowly emerging contribution of the Web, yet an important one for the growth of human knowledge. I think we all use absence-detection when we try to squeeze information out of the Web, and it's worth considering both how it works and how it could be made more reliable and user-friendly.

The contributions of absence-detection to the growth of shared knowledge are relatively subtle. Absences themselves are invisible, and when they are recognized (often tentatively), they usually operate indirectly, by influencing the thinking of people who create and evaluate knowledge. Nonetheless, the potential benefits of better absence-detection can be measured on the same scale as the most important questions of our time, because improved absence-detection could help societies blunder toward somewhat better decisions about those questions.

Absence-detection boosts the growth of shared human knowledge in at least three ways:

Development of knowledge: Generally, for shared knowledge to grow, someone must invest effort to develop a novel idea into something more substantial (resulting in a blog post, a doctoral dissertation, or whatever). A potential knowledge-creator may need some degree of confidence that the expected result doesn't already exist. Better absence-detection can help build that confidence — or drop it to zero and abort a costly duplication.

Validation of knowledge: For shared knowledge to grow, something that looks like knowledge must gain enough credibility to be treated as knowledge. Some knowledge is born with credibility, inherited from a credible source, yet new knowledge, supported by evidence, can be discredited by arguments backed by nothing but noise. A crucial form of evidence for a proposition is sometimes the absence of credible evidence against it.

Destruction of anti-knowledge: Shared knowledge can also grow through the removal of anti-knowledge, for example, by discrediting false ideas that had displaced or discredited true ones. Mirroring validation, a crucial form of evidence against the credibility of a proposition is sometimes the absence of credible evidence for it.

Identifying what is absent by observation is inherently more difficult than identifying what is present, and conclusions about absences are usually substantially less certain. The very idea runs counter to the adage, being based on the principle that absence of evidence sometimes is evidence of absence. This can be obvious: What makes you think there's no elephant in your room? Of course, good intellectual housekeeping demands that reasoning of this sort be used with care. Perceptible evidence must be comprehensive enough that a particular absence, in a particular place, is significant: I'm not at all sure that there's no gnat in my room, and can't be entirely sure that there's no elephant in my neighbor's yard.
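The principle that absence of evidence sometimes is evidence of absence has a standard Bayesian reading, which makes the elephant/gnat contrast precise (my gloss on the argument, not part of the original text):

```latex
% If a hypothesis H would likely produce visible evidence E,
% i.e. P(E \mid H) > P(E \mid \neg H), then failing to observe E lowers H:
\frac{P(H \mid \neg E)}{P(\neg H \mid \neg E)}
  = \frac{P(\neg E \mid H)}{P(\neg E \mid \neg H)}
    \cdot \frac{P(H)}{P(\neg H)}
  \;<\; \frac{P(H)}{P(\neg H)}.
```

The drop in the odds of $H$ is large exactly when the search is comprehensive, so that $P(\neg E \mid H)$ is tiny (an elephant in your room could hardly go unseen), and negligible when the search is weak (a gnat is easy to miss).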

Reasonably reliable absence-detection through the Web requires both good search and dense information, and this is one reason why the Web becomes effective for the task only slowly, unevenly, and almost imperceptibly. Early on, an absence in the Web shows a gap in the Web; only later does an absence begin to suggest a gap in the world itself.

I think there's a better way to detect absences, one that bypasses ad hoc search by creating a public place where knowledge comes into focus:

We could benefit immensely from a medium that is as good at representing factual controversies as Wikipedia is at representing factual consensus.

What I mean by this is a social software system and community much like Wikipedia — perhaps an organic offshoot — that would operate to draw forth and present what is, roughly speaking, the best evidence on each side of a factual controversy. To function well, it would require a core community that shares many of the Wikipedia norms, but it would invite advocates to present a far-from-neutral point of view. In an effective system of this sort, competitive pressures would drive competent advocates to participate, and incentives and constraints inherent in the dynamics and structure of the medium would drive advocates to pit their best arguments head-to-head and point-by-point against the other side's best arguments. Ignoring or caricaturing opposing arguments simply wouldn't work, and unsupported arguments would become more recognizable.

Success in such an innovation would provide a single place to look for the best arguments that support a point in a debate, and with these, the best counter-arguments — a single place where the absence of a good argument would be good reason to think that none exists.

The most important debates could be expected to gain traction early. The science of climate change comes to mind, but there are many others. The benefits of more effective absence-detection could be immense and concrete.

Researcher, MIT Mind Machine Project


Filtering, not remembering, is the most important skill for those who use the Internet. The Internet immerses us in a milieu of information — not for almost 20 years has a Web user read every available page — and there's more each minute: Twitter alone processes hundreds of tweets every second, from all around the world, all visible for anyone, anywhere, who cares to see. Of course, the majority of this information is worthless to the majority of people. Yet anything we care to know — what's the function for opening files in Perl? how far is it from Hong Kong to London? what's a power law? — is out there somewhere.

I see today's Internet as having three primary, broad consequences: 1) information is no longer stored and retrieved by people, but is managed externally, by the Internet, 2) it is increasingly challenging and important for people to maintain their focus in a world where distractions are available anywhere, and 3) the Internet enables us to talk to and hear from people around the world effortlessly.

Before the Internet, most professional occupations required a large body of knowledge, accumulated over years or even decades of experience. But now, anyone with good critical thinking skills and the ability to focus on the important information can retrieve it on demand from the Internet, rather than her own memory. On the other hand, those with wandering minds, who might once have been able to focus by isolating themselves with their work, now often cannot work without the Internet, which simultaneously furnishes a panoply of unrelated information — whether about their friends' doings, celebrity news, limericks, or millions of other sources of distraction. The bottom line is that how well an employee can focus might now be more important than how knowledgeable he is. Knowledge was once an internal property of a person, and focus on the task at hand could be imposed externally, but with the Internet, knowledge can be supplied externally, but focus must be forced internally.

Separable from the intertwined issues of knowledge and focus is the irrelevance of geography in the Internet age. On the transmitting end, the Internet allows many types of professionals to work in any location — from their home in Long Island, from their condo in Miami, in an airport in Chicago, or even in flight on some airlines — wherever there's an Internet connection. On the receiving end, it allows for an Internet user to access content produced anywhere in the world with equal ease. The Internet also enables groups of people to assemble based on interest, rather than on geography — collaboration can take place between people in Edinburgh, Los Angeles, and Perth nearly as easily as if they lived in neighboring cities.

In the future, these trends will continue, with the development of increasingly subconscious interfaces. Already, making an Internet search is something many people do without thinking about it, like making coffee or driving a car. Within the next 50 years, I expect the development of direct neural links, making the data that's available at our fingertips today available at our synapses in the future, and making virtual reality actually feel more real than traditional sensory perception. Information and experience could be exchanged between our brains and the network without any conscious action. And at some point, knowledge may be so external, all knowledge and experience will be shared universally, and the only notion of an "individual" will be a particular focus — a point in the vast network that concerns itself only with a specific subset of the information available.

In this future, knowledge will be fully outside the individual, focus will be fully inside, and everybody's selves will truly be spread everywhere.

Artists, Media Practitioners, Curators, Editors and Catalysts of Cultural Processes


We are a collective of three people who began thinking together, almost twenty years ago, before any one of us ever touched a computer, or had logged on to the Internet.

In those dark days of disconnect, in the early years of the final decade of the last century in Delhi, we plugged into each other's nervous systems by passing a book from one hand to another, by writing in each other's notebooks. Connectedness meant conversation. A great deal of conversation. We became each other's databases and servers, leaning on each other's memories, multiplying, amplifying and anchoring the things we could imagine by sharing our dreams, our speculations and our curiosities.

At the simplest level, the Internet expanded our already capacious, triangulated nervous system to touch the nerves and synapses of a changing and chaotic world. It transformed our collective capacity to forage for the nourishment of our imaginations and our curiosities. The libraries and archives that we had only dreamt of were now literally at our fingertips. The Internet brought with it the exhilaration and the abundance of a frontier-less commons, along with the fractious and debilitating intensity of de-personalized disputes in electronic discussion lists. It demonstrated the possibilities of extraordinary feats of electronic generosity and altruism when people shared enormous quantities of information on peer-to-peer networks, and at the same time it provided early exposure to, and warnings about, the relentless narcissism of vanity blogging. It changed the ways in which the world became present to us and the ways in which we became present to the world, forever.

The Internet expands the horizon of every utterance or expressive act to a potentially planetary level. This makes it impossible to imagine a purely local context or public for anything that anyone creates today. It also de-centres the idea of the global from any privileged location. No place is any more or less the centre of the world than any other anymore. As people who once sensed that they inhabited the intellectual margins of the contemporary world simply because of the nature of geo-political arrangements, we know that nothing can be quite as debilitating as the constant production of proof of one's significance. The Internet has changed this one fact comprehensively. The significance, worth or import of one's statements is no longer automatically tied to the physical facts of one's location along a still unequal geo-political map.

While this does not mean that as artists, intellectuals or creative practitioners we stop considering or attending to our anchorage in specific co-ordinates of actual physical locations, what it does mean is that we understand that the concrete fact of our physical place in the world is striated by the location's transmitting and receiving capacities, which turns everything we choose to create into either a weak or a strong signal. We are aware that these signals go out, not just to those we know and to those who know us, but to the rest of the world, through possibly endless relays and loops.

This changes our understanding of the public for our work. We cannot view our public any longer as being arrayed along familiar and predictable lines. The public for our work, for any work that positions itself anywhere vis-a-vis the global digital commons is now a set of concentric and overlapping circles, arranged along the ripples produced by pebbles thrown into the fluid mass of the Internet. Artists have to think differently about their work in the time of the Internet because artistic work resonates differently, and at different amplitudes. More often than not, we are talking to strangers on intimate terms, even when we are not aware of the actual instances of communication.

This process also has its mirror. We are also listening to strangers all the time. Nothing that takes place anywhere in the world and is communicated on the Internet is at a remove any longer. Just as everyone on the Internet is a potential recipient and transmitter of our signals, we too are stations for the reception and relay of other people's messages. This constancy of connection to the nervous systems of billions of others comes with its own consequences.

No one can be immune to the storms that shake the world today. What happens down our streets becomes as present in our lives as what happens down our modems. This makes us present in vital and existential ways to what might be happening at a great distance, but it also brings with it the possibility of a disconnect from what is happening around us, or near us, if it happens not to be online.

This is especially true of things and people that drop out, or are forced to drop out of the network, or are in any way compelled not to be present online. This foreshortening (and occasionally magnification) of distances and compression of time compels us to think in a more nuanced way about attention. Attention is no longer a simple function of things that are available for the regard of our senses. With everything that comes to our attention, we now have to ask: 'What obstacles did it have to cross to traverse the threshold of our considerations?' And while asking this, we have to understand that obstacles to attention are no longer a function of distance.

The Internet also alters our perception of duration. Sometimes, when working on an obstinately analog process such as the actual fabrication of an object, the internalized shadow of fleeting Internet time in our consciousness makes us perceive how the inevitable delays inherent in the fashioning of things (in all their messy 'thingness') ground us into appreciating the rhythms of the real world. In this way, the Internet's pervasive co-presence with real world processes ends up reminding us that our experience of duration is now a layered thing. We now have more than one clock, running in more than one direction, at more than one speed.

The simultaneous availability of different registers of time made manifest by the Internet also creates a continuous archive of our online presences and inscriptions. A message is archived as soon as it is sent. The everyday generation of an internal archive of our work, and the public archive of our utterances (on online discussion lists and on Facebook), means that nothing (not even a throwaway observation) is a throwaway observation anymore. We are all accountable to, and for, the things we have written in emails or posted on online fora. We are yet to get a full sense of what this actually implies in the longer term. The automatic generation of a chronicle and a history colours the destiny of all statements. Nothing can be consigned to amnesia, even though it may appear to be insignificant. Conversely, no matter how important a statement may have appeared when it was first uttered, its significance is compromised by the fact that it is ultimately filed away as just another datum, a pebble, in a growing mountain range.

Whosoever maintains an archive of their practice online is aware of the fact that they alter the terms of their visibility. Earlier, one assumed invisibility to be the default mode of life and practice. Today, visibility is the default mode, and one has to make a special effort to withhold any aspect of one's practice from visibility. This changes the way we think about the relationship between the private memory and public presence of a practice. It is not a matter of whether this leads to a loss of privacy or an erosion of spaces for intimacy, it is just that issues such as privacy, intimacy, publicity, inclusion and seclusion are now inflected very differently.

Finally, the Internet changes the way we think about information. The fact that we do not know something that exists in the extant expansive commons of human knowledge can no longer intimidate us into reticence. If we do not know something, someone else does, and there are enough ways around the commons of the Internet that enable us to get to sources of the known. The unknown is no longer that which is unavailable, because whatever is present is available on the network and so can be known, at least nominally if not substantively. A bearer of knowledge is no longer armed with secret weapons. We have always been auto-didacts, and knowing that we can touch what we do not yet know and make it our own, makes working with knowledge immensely playful and pleasurable. Sometimes, a surprise is only a click away.

Xeni Jardin
Tech Culture Journalist; Partner, Contributor, Co-editor, Boing Boing; Executive Producer, host, Boing Boing Video


I travel regularly to places with bad connectivity. Small villages, marginalized communities, indigenous land in remote spots around the globe. Even when it costs me dearly, on a spendy satphone or in gold-plated roaming charges, my search-itch, my tweet twitch, my email toggle, those acquired instincts now persist.

The impulse to grab my iPhone or pivot to the laptop is now automatic when I'm in a corner my own wetware can't get me out of. The instinct to reach online is so familiar now, I can't remember the daily routine of creative churn without it.

The constant connectivity I enjoy back home means never reaching a dead end. There are no unknowable answers, no stupid questions. The most intimate or not-quite-formed thought is always seconds away from acknowledgement by the great "out there."

The shared mind that is the Internet is a comfort to me. I feel it most strongly when I'm in those far-away places, tweeting about tortillas or volcanoes or voudun kings, but only because in those places so little else is familiar. But the comfort of connectivity is an important part of my life when I'm back on more familiar ground, too, where I take it for granted.

The smartphone in my pocket yields more nimble answers than an entire paper library, grand and worthy as the library may be. The paper library doesn't move with me throughout the world. The knowledge you carry with you is worth more than the same knowledge it takes more minutes, more miles, more action steps to access. A tweet query, a Wikipedia entry, a Googled text string, all are extensions of the internal folding and unfolding I used to call my own thought. But the thought process that was once mine is now ours, even while in progress, even before it yields a finished work.

That's how the Internet changed the way I think. I used to think of thought as the wobbly, undulating trail I follow to arrive at a final, solid, completed work. The steps you take to the stone marker at the end. But when the end itself is digital, what's to stop the work from continuing to undulate, pulsate, and update, just like the thought that brought you there?

I often think now in short bursts of thought, parsed out 140 characters at a time, or blogged in rough short form. I think aloud and online more, because the call and response is a comfort to me. I'm spoiled now, spoiled in the luxury of knowing there's always a ready response out there, always an inevitable ping back. Even when the ping back is sour or critical, it comforts me. It says "You are not alone."

I don't believe there's such a thing as too much information. I don't believe Google makes us dumber, or that prolonged Internet fasts or a return to faxes are a necessary part of mind health. But data without the ability to divine is useless. I don't trust algorithms like I trust intuition: the art of dowsing through data. Once, wisdom was measured by memory, by the capacity to store and process and retrieve on demand. But we have tools for that now. We made machines that became shared extensions of mind. How will we define wisdom now? I don't know, but I can ask.

NYU/ITP Adjunct Professor; Lead Technology Writer, The New York Times Bits Blog.


The Internet is not changing how we think. Instead, we are changing how the Internet thinks.

The Internet has become a real-time perpetual time capsule. A bottomless invisible urn. A storage locker for every moment of our lives, and a place to allow anyone to dip in and retrieve those memories.

The Internet has killed the private diary hiding under my sister's mattress, and replaced it with a blog or social network.

Through the social sharing web, we have become an opt-everything society: sharing our feelings in status updates. Uploading digital pictures of everything, good or otherwise. We discuss what we're reading or watching, and then offer brutally honest critique. We tweet the birth of a child, or announce an engagement. And we are completely unaware of the viewers we talk with. I suspect we don't even care. (I know I don't.)

We are all just a part of an infinite conversation.

And no one stands above anyone else. The Internet gives everyone a bullhorn, and allows them to use it freely, wherever they see fit, to say whatever they want. In the past, bullhorns were expensive, as were printing presses, television studios, and radio stations. To reach large audiences required deep pockets. But now we are all capable of distributing our voices, opinions, and thoughts evenly.

When everyone has a bullhorn, no one individual can shout louder than the others; instead, it just becomes a really loud conversation.

The Web is capable of spreading information quicker than any virus known to man, and it's impossible to stop. Without these confabulations, the Web would be an empty wasteland of one-sided conversation, just like newspapers and television programs and radio stations used to be.

Most importantly now, the Web allows for an equilibrium of chatter. People use the same services to share and consume their vastly divergent views and interests and then, in turn, dice up the information accordingly.

The Internet has changed the way we think through numerous channels. But it's changed the way I think through one very simple action: every important moment of my life is documented, cataloged, and sent online to be shared and eulogized with whoever wants to engage in the conversation.

Actor, Writer, Director; Host of PBS program The Human Spark


Telephones make me anxious for some reason, so ever since I've been able to communicate over the Web I've seldom gone near the phone. But something strange has happened. At least once a day I have to stop and think about whether what I've just written can be misinterpreted. In email there's no instant modulation of the voice to correct a wrong tone, as there is on the phone, and even though I avoid irony when emailing anyone who's not a professional comedian or amateur curmudgeon, I sometimes have to send a second note to un-miff someone. This can be a problem with any written communication, of course, but email, Web postings, and texting all tempt us with speed, and that speed can cost us clarity. Increasingly, we communicate quickly, without the sound of that modulating voice. I'm even one of those people who will email someone across the room.

In addition, the Internet has connected so many millions of us into anonymous online mobs that the impression that something is true can be created simply by the sheer number of people who repeat it. (In the absence of other information, a crowded restaurant will often get more diners than an empty one, not always because of the quality of the food.)

Speed plus mobs. A scary combination. Together, will they seriously reduce the accuracy of information and our thoughtfulness in using it?

Somehow, we need what taking our time used to give us: thinking before we talk and questioning before we believe.

I wonder: is there an algorithm perking somewhere in someone's head right now that can act as a check against this growing hastiness and mobbiness? I hope so. If not, I may have to start answering the phone again.

Chinese Artist; Curator; Architectural Designer (The Bird's Nest); Cultural And Social Commentator; Activist


I only think on the Internet anymore. My thinking is now divided into on the net and off the net. If I'm not on the net, I don't think that much; when I'm on the net, I start to think. In this way, my thinking becomes always part of something else.

