
Internet Entrepreneur; Founder, Mahalo.com


As a former journalist I used to withhold judgment and refrain from speculating about breaking stories until "all the facts" were in. I used to keep a mental scorecard of an issue with the confirmed facts neatly organized. However, with the velocity of information and tools to curate and process it on the Internet, I've moved to speculation as my scorecard.

The "real time" Web means we get to flip our positions, argue all sides of a debate and test theories.

We're being lied to and manipulated more than we're being told the truth, so instead of trying to figure out what's true I'd rather speculate in my social network and see what comes back.

When the shooting at Fort Hood happened, I immediately speculated on Facebook and Twitter that Nidal Malik Hasan's name was probably an indication of a terrorism link — it couldn't be a coincidence, right? That was the first thing you thought, right? Dozens of responses came back, outraged that I would speculate to my 80,000 followers without "knowing for sure."

Most claimed we should wait until the authorities completed their investigation. A couple of folks thought I was showing some bias against Muslims, which, of course, I was!

Any investigator would follow the radical Muslim pattern when faced with the same evidence, and certainly the newscasters on CNN were thinking it. The terrorism connection at Fort Hood was so obvious that the CNN reporters made a point of saying that just because the name sounded like the names of 9/11 hijackers we shouldn't jump to conclusions. Really? Isn't that exactly what the investigators did? Isn't that what the Internet was doing while CNN anchors fumbled their way through the moment, trying to fill air time with anything BUT speculation about radical Muslims?

They've tracked Hasan's connections to a mosque in Virginia where two of the September 11th hijackers attended services. Speculation on the Internet was correct this time, and CNN was doing "the responsible thing" by not participating in it. Really? Doesn't speculation lead to debate which leads to, hopefully, some resolution?

Jumping to conclusions is a critical piece of information gathering, and we should be doing it more — not less. The Internet is built to route around bad routers and bad facts. Hasan's business card had "SoA" on it, which stands for, wait for it, "Soldier of Allah." If only someone had jumped to some conclusions about that fact on their Twitter account.

Consuming passive news gave way to commenting on blogs in 2003 and 2004. Now we all have blogs tethered to our mobile phones, even if they are micro in nature, with Facebook and Twitter accounts. We shouldn't wait for facts, we should be speculating and testing assumptions as news and knowledge unfolds.

Facts are, of course, valuable, but speculation gets me further and builds better Webs in my mind.

We've moved from jury to investigators, and the audience is on stage. Support thought bombs and the people who throw them into your social graph. It's messy, but essential. Study the reactions on either side of the aisle, because reactions can sometimes be more telling than the facts. That's how the Internet has changed my thinking: trust nothing, debate everything.

Cognitive Neuroscientist and Philosopher, Harvard University


Have you ever read a great book from before the mid 1990s and thought to yourself, "My Goodness! These ideas are so primitive! So… pre-Internet!" Me neither. The Internet hasn't changed the way we think any more than the microwave oven has changed the way we digest food. The Internet has provided us with unprecedented access to information, but it hasn't changed what we do with it once it's made it into our heads. This is because the Internet doesn't (yet) know how to think. We still have to do it for ourselves, and we do it the old-fashioned way.

One of the Internet's early disappointments was the now defunct Website "Ask Jeeves." (It was succeeded by Ask.com, which dropped Jeeves in 2006.) Jeeves appeared as a highly competent infobutler who could understand and answer questions posed in natural language. ("How was the East Asian economy affected by the Latin American debt crisis?" "Why do fools fall in love?") Anyone who spent more than a few minutes querying Jeeves quickly learned that Jeeves himself didn't understand squat. Jeeves was just a search engine like the rest, mindlessly matching the words contained in your question to words found on the Internet. The best Jeeves could do with your profound question — the best any search engine can do today — is direct you to the thoughts of another human being who has already attempted to answer a question related to yours. This is not to say that cultural artifacts can't change the way we think.

Jim Flynn has documented massive gains in IQ over the 20th Century (the "Flynn Effect"), which he attributes to our enhanced capacity for abstract thought, which he in turn attributes to the cognitive demands of the modern marketplace. Why hasn't the Internet had a comparable effect? The answer, I think, is that the roles of master and servant are reversed. We place demands on the Internet, but the Internet hasn't placed any fundamentally new demands on us. In this sense, the Internet really is like a butler. It gives us the things that we want faster and with less effort, but it doesn't give us anything that we couldn't otherwise get for ourselves and doesn't require us to do anything more than give comprehensible orders.

Someday we'll have a nuts-and-bolts understanding of complex abstract thought, which will enable us to build machines that can do it for us, and perhaps do it better than we do, and perhaps teach us a thing or two about it. But until then, the Internet will continue to be nothing more, and nothing less, than a very useful, and very dumb, butler.

Computer Scientist, UC Berkeley, School of Information; Author, Search User Interfaces


In graduate school, as a computer scientist whose focus was on search engines even before the Web, I always dreamed of an Internet that would replace the inefficiencies of libraries, making all important information easily available online. This amazingly came to pass, despite what seemed like insurmountable blockages in the early days.

But something I did not anticipate is how social the Internet would become. When the Web took off, I expected to see recipes online. But today I also expect to learn what other people thought about a recipe, including what ingredients they added, what salad they paired it with and who in their family liked or disliked it. This multitude of perspectives has made me a better cook.

Now if I enjoy a television show, within minutes or hours of the air time of the latest episode, I expect to be able to take part in a delightful, informed conversation about it, anchored by an essay by a professional writer, supported with high-quality user-contributed comments that not only enhance my pleasure of the show, but also reveal new insights.

And I can not only get software online, but in the last few years a dizzying cornucopia of free software components has appeared, making it possible to do research and development in days that would have taken months or years in the past. There have always been online forums to discuss software — in fact, coding was unsurprisingly one of the most common topics of early online groups. But the variety and detail of the kind of information that other people selflessly supply each other with today is staggering. And the design of online question-answering sites has moved from crufty to excellent in just a few years.

Most relevant to the scientists and researchers who contribute to the Edge question, we see the use of the Web to enhance communication in the virtual college, with academic meetings being held online, math proofs being done collaboratively on blogs, and deadly viruses being isolated within weeks by research labs working together online.

Sure, we used email in the early eighties, and there were online bulletin boards for at least a decade before the Web, but only a small percentage of the population used them, and usually over a very slow modem. In the early days of the Web, ordinary people's voices were limited primarily to information ghettos like Geocities; most text was produced by academics and businesses. There was very little give-and-take. By contrast, according to a 2009 Pew study, 51% of Internet users now post content online that they have created themselves, and 1 in 10 Americans post something online for others to see every day.

Of course, the increased participation means that there is an increase in the equivalent of what we used to call flame wars, or generally rude behavior, as well as a proliferation of false information and gathering places for people to plan and encourage hurtful activities. Some people think this ruins the Web, but I disagree. It's what happens when everyone is there.

Interestingly, the Edge Question, while innovative in format when it started, still does not allow readers to comment on the opinions offered. I am not saying if this is a good or a bad thing. The Edge Foundation's goal is to increase public understanding of science by encouraging intellectuals to "express their deepest thoughts in a manner accessible to the intelligent reading public." I just wonder if it is time to embrace the new Internet and let that public write back.

Dinosaur Paleontologist and Science Communicator; Author, Dinosaur Odyssey: Fossil Threads in the Web of Life


Like many others, my personal experience is that the Internet is both the Great Source for information and the Great Distractor, fostering compulsions to stay "connected," often at the expense of other, arguably more valuable aspects of life. I do not sense that the Internet alters the way that I think as much as it does the way I work; having the Great Source close at hand is simply irresistible, and I generally keep a window open on my laptop for random searches that pop into my head.

Nevertheless, I am much less concerned about "tweeners" like me who grew up before the Internet than I am with children of the Internet age, so-called "Digital Natives." I want to know how the Internet changes the way they think. As will no doubt be confirmed by answers to the Edge Annual Question, the jury is still out. Although the supporting research may still be years away, it seems likely that a lifetime of daily conditioning dictated by the rapid flow of information across glowing screens will generate substantial changes in brains, and thus thinking. Commonly cited potential effects include fragmented thinking and shorter attention spans, together with a concomitant reduction in (let alone interest in) reflection, introspection, and in-depth thought. Another oft-noted concern is the nature of our communications, which are becoming increasingly terse and decreasingly face-to-face.

But I have a larger fear, one rarely mentioned in these discussions—the extinction of experience. This term, which comes from author Robert Michael Pyle, refers to the loss of intimate experience with the natural world. Clearly, anyone who spends 10-plus hours each day with their attention focused on a screen is not devoting much time to experiencing the "real" world. More and more, it seems, real-life experience is being replaced by virtual alternatives. And, to my mind at least, this is a grave problem. Let me explain.

As the first generation to contemplate the fact that humanity may have a severely truncated future, we live at arguably the most pivotal moment in the substantial history of Homo sapiens. Decisions made and actions taken during the next generation will have a disproportionate impact on the future of humans and all other life on Earth. If we blunder onward on our present course — increasing populations, poverty, greenhouse gas emissions, and habitat destruction — we face no less than the collapse of civilization and the decimation of the biosphere. Given the present dire circumstances, any new far-reaching cultural phenomenon must be evaluated in terms of its ability to help or hinder the pressing work to be done; certainly this concern applies to how the Internet influences thinking.

Ecological sustainability, if it is to occur, will include greener technologies and lifestyles. In addition, however, we require a shift in worldview that re-configures our relationship with non-human nature. To give one prominent example of our current dysfunctional perspective, how are we to achieve sustainability as long as we see nature as part of the economy rather than the inverse? Instead of a collection of resources available for our exploitation, nature must become a community of relatives worthy of our respect and a teacher to whom we look for inspiration and insight. In contrast to the present day, sustainable societies will likely be founded on local foods, local materials, and local energy. They will be run by people who have a strong passion for place and a deep understanding of the needs of those places. And I see no way around the fact that this passion and understanding will be grounded in direct, firsthand experiences with those places.

My concern, then, is this: How are we to develop new, more meaningful connections to our native communities if we are staring at computer screens that connect us only to an amorphous worldwide "community"? As is evident to anyone who has stood in a forest or on a seashore, there is a stark difference between a photograph or video and the real thing. Yes, I understand the great potential for the Internet to facilitate fact-finding, information sharing, and even community-building of like-minded people. I am also struck by the radical democratization of information that the Internet may soon embody. But how are we to establish affective bonds locally if our lives are consumed by virtual experiences on global intermedia? What we require is uninterrupted solitude outdoors, sufficient time for the local sights, sounds, scents, tastes, and textures to seep into our consciousness. What we are seeing is children spending less and less time outdoors actually experiencing the real world and more and more time indoors immersed in virtual worlds.

In effect, my argument is that the Internet may influence thinking indirectly through its unrelenting stranglehold on our attention and the resultant death (or at least denudation) of non-virtual experience. If we are to care about larger issues surrounding sustainability, we first must care about our local places, which in turn necessitates direct experiences in those places. As Pyle observes, "what is the extinction of the condor to a child who has never known a wren?"

One thing is certain. We have little time to get our act together. Nature, as they say, bats last. Ultimately, I can envision the Internet as a Net positive or a Net negative force in the critical sustainability effort, but I see no way around the fact that any positive outcome will involve us turning off the screens and spending significant time outside interacting with the real world, in particular the nonhuman world.

Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm


It is entirely possible that the Internet is changing our way of thinking in more ways than I am willing to admit, but there are three clear changes that are palpable:

The first is the increasing brevity of messages.

Between Twittering, chatting and sending abbreviated Blackberry e-mails, the "old" sixty-second sound bite of TV newscasts is now converted into one-liners, attempting to describe ideas, principles, events, complex situations and moral positions.

Even when the message itself is somewhat longer, the fact that we are exposed to more messages than ever before means that the attention "dose" allocated to each item is tiny. The result, for the general public, is a flourishing of extremist views on everything. Not only in politics, where only the ideas of the lunatic far left and the crazy far right can be stated in one sentence, but also in matters of science.

It is easy to state in one sentence nonsense such as "the theory of evolution is wrong", "global warming is a legend", "immunization causes autism" and "God (mine, yours, or hers) has all the answers". It requires long essays to explain and discuss the "ifs" and "buts" of real science and of real life.

I, personally, find that this trend makes me a fanatic anti-extremist. I am boiling mad whenever I see or read such telegraphic (to use an ancient terminology) elaborations of ideas and facts, knowing that they are so wrong and misleading, and, at the same time, that they find their way into so many hearts and minds. Even worse, people who are still interested in a deeper analysis and a balanced view of topics, whether scientific, social, political or other, are considered leftovers from an earlier generation, and are labeled as extremists of the opposite color by the fanatics of one corner or another.

The second change is the diminishing role of factual knowledge, in the thinking process.

The thought pattern of different people, on different subjects, requires varying mixtures of knowing facts, being able to correlate them, creating new ideas, distinguishing between important and secondary matters, knowing when to prefer pure logic and when to let common sense dominate, analyzing processes and numerous other components of a complex mental exercise.

The Internet allows us to know fewer facts, being sure that they are always literally at our fingertips, thus reducing their importance as a component of the thought process. This is similar to, but much more profound than, the reduced role of pure computation and simple arithmetic with the introduction of calculators.

But we should not forget that, often, in the scientific discovery process, the greatest challenge is to ask the right question, rather than answer a well posed question, and to correlate facts that no one thought of connecting. The existence of many available facts, somewhere in the infinite ocean of the Internet, is no help in such an endeavor. I find, personally, that my scientific thinking is changed very little by the availability of all of these facts, but my attitude towards social, economic and political issues is enriched by having many more facts at my disposal.

An important warning is necessary here: A crucial enhanced element of the thought process, demanded by the flood of available facts, must be the ability to evaluate the credibility of "facts" and of "quasi-facts". Both are abundant in the Web and telling them apart is not as easy as it may sound.

The third change is in the entire process of teaching and learning.

Here it is clear that the change must be profound and multifaceted, but it is equally clear that, due to the ultraconservative nature of the educational system, it has not yet happened on a large scale.

The Internet brings to us art treasures, the ability to simulate complex experiments, mechanisms of learning by trial and error, explanations and lessons from the greatest teachers on earth, special aids for children with special needs, less need to memorize facts and numbers, and numerous other incomparable marvels, not available to previous generations. Anyone involved in teaching, from kindergarten to graduate school, must be aware of the endless opportunities, as well as of the lurking dangers. These changes in learning, when they materialize, may create an entirely different pattern of knowledge, understanding and thinking in the student mind.

I am personally amazed by how little has changed in the world of education, but, whether we like it or not, the change must happen and it will happen. It may take another decade or two, but education will never be the same. An interesting follow-up issue, to this last comment, is the question of whether the minds and brains of children growing up in an Internet-inspired educational system will be physically "wired" differently than those of earlier generations. I tend to speculate in the affirmative, but this may only be answered by the Edge question of 2040.

Media Analyst; Documentary Writer; Author, Life, Inc.


How does the Internet change the way I think? It puts me in the present tense. It's as if my cognitive resources are shifted from my hard drive to my RAM. That which is happening right now is valued, and everything in the past or future becomes less relevant.

The Internet pushes us all toward the immediate. The now. Every inquiry is to be answered right away, and every fact or idea is only as fresh as the time it takes to refresh a page.

And as a result, speaking for myself, the Internet makes me mean. Resentful. Short-fused. Reactionary.

I feel it when I'm wading through a stack of emails, keeping up with an endless Twitter feed, accepting Facebook "friends" from a past I prefer not to remember, or making myself available on the Web to readers to whom I should feel grateful — but instead feel obligated. And it's not a matter of what any of these folks might want me to do, but when. They want it now.

This is not a bias of the Internet itself, but of the way it has changed from an opt-in activity to an "always on" condition of my life. The bias of the medium was never towards real-time activity, but towards time shifting. Unix, the operating system of the Net, doesn't work in real time. It sits and waits for human commands. Likewise, early Internet forums and bulletin boards were discussions users returned to at their convenience. I dropped in on a conversation, then came back the next evening or next week to see how it had developed. I took the time to consider what I might say — to contemplate someone else's response. An Internet exchange was only as rich as the amount of time I allowed to pass between posts.

Once the Internet changed from a resource at my desk into an appendage chirping from my pocket and vibrating on my thigh, however, the value of depth was replaced by that of immediacy masquerading as relevancy. This is why Google is changing itself from a search engine to a "live" search engine, why email devolved to SMS and blogs devolved to tweets. It's why schoolchildren can no longer engage in linear arguments, why narrative structure collapsed into reality TV, and why almost no one can engage in meaningful dialogue about long-term global issues. It creates an environment where a few incriminating emails between scientists generate more news than our much slower but more significant climate crisis.

It's as if the relentless demand of networks for me to be everywhere, all the time, denies me access to the moment in which I am really living. And it is this sense of disconnection — more than distraction, multi-tasking, or long-distance engagement — that makes the Internet so aggravating.

In some senses, this was the goal of those who developed the computers and networks on which we depend today. Technology visionaries such as Vannevar Bush and J.C.R. Licklider sought to develop machines that could do our remembering for us. Computers would free us from the tyranny of the past — as well as the horrors of World War II — allowing us to forget everything and devote our minds to solving the problems of today. The information would still be there — it would simply be stored out of body, in a machine.

And that may have worked had technological development leaned towards the option of living life disconnected from those machines whenever access to their memory banks was not required. Instead, I feel encouraged to use networks not just to access information, but to access other people, and to grant them access to me — wherever and whenever I happen to be.

This always-on approach to digital technology surrenders my nervous system rather than expanding it. Likewise, the simultaneity of information streaming towards me prevents parsing or consideration. It becomes a constant flow which must be managed, perpetually.

The now-ness of the Internet engenders impulsive, unthinking responses over considered ones, and a tendency to think of communications as a way to bark orders or fend off those of others. I want to satisfy the devices chirping and vibrating in my pockets, only to make them stop. Instead of looking at each digital conversation as an opportunity for depth, I experience them as involuntary triggers of my nervous system. Like my fellow networked humans, I now suffer the physical and emotional stresses previously associated with careers such as air traffic controllers and 911 operators.

By surrendering my natural rhythms to the immediacy of my networks, I am optimizing myself and my thinking to my technologies — rather than the other way around. I feel as though I am speeding up, when I am actually just becoming less productive, less thoughtful, and less capable of asserting any agency over the world in which I live. The result is something akin to future shock. Only in our era, it's more of a present shock.

I try to look at the positive: Our Internet-enabled emphasis on the present may have liberated us from the 20th century's dangerously compelling ideological narratives. No one — well, hardly anyone — can still be persuaded that brutal means are justified by mythological ends. And people are less likely to believe employers' and corporations' false promises of future rewards for years of loyalty now.

But, for me anyway, it has not actually brought me into greater awareness of what is going on around me. I am not approaching some Zen state of an infinite moment, completely at one with my surroundings, connected to others, and aware of myself on any fundamental level.

Rather, I am increasingly in a distracted present, where forces on the periphery are magnified and those immediately before me are ignored. My ability to create a plan — much less follow through on it — is undermined by my need to be able to improvise my way through any number of external impacts which stand to derail me at any moment. Instead of finding a stable foothold in the here and now, I end up reacting to an ever-present assault of simultaneous impulses and commands.

The Internet tells me I am thinking in real time, when what it really does, increasingly, is take away the real and take away the time.

Computational Neuroscientist, Salk Institute; Coauthor, The Computational Brain


What is the impact of spending hours each day in front of a monitor, surfing the Internet and playing games? Brains are highly adaptable, and experiences have long-term effects on the brain's structure and function. You are aware of some of the changes and call it your memory, but this is just the tip of the iceberg. You are not aware of the more subtle changes, which nonetheless can affect your perception and behavior. These changes occur at all levels of your brain, from the earliest perceptual levels to the highest cognitive levels.

Priming is a dramatic example of unconscious learning, in which a brief exposure to an image or a word can affect how you respond to the same image or word, even in degraded forms, many months later. In one experiment, the outlines of animals and other familiar objects were viewed briefly and 17 years later the subjects could still identify the animals and objects above chance levels from versions in which half the outlines were erased. Some of the subjects did not remember participating in the original experiment. With conceptual priming, an object like a table can prime the response to a chair. Interestingly, priming decreases reaction times and is accompanied by a decrease in brain activity — it becomes faster and more efficient.

Brains, especially youthful ones, have an omnivorous appetite for information, novelty and social interaction, but it is less obvious why we are so good at unconscious learning. One advantage is that it allows the brain to build up an internal representation of the statistical structure of the world, whether it is the frequency of neighboring letters in words or the textures, forms and colors that make up images. Brains are also adept at adapting to sensorimotor interfaces. We first adapted to clunky keyboards, then to virtual pointers to virtual files, and now to texting with fingers and thumbs. As you become an expert at using it, the Internet, as with other tools, becomes an extension of your brain. 

Are the changes occurring in your brain as you interact with the Internet good or bad for you? Adapting to the touch and feel of the Internet makes it easier to extract information, but a better question is whether the changes in your brain will improve your fitness. There was a time, not long ago, when the heads of corporations did not use the Internet because they never learned to type, but they are going extinct and have been replaced with more Internet-savvy managers.

Gaining knowledge and skills should benefit survival, but not if you spend all of your time immersed in the Internet. The intermittent rewards can become addictive, hijacking your dopamine neurons that predict future rewards. The Internet, however, has not been around long enough, and is changing too rapidly, to know what the long-term effects will be on brain function. What is the ultimate price for omniscience?

Cognitive Scientist, UC, Irvine; Author, Visual Intelligence


Human thought has many sculptors, and each wields special tools for distinct effects. Is the Internet in the tool kit? That depends on the sculptor.

Natural selection sculpts human thought across generations and at geologic time scales. Fitness is its tool, and human nature, our shared endowment as members of a species, is among its key effects. Although the thought life of each person is unique, one can discern patterns of thought that transcend racial, cultural and occupational differences; similarly, although the face of each person is unique, one can discern patterns of physiognomy — two eyes above a nose above a mouth — that transcend individual differences.

Is the Internet in the tool kit of natural selection? That is, does the Internet alter our fitness as a species? Does it change how likely we are to survive and reproduce? Debate on this question is in order, but the burden is surely on those who argue no. Our inventions in the past have altered our fitness: arrowheads, agriculture, the control of fire. The Internet has likely done the same.

But has the Internet changed the patterns of thought that transcend individual differences? Not yet. Natural selection acts over generations; the Internet is but one generation old. The Internet is in the tool kit, but has not yet been applied. Over time, as the Internet rewards certain cognitive skills and ignores or discourages others, it could profoundly alter even the basic patterns of thought that we share as a species. The catch, however, is "over time." The Internet will evolve new offspring more quickly than Homo sapiens and they, rather than the Internet, will alter human nature. These offspring will probably no more resemble the Internet than Homo sapiens resembles amoebas.

Learning sculpts human thought across the lifetime of an individual. Experience is its tool, and unique patterns of cognition, emotion and physiology are its key effects. Marcel Just and Timothy Keller found that poor readers in elementary school can dramatically improve their skills with six months of intensive training, and that white matter connections in the left hemispheres of their brains are measurably increased in the process.

There are, of course, endogenous limits to what can be learned, and these limits are largely a consequence of mutation and natural selection. A normal infant exposed to English will learn to speak English, but the same infant exposed to C++ or HTML will learn little.

Is the Internet in the tool kit of learning? No doubt. Within the endogenous limits of learning set by one's genetic inheritance, exposure to the Internet can alter how one thinks no less than can exposure to language, literature or mathematics. But the endogenous limits are critical. Multi-tasking, for instance, might be a useful skill for exploiting in parallel the varied resources of the Internet, but genuine multi-tasking, at present, probably exceeds the limitations of the attentional system of Homo sapiens. Over generations, this limitation might ease. What the Internet cannot accomplish as a tool of learning, it might eventually accomplish as a tool of natural selection.

Epigenetics sculpts human thought within a lifetime and across a few generations. Experience and environment are its guides and shifts in gene expression that trigger shifts in cognition, emotion and physiology are its relevant effects. Oberlander and colleagues found that a mother's anxiety can change the expression of the NR3C1 gene in her child, leading to the child's increased reactivity to stress. Childhood abuse can similarly lead to persistent feelings of anxiety and acute stress in a child, fundamentally altering its thought life.

Is the Internet in the toolkit of epigenetics? Possibly, but no one knows. The field of epigenetics is young, and even the basic mechanisms by which transgenerational epigenetic effects are inherited are not well understood. But the finding that parental behavior can alter gene expression and thought life in a child certainly leaves open the possibility that other behavioral environments, including the Internet, can do the same.

Thus, in sum, the relevance of the Internet to human thought depends on whether one evaluates this relevance phylogenetically, ontogenetically or epigenetically. Debate on this issue can be clarified by specifying the framework of evaluation.

Philosopher; Author, The Ego-Tunnel


I heard a strange, melodic sound from the left and turned away from the Green Woman. As I shifted my gaze towards the empty landscape, I noticed that something wasn't quite right. The new visual scene — the hills and the trees — was as real as could be, but somehow it just hadn't come into sight as it would have in real life, had I turned my head normally. Somehow it wasn't quite real-time. The way the visual scene popped up had a slightly different temporal dynamic, an almost unnoticeable delay — as if I were surfing the Web, clicking my way on to another page. But I certainly wasn't surfing! I had just talked to the Green Woman, and no!, my right index finger wasn't clicking, and my right hand certainly wasn't lying on a mouse pad — it hung down from the side of my body, completely relaxed, as I gazed into the empty landscape of hills and trees. In a flash of excitement and disbelief it dawned on me: I was dreaming!

Lucid dreams are something I have always been interested in, and have written about extensively. For consciousness researchers lucid dreams are interesting because you can go for a walk through the dynamics of your own neural correlate of consciousness, unconstrained by external input, and watch the way it unfolds, from the inside. For philosophers they are certainly interesting too. You can ask dream characters you encounter what they think about notions like "virtual embodiment" and "virtual selfhood" — and whether they actually believe they have a mind of their own. Unfortunately, I have lucid dreams only rarely — once or twice a year. The episode above was the beginning of my last one, and a lot of things dawned on me at once, not just the fact that all of it was actually inside my own head: The Internet is reconfiguring my brain; it changes not only the way in which I think. The influence is much deeper — it already penetrates my dream life. Sure, for academics the Internet is a fantastic resource — almost all of the literature at your fingertips, unbelievably efficient ways of communicating and cooperating with researchers around the world, an endless source of learning and inspiration. It is also something that leads you right into attention deficit disorder. Something that gets you hooked. Something that confronts you with your greed. Something that is already changing us in our deepest core.

This is about much more than cognitive style alone: For those of us intensively working with it, the Internet has already become a part of our self-model. We use it for external memory storage, as a cognitive prosthesis, and for emotional autoregulation. We think with the help of the Internet, and it helps us determine our desires and goals. Affordances infect us, subtly eroding the sense of control. We are learning to multitask, our attention span is becoming shorter, and many of our social relationships are taking on a strangely disembodied character. Some software tells us "You are now friends with Peter Smith!" — when we were just too shy to click the "Ignore" button.

"Online addiction" has long become a technical term in psychiatry. Many young people (including an increasing number of university students) suffer from attention deficits and are no longer able to focus on old-fashioned, serial symbolic information; they suddenly have difficulty reading ordinary books. Everybody has heard about midlife burnout and rising levels of anxiety in large parts of the population. Acceleration is everywhere.

The core of the problem is not cognitive style, but something else: attention management. The ability to attend to our environment, to our own feelings, and to those of others is a naturally evolved feature of the human brain. Attention is a finite commodity, and it is absolutely essential to living a good life. We need attention in order to truly listen to others — and even to ourselves. We need attention to truly enjoy sensory pleasures, as well as for efficient learning. We need it in order to be truly present during sex, or to be in love, or when we are just contemplating nature. Our brains can generate only a limited amount of this precious resource every day. Today, the advertisement and entertainment industries are attacking the very foundations of our capacity for experience, drawing us into the vast and confusing media jungle. They are trying to rob us of as much of our scarce resource as possible, and they are doing so in ever more persistent and intelligent ways. We know all that. But here is something we are just beginning to understand — that the Internet affects our sense of selfhood, and on a deep functional level.

Consciousness is the space of attentional agency: Conscious information is exactly that information in your brain to which you can deliberately direct your attention. As an attentional agent, you can initiate a shift in attention and, as it were, direct your inner flashlight at certain targets: a perceptual object, say, or a specific feeling. In many situations, people lose the property of attentional agency, and consequently their sense of self is weakened. Infants cannot control their visual attention; their gaze seems to wander aimlessly from one object to another, because this part of their Ego is not yet consolidated. Another example of consciousness without attentional control is the non-lucid dream state. In other cases, too, such as severe drunkenness or senile dementia, you may lose the ability to direct your attention — and, correspondingly, feel that your "self" is falling apart.

If it is true that the experience of controlling and sustaining your focus of attention is one of the deeper layers of phenomenal selfhood, then what we are currently witnessing is not only an organized attack on the space of consciousness per se but also a mild form of depersonalization. New media environments may therefore create a new form of waking consciousness that resembles weakly subjective states — a mixture of dreaming, dementia, intoxication, and infantilization. Now we all do this together, every day. I call it Public Dreaming.

Independent Researcher; Author, Dinosaurs of the Air


Since I am among those who have predicted that humans will be uploading their minds into cybermachines in the not-too-distant future, one might assume I'm enthusiastic about the Internet. But my still-primate mind's thinking about the new mode of information exchange is more ambivalent.

No doubt the Internet is changing the way I operate and influence the world around me. Type "gregory paul religion and society" into Google and nearly four million hits come up. I'm not entirely sure what that means, but it looks impressive. An article in a British newspaper on my sociological research garnered over 700 comments. Back in the 20th century I could not imagine my technical research making such an impression on the global sociopolitical scene because the responsible mechanism – publishing in open-access online academic journals – was not available. The new communication environment is undoubtedly altering my research and publicity strategy relative to what it would be in a less digital world. Even so, I am not entirely sure how my actions are being modified. The only way to find out would be to run a parallel-universe experiment in which everything is the same except for the existence of Internet-style communications, and see what I do in the alternative situation.

What is disturbing to this human raised on hard copy information transmission is how fast the Internet is destroying a large portion of that medium. My city no longer has a truly major newspaper, and the edgy, free City Paper is a pale shadow of its former self, in danger of extinction. I have enjoyed living a few blocks from a major university library because I could casually browse through the extensive journal stacks, leafing through assorted periodicals to see what was up in the latest issues. Because the search was semi-random it was often pleasantly and usefully serendipitous. Now that the Hopkins library has severely cut back on paper journals as the switch to online continues, it is less fun. It's good to save trees, and looking up a particular article is often easier online, but checking the contents of the latest issue of Geology on the library computer is neither as pleasant nor as convenient. I suspect that the range of my information intake has narrowed, and that can't be good.

On the positive side, it could be amazingly hard to get basic information before the Web showed up. In my teens I was intrigued by the notorious destruction of the HMS Hood in 1941, but was not able to get a clear impression of the famed vessel's appearance for a couple of years, until I saw a friend's model, and I did not see a clear image until well after that. Such extreme data deprivation is thankfully over, thanks to Wikipedia and the like. But even the Internet cannot fill all information gaps. It often remains difficult to search out obscure details of the sort found only in books that can treat subjects in depth. Websites often reference books, but if the Internet limits the production of manuscript-length works, then the quality of information is going to suffer.

As for the specific question of how the Internet is changing my thinking, online apps facilitate the statistical analyses that are expanding my sociological interests and conclusions further than I ever thought they would go, leading to unanticipated answers to some fundamental questions about popular religion that I am delighted to uncover. Beyond that there are more subtle effects, but exactly what they are I am not sure sans the parallel-world experiment. I also fear that the brevity favored by on-screen versus page-turning reading is shortening my attention span. It is as if one of Dawkins's memes is altering my unwilling mind like a bad science fiction story. But that's a non-quantitative, anecdotal impression; perhaps I just think my thinking has changed. It is possible the new arrangement is not altering my mental exertions further than it is because the old-fashioned mind generated by my brain remains geared to the former system.

The new generation growing up immersed in the digital complex may be developing thinking processes more suited to the new paradigm, for better or for worse. But as far as I know that's a hypothesis rather than a documented fact. Perhaps human thinking is not as amenable to being modified by external factors as one might expect. And the Internet may be more retro than it first seems. The mass media of the 20th century were truly novel because the analog-based technology turned folks from home entertainers and creators (gathering around the piano, singing and inventing songs and the like) into passive consumers of a few major outlets (sitting around the telly and fighting over the remote). People are using hyperfast digital technology to return to self-creativity and entertainment. How all this is affecting young psyches is a matter for sociobehavioral and neuropsychological research to sort out.

But how humans old and young are affected may not matter all that much. In the immediacy of this early 21st-century moment the Internet revolution may look more radical than it actually is; it may merely be introducing the real revolution. The human domination of digital communications will prove a historically transitory event if and when high-level thinking cyberminds start utilizing the system. The ability of superintelligences to share and mull over information will dwarf what mere humans can manage. Exactly how will the interconnected uberminds think?

Hell if I know.
