
Publisher of Skeptic magazine, monthly columnist for Scientific American; Author, The Mind of the Market


In the 1980s I was a competitive bicycle racer, competing five times in the 3,000-mile nonstop transcontinental Race Across America, an event that Outside magazine called "the world's toughest sporting event." I felt that the playing field was level because in a pure sport such as cycling (this was before the days of sophisticated doping programs) it doesn't matter what your last name is, what schools you attended, how much money your parents have, which country clubs you belong to, your politics, religion, or socio-economic status, or any other social conventions. It only matters how fast you can pedal your bike. Full stop. Cycling is as close to a pure meritocracy as there is.

In my intellectual pursuits, however, I never felt that the playing field was level. In academia especially, but in other careers as well (most notably politics and corporate business), your name, money, connections, social standing, religion, and especially which institutions you are affiliated with do seem to matter…a lot. Pure skill and talent, while important, often seem to play second fiddle in the orchestral arrangement of society. The Internet is changing this.

Thanks to the Internet, for the first time in my life I feel that I have a chance to compete on a level playing field. My academic background is embarrassing compared to that of most successful intellectuals. My public high school education was so abysmal that I had to attend a community college in California for two years before matriculating at the (then) reputationless Pepperdine University. I scraped together a master's degree through the second-tier California State University system, and finally gave up hope for an intellectual life and raced bikes for a decade. By the time I earned a Ph.D. from the distinctly non-elitist Claremont Graduate University, I discovered there were next to no jobs, especially for someone with an intellectual pedigree such as mine. Since teaching as an adjunct professor is no way to make a living (literally), I founded the Skeptics Society and Skeptic magazine just as the Internet was getting legs in the early 1990s.

We started with no money, no backers, and no affiliation with elite institutions, but the Internet made it possible for us to succeed by making knowledge accessible and searchable to me and my editors and writers on a scale never previously available. The intellectual playing field was being leveled, and the Internet changed the way I think about the very real possibility of fairness and opportunity in a world that has for too long been rigged to favor the elite.

Who needs brick-and-mortar libraries when knowledge is available at our fingertips? Who needs acceptance into elite universities when the same knowledge is searchable by anyone from anywhere? Who needs access to exclusive clubs when knowledge is no longer the province of just the privileged? We're not all the way there yet, but the Internet is leveling the knowledge playing field by democratizing access to information.

This is real power, and I feel that power as never before.

Biologist, Distinguished University Professor, UMass, Amherst; Coauthor (with Dorion Sagan), Acquiring Genomes: A Theory of the Origins of Species


By using the Internet I have renewed or begun new epistolary interactions on a global basis with superb, knowledgeable scientists and historians. The Internet has quickly made available much obscure scientific literature relevant and invaluable to me. It has generated new colleagues. The luxury (far beyond the usual "he says, she says, they-say" gossip) of the Internet leads us (both nearby and geographically distant associates: graduate students, family members, et al.) toward the answer to a key question about the grand sweep of the history of life in its biospheric environment on Planet Earth. (Note: of course our planet is mostly not earth; it ought to be renamed Planet Water or Planet Hard Rock.)

The Internet makes a difference as we zero in toward the final detailed solution of our scientific problem: "How did the ancestral nucleated cell evolve some 1000 million years ago?" (These are the cells of which all animals, plants, mushrooms, algae, etc. are composed.) Everyone agrees that this evolutionary turning point, the appearance of animal-type cells in the fossil record, happened in the time period geologists call the Proterozoic Eon. But how?

The short answer is that nucleated cells evolved "by promiscuous forbidden sexual fusion among wildly different kinds of bacteria." Alas, our motley collection of fused bacterial ancestors never escaped from their "marriage contract." They survived, and still live together, with the ups and downs of permanent merger.

Probably some bacterial ancestors look back at the period 1000-600 million years ago, when both water and air were full of hydrogen sulfide (poisonous to people), as "The Age of Bacteria": a calmer, quieter time before oxygen bubbled up and its combustion fueled the frenetic rate of environmental degradation that began in the Proterozoic Eon and continues today, aided and abetted by our very recent (Holocene) loud, careless, ignorant, frantic, clever but unwise, ephemeral human species. The rest of our planetmates were there before us and will be there when we're gone.

The Internet pushes this notion farther, louder and of course with the velocity of light.



The dimensionality of the Internet has yet to be defined, and the principles outlining its space are constantly negotiated through our use of it. With its unique time/space situation – the fact that it is possible to physically be in one place and, simultaneously, have access to the entire world – the Internet can potentially have a huge impact on our understanding of our surroundings.

Ideally, the relation between user and network should be one of mutual exchange: I co-produce the network through my involvement in it, and it co-produces me through the information I get from it. But for this to happen, we have to make better use of the potentials of the Internet, and the Internet has to have an interest in this mutual exchange – it has to invest itself in its users, so to speak. In its current form, the Internet, the way I see it, has signed a contract with a Modernist, two-dimensional conception of space. The relation between it and its users is one of subject and object: I can see it as if it were an image, but I cannot feel it, I am not present in it; the interaction between the medium and me is too weak.

Being a profoundly democratic medium, opening up unprecedented possibilities of self-expression, freedom of the press and access to information, the Internet is not only the source of unlimited access to knowledge but, paradoxically enough, also the breeding ground of a general acceptance of a lack of competence. Large social communities such as Facebook, which do not produce or exchange any kind of knowledge, seem to flourish, and because search engines are based on trivial algorithmic principles of recognition, it can be hard to find the qualified, critical voices in the bulk of information.

If the Internet is to help us become more consciously involved with the world, it is not enough just to channel huge amounts of information into society. Search engines should be competence-focused, social networks should relate to competent search engines, and video and search functions should be better integrated. This requires that Google, Yahoo, AOL and the other large companies defining the future of the Internet provide the medium with enough confidence to operate with self-criticism. The only self-criticism the Internet operates with at the moment seems to be that of the market economy – the most efficient, frequently updated and trimmed sites being the ones where money is changing hands. This is not enough. We have to base our use of the Internet on both trust and scepticism.

In this way, the Internet would not stand outside reality and send information in; rather, it would be conceived of as a part of reality, and thus the distinction between subject and object would dissolve, and we would experience the Internet as if it were a three-dimensional space. The Internet would become a reality-producing machine.


Research Associate & Lecturer, Harvard; Adjunct Associate Professor, Brandeis; Author, Alex & Me


The Internet hasn't changed the way I think; it hasn't altered one whit the way in which I — that is, my brain—processes information…other than maybe by forcing me to figure out how to process a lot more of it. Consciously, I still use the same scientific training that was drummed into me as an undergraduate and graduate student in theoretical chemistry, even when it comes to evaluating aspects of my daily life: Based on a certain preliminary amount of information, I develop a hypothesis and try to refine it so that it differs from any competing equally plausible hypotheses; I test the hypothesis; if it is proven true, I rest my case within the limits of that hypothesis, accepting that I may have solved only one piece of a puzzle; if it is proven false, I revise and repeat the procedure.

Maybe the Internet has given me more things to think about, but that doesn't fundamentally change the way I think. Rather, what has changed, and is still changing, is my relationship with the Internet — from unabashed infatuation to disillusionment to a kind of armed truce. And, no, I'm not sidestepping the question, because until the Internet actually rewires my brain, it won't change my processing abilities. Of course, such rewiring may be in the offing, and quite possibly sooner than we expect, but that's not yet the case.

So, my changing love-hate relationship with the Internet.

First came the honeymoon phase — believing that nothing in the world could ever be as wondrous — an appreciation for all the incredible richness and simplicity that the Internet brought into my life. No longer did I have to trudge through winter's snow or summer's heat to a library at the other end of campus — or even come to campus — to acquire information, or to make connections to friends and colleagues all over the world.

Did I need to set up a symposium for an international congress? Just a few emails and all was complete. Did I need an obscure reference or that last bit of data for the next day's PowerPoint presentation while in an airport lounge, whether in Berlin or Beijing, Sydney or Salzburg? Ditto. Did I need a colleague's input on a tricky problem or to provide the same service myself? Ditto. Even when it came to forgetting a birthday or anniversary and needing to research and send a gift somewhere in the world? Ditto. A close friend and colleague moves to Australia? No problem staying in touch anymore. But did all this change the way I think? No. It may have changed the way I work, because what changed were various limitations on the types of information that were accessible within certain logistical boundaries, but my actual thought processes didn't alter.

Next came the disenchantment phase…the realization that more and faster were not always better. My relationship with the Internet began to feel oppressive, overly demanding of my time and energy. Just because I can be available and can work 24/7, 365 days a year — must I? The time saved and the efficiencies achieved began to backfire. I no longer had the luxury of recharging my brain by observing nature during that walk to the library, or by reading a novel while at that airport lounge.

Emails that supplanted telephone calls were sometimes misunderstood, because vocal modulations were missing. The number of requests to do X, Y, or Z began to increase exponentially, because, for example, it was far easier to shoot me a question than to spend the time digging up the answers — even on the Internet. The lit search I performed on the supposedly infinitely large database failed to bring up that reference I needed and knew existed, because I had read it a decade ago but hadn't saved it in my files, figuring I could always bring it up again.

This Internet relationship was supposed to enable all of my needs to be met; how did it instead become the source of endless demands? How did it end up draining away so much time and energy? The Internet seemed to have given me a case of Attention Deficit Disorder, but did it really change the way I think, or just make it more difficult to find the time to think? Most likely the latter, because judicious use of the "off" button allowed a return to normalcy.

Which brings me to that armed truce — an attempt to appreciate the positives and accept the negatives, to set personal boundaries and to refuse to let them be breached. Of course, maybe it is just this dogmatic approach that prevents the Internet from changing the way that I think.


An engineer, a physicist and a computer scientist go for a drive. Near the crest of a hill, the engine sputters and stops running.

"It must be the carburetor," says the engineer, opening his toolbox. "Let me see if I can find the problem."

"If we can just push it to the top of the hill, we'll be able to coast down by gravity and get to a garage," says the physicist.

"Wait a second," says the computer scientist. "Let's all get out of the car, shut the doors, open them again, get in, turn the ignition and see what happens."

I like programming, and when I do, I am often unable to stop because there is always one more easy thing you can try before you get up and stop, one more bug you can try to fix, one more attempt you can make to find the cause of a problem, one more shot at incrementally improving something. Because of the interactivity of programming – edit, compile, run, examine, repeat – you can always take a quick preliminary whack at something and see if it works. You can try a solution without understanding the problem completely.

If, as I do, you spend most of your day in front of a computer, then the Internet brings this endless micro-interactivity into your entire life by providing you with a willing co-respondent. It abhors a vacuum. It can fill up all your available time by breaking it up into smaller and smaller chunks. If you have even a split moment, you can reply to an email, check Wikipedia, look at the weather, scan your horoscope, read a movie review, watch a video, suffer through an ad. All hurriedly.

One unmitigatedly good thing is the associative memory this facilitates. If you can't remember the name of the abstract expressionist you read about in an article fifteen years ago in the Times, an artist who used to live on Old Slip in New York in the Nineteen Fifties with his French actress then-wife who, you recall, was in Last Year at Marienbad, you can go to IMDb, look up the movie, find her name, look her up on Wikipedia and discover that her husband was Jack Youngerman. When I do this a second time now for verification, I go off on a tangent and discover that she acted with Allen Ginsberg in Pull My Daisy. And that she is buried in Cimetière du Montparnasse, one of the more restful places to be buried, not far from where Hemingway used to drink and write at the …

But I digress.

Some people say the Internet has made us more efficient.

I waste many hours each day being efficient.

Efficiency should be a means, not an end.

The big question, as always, is: How shall I live?

The Internet hasn't changed the way I think about that.

What's changed the way I think about big things, as always, are the people I talk to and the books I read.

Johnstone Family Professor, Department of Psychology, Harvard University; Author, The Stuff of Thought


As someone who believes both in human nature and in timeless standards of logic and evidence, I'm skeptical of the common claim that the Internet is changing the way we think. Electronic media aren't going to revamp the brain's mechanisms of information processing, nor will they supersede modus ponens or Bayes' theorem. Claims that the Internet is changing human thought are propelled by a number of forces: the pressure on pundits to announce that this or that "changes everything"; a superficial conception of what "thinking" is that conflates content with process; the neophobic mindset that "if young people do something that I don't do, the culture is declining." But I don't think the claims stand up to scrutiny.

Has a generation of texters, surfers, and twitterers evolved the enviable ability to process multiple streams of novel information in parallel? Most cognitive psychologists doubt it, and recent studies by Clifford Nass confirm their skepticism. So-called multitaskers are like Woody Allen after he took a speed-reading course and devoured War and Peace in an evening. His summary: "It was about some Russians."

Also widely rumored are the students who cannot write a paper without instant-message abbreviations, emoticons, and dubious Web citations. But students indulge in such laziness to the extent that their teachers let them get away with it. I have never seen a paper of this kind, and a survey of university student papers by Andrea Lunsford shows they are mostly figments of the pundits' imaginations.

The way that intellectual standards constrain intellectual products is nowhere more evident than in science. Scientists are voracious users of the Internet, and of other computer-based technologies that are supposedly making us stupid, like PowerPoint, electronic publishing, and email. Yet it would be ludicrous to suggest that scientists think differently than they did a decade ago, or that the progress of science has slowed.

The most interesting trend in the development of the Internet is not how it is changing people's ways of thinking but how it is adapting to the way that people think. The leap in Internet usage that accompanied the appearance of the World Wide Web more than a decade ago came from its user interface, the graphical browser, which worked around the serial, line-based processing of the actual computer hardware to simulate a familiar visual world of windows, icons, and buttons. The changes we are seeing more recently include even more natural interfaces (speech, language, manual manipulation), better emulation of human expertise (as in movie, book, or music recommendations, and more intelligent search), and the application of Web technologies to social and emotional purposes (such as social networking, sharing of pictures, music, and video) rather than just the traditional nerdy ones.

To be sure, many aspects of the life of the mind have been affected by the Internet. Our physical folders, mailboxes, bookshelves, spreadsheets, documents, media players, and so on have been replaced by software equivalents, which has altered our time budgets in countless ways. But to call it an alteration of "how we think" is, I think, an exaggeration.


Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Sun in a Bottle: The Strange History of Fusion and the Science of Wishful Thinking


The process was so gradual, so natural, that I didn't notice it at first. In retrospect, it was happening to me long before the advent of the Internet. The earliest symptoms still mar the books in my library. Every dog-eared page represents a hole in my memory. Instead of trying to memorize a passage in the book or remember an important statistic, I took an easier path, storing the location of the desirable memory instead of the memory itself. Every dog-ear is a meta-memory, a pointer to an idea that I wanted to retain but was too lazy to memorize.

The Internet turned an occasional habit into my primary way of storing knowledge. As the Web grew, my browsers began to bloat with bookmarked Websites, with sites that stored information that I deemed important but didn't feel obliged to commit to memory. And as search engines matured, I stopped bothering even with bookmarks; I soon relied upon AltaVista, HotBot, and then Google to help me find — and recall — ideas. My meta-memories, my pointers to ideas, started being replaced by meta-meta-memories, by pointers to pointers to data. Each day, my brain fills with these quasi-memories, with pointers and with pointers to pointers, each one a dusty IOU sitting where a fact or idea should reside.
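The chain of pointers described here can be made literal in a few lines of code. This is a deliberately toy sketch; the stored fact, the bookmark names, and the search query are all hypothetical illustrations, not anything from the author's files:

```python
# A fact, a meta-memory (a pointer to the fact), and a meta-meta-memory
# (a pointer to the pointer). All names and facts here are invented.
facts = {"water_boiling": "Water boils at 100 °C at sea level."}

# Meta-memory: I remember where the fact is kept, not the fact itself.
bookmarks = {"physics_notes": "water_boiling"}

# Meta-meta-memory: all I retain is which search leads to the bookmark.
searches = {"boiling point": "physics_notes"}

def recall(query):
    """Dereference the chain: query -> bookmark -> stored fact."""
    bookmark_key = searches[query]      # remember where I can find it...
    fact_key = bookmarks[bookmark_key]  # ...which says where it lives...
    return facts[fact_key]              # ...and only then fetch the fact

print(recall("boiling point"))  # Water boils at 100 °C at sea level.
```

If any link in the chain is lost (a dead bookmark, a forgotten query), the fact itself becomes unreachable even though it was never deleted, which is the essay's "dusty IOU" in miniature.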

Now, when I expend the effort to squirrel memories away, I store them in the clutter of my hard drive as much as I do in the labyrinth of my brain. As a result, I spend as much time organizing them, making sure I can retrieve them on demand, as I do collecting them. My memories are filed in folders within folders within folders, easily accessible — and searchable, in case my meta-memory of their location fails. And when a file becomes corrupt, all I am left with is a pointer, a void where an idea should be, a ghost of a departed thought.


Neuroscientist, New York University; Author, Synaptic Self


A woman witnesses a crime and recounts it to a policeman. Months later she appears in court to testify. As her story unfolds, it begins to differ from the notes taken by the policeman. A journalist covering the case notices that her testimony includes things she could not have known at the time but that were later discovered and that appeared in his newspaper. Though intensely grilled by the DA, she sticks by her story.

Why did her memory change? Why didn't she know the difference between what she experienced and what she read in the paper? The short answer is that remembering is a dangerous affair in the life of a memory. A slightly longer answer requires that we delve into the mechanisms that store memories.

Memory formation occurs in stages. Initially, a temporary or short-term memory is formed. This memory is fragile and will dissipate unless it is converted into a long-term memory through protein synthesis inside the neurons that processed the experience. The new proteins stabilize the synaptic connections that constitute memory at the cellular level. If protein synthesis is disrupted in the hours following the experience, a long-term memory does not result. The conversion of short-term into long-term memory via protein synthesis is called consolidation.

It has also been found that disruption of protein synthesis after the remembrance of a fully consolidated long-term memory produces a loss of the memory. This is taken to mean that when memories are retrieved they have to be reconsolidated via protein synthesis in order to persist.

Reconsolidation is essentially an updating process. After consolidation, a memory remains unchanged until it is retrieved. At that point, the brain has the opportunity to incorporate new information into the memory, things that have been learned since the memory was stored initially. I haven't thought about the Edge Annual Question since last year, but now that I have been forced to remember it, my memory of it includes the new question.

So far so good. But considerable research now suggests that reconsolidation can overwrite previous memories. That is, the old memory is eliminated and the new one involves a collage of old and new information. This integration process determines what we will remember the next time. When our witness read the newspaper account, the old memory was retrieved and new information was integrated with the old information. She was unable to tell the difference between what she experienced and what she later learned because it was now one memory. Laboratory studies in fact show that people are not very good at remembering what they actually experienced, and often make mistakes that involve the insertion of new information into a memory.

The bottom line of reconsolidation research is that your memory of some experience is only as good as your last recollection of the experience. Each use of a memory changes the memory. Obviously, the changes are not always so dramatic as what I have described. But the fact is that memory can, at least to some extent, be changed by experience, and sometimes the changes can be striking.
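The updating logic of reconsolidation can be caricatured in a short sketch. This models only the bookkeeping, not the biology: real reconsolidation works through protein synthesis at synapses, and the blending factor below is an invented parameter for illustration.

```python
def retrieve_and_reconsolidate(trace, context, blend=0.3):
    """Recall a memory, then re-store it blended with the current context.

    trace:   dict of detail -> strength (the stored memory)
    context: dict of detail -> strength (information present at recall)
    blend:   invented parameter for how strongly recall-time
             information is written back into the trace
    """
    updated = dict(trace)
    for detail, strength in context.items():
        old = updated.get(detail, 0.0)
        updated[detail] = (1 - blend) * old + blend * strength
    return updated

# The witness's original experience...
memory = {"saw a man run": 1.0, "heard a shout": 1.0}

# ...is retrieved while reading a newspaper account with a new detail.
newspaper = {"saw a man run": 1.0, "he wore a red cap": 1.0}
memory = retrieve_and_reconsolidate(memory, newspaper)

# The newspaper detail is now part of the single, merged memory; the
# witness can no longer separate what she saw from what she read.
assert memory["he wore a red cap"] > 0.0
```

Each call to `retrieve_and_reconsolidate` changes the stored trace, which is the point of the research summarized above: the memory you end up with is the product of your last recollection, not of the original event.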

There are a number of practical implications of this research. One is that it might be possible to relieve emotional stress by having people remember their stressful experiences and then interfering with reconsolidation. This is pretty much what happened to Jim Carrey's character in Eternal Sunshine of the Spotless Mind. But there is also evidence that it works in real-life situations with trauma victims. Studies in rats also suggest that this same approach can be used to reduce the ability of drug-related cues to produce relapse.

Memory works pretty well most of the time. But we should be careful as a society when we make significant decisions on the basis of one person's memory. The only way a memory remains "pure" and resistant to change is by never being used. The most accurate memories are indeed the ones never remembered. Be careful about what you remember.

Neuroscientist; Collège de France, Paris; Author, Reading in the Brain


Like the Gutenberg press in its time, the Internet is revolutionizing our access to knowledge and the world we live in. Few people, however, pay attention to a fundamental aspect of this change: the shift in our notion of time. Human life used to be organized in inflexible day-and-night cycles — a quiet routine that has become radically disrupted, for better or for worse.

Some years ago, I was working out of Paris with colleagues at Harvard on the mathematical mind of Amazon Indians. The project was so exciting, and we were so motivated by the paper we were writing, that we worked on it every day, if not day and night (we had families and friends…).

At the end of each day, I would send my colleagues a new draft of our article, full of detailed questions and issues that needed to be addressed. In a world without Internet, I would have had to wait several weeks for a reply. Geographically dispersed and collective work used to be slower than individual thought. Not so in today's world. Every morning, after a good night's sleep, I woke up to find that most of my questions had been answered during the night, as if by magic. The experience reminded me of the mysterious instances of non-conscious problem solving during sleep, as famously reported by Kekulé, Poincaré, Hadamard and other mathematicians and scientists. The difference, of course, was that my problems were solved thanks to conscious effort and the pooling together of several minds around the planet.

For my Harvard colleagues too, the experience felt somewhat miraculous. They too had many questions, and I dutifully computed the statistics they requested, drew the new data plots they asked for and wrote the paragraphs they needed — all this while Harvard was still plunged into the night. Thanks to this collective effort, our work was completed much faster than any one of us could have managed alone. We had almost doubled the speed of our mental clocks!

The idea is now commonplace. A great many companies outsource translation or maintenance to Indian, Australian or Taiwanese employees on the other side of the world, so that the work can be completed overnight. However, the entire scope of this phenomenon does not yet appear to have fully dawned on us.

For the sake of example, imagine an international corporation, say a movie studio like Pixar, intentionally placing three of its computing centers at the vertices of a giant equilateral triangle spanning the earth, so that the employees at a given location can work on a project for 8 daylight hours and then pass it on to another team in a different time zone.
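The hand-off arithmetic behind such a scheme is simple enough to sketch. The site names, UTC offsets, and working hours below are assumptions for illustration, not anything Pixar actually does:

```python
# Three hypothetical sites spaced eight time zones apart.
SITES = {            # site -> UTC offset of its local clock, in hours
    "Americas": -8,
    "Europe/Africa": 0,
    "Asia/Pacific": +8,
}

def site_on_duty(utc_hour, workday_start=9, workday_hours=8):
    """Return the site whose local time falls inside its 8-hour workday."""
    for site, offset in SITES.items():
        local = (utc_hour + offset) % 24
        if workday_start <= local < workday_start + workday_hours:
            return site
    return None

# With three sites eight hours apart, some site is always within its
# working day, so the project is handed on and never sleeps.
schedule = [site_on_duty(h) for h in range(24)]
assert None not in schedule
```

Because 3 × 8 hours tiles the 24-hour day exactly, each site works its own daylight shift and passes the project on at quitting time, which is the equilateral-triangle picture in miniature.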

For a more grandiose picture, one that could have arisen from Jorge Luis Borges' mind, imagine a complex Problem that moves around the planet via the Internet, at a fixed speed precisely countering the earth's rotation, in such a way that the Problem itself constantly faces the sun. As dawn rises for a fraction of humanity, the Problem is already present on their computer screens — but some of it has been chipped away by armies of fellow workers who, by this time, are sound asleep. Day and night, without interruption, the earth's rotation cranks away at the Problem until it is solved.

But such giant Utopian or Borgesian projects do, in fact, already exist — they are called Wikipedia, Linux, SourceForge or OLPC (One Laptop per Child). They are beyond the scope, or even the imagination, of any single human being. Nowadays, open source development literally moves around in the infosphere and is being improved constantly on whatever side of the planet happens to be in sunshine (and often on the other side as well).

There is grandeur in this new way of computer life, where the normal sleep-wake cycle is replaced by the constant churning of silicon and mind. But there is much inherent danger in it as well. Take a look at Amazon's aptly named "Mechanical Turk", and you'll find an alternative Web site where largely profitable enterprises, in developed countries, offer short-term, badly paid computer jobs to the Third World's poor. For a few pennies, they propose a number of thankless assignments ironically called "human intelligence tasks" that require completing forms, categorizing images or typing handwritten notes — anything that computers still cannot do. They provide no benefits, no contract, no guarantees, and ask no questions: the dark side of the intellectual globalization now made possible by the Internet.

As our mental clocks keep on accelerating, and we become increasingly impatient about our unfinished work, the Internet provides our society with a choice that deserves reflection: do we aim for ever faster intellectual collaboration? Or for ever faster exploitation that will allow us to get a good night's sleep while others do the dirty work? With the Internet, a new sense of time is dawning, but our basic political options remain essentially unchanged.


Associate Professor of Physics, University of California, Santa Cruz


Recently, I wanted to learn about 12th century China — no deep or scholarly understanding needed, just enough to add a bit of not-wrong color to something I was writing. Wikipedia was perfect! More regularly, my astrophysics and cosmology endeavors bring me to databases such as the arXiv, ADS, and SPIRES, which give instant and organized access to all of the articles and information I might need to research and write.

Between such uses, and an appreciable fraction of my time spent processing emails, I, like most of my colleagues, spend a lot of time connected to the Internet. It is a central tool in my research life. Yet when I think of what I do that is most valuable — to me at least — it is the occasional generation of genuine creative insights into the world. And looking at some of those insights, I realized that essentially none of them have happened in connection with the Internet.

Given the quantity of information and understanding I imbibe online, this seems strange, and because the Internet is so omnipresent, also worrisome. Insight is surely like happiness and money: you'll get a certain amount through a combination of hope, luck, and effort. But really maximizing it takes a more deliberate approach of paying careful attention to the things that increase or decrease it, and making judicious decisions on that basis.

In this spirit I undertook a short exercise. Looking back, I identified ten ideas or insights that were important to me, and for which I could remember the context in which they arose. By my tally, two were during conversation; one while listening to a talk; one while walking; two while sitting at a desk researching and thinking; and four while writing. Again, zero occurred while browsing the Web, reading online articles, emailing, etc. This raises two obvious questions: why does the Web seem to be the enemy of insight, and what, if anything, should I do about it?

After examining my list, several possibilities come to mind in answer to the first question. One is that the speed of information input from the Internet is simply too fast, leaving little mental space/time to process that information, fit it into existing schema, and think through the implications. This is not a fault of the Internet per se. But the Internet, by dint of its sheer volume of information, generally short treatments of individual topics, and powerful search capabilities, strongly encourages overly quick information inhalation. Most talks or lectures, in contrast, have the dubious virtues of being wildly inefficient at information transmission, and of containing chunks either boring or unintelligible enough to give one's mind some space to think.

A second possible problem is that in general, communication with the Web is just about as one-way as reading a book. My insight 'tally' clearly favors active, laborious construction of a train of thought or argument. While this is almost self-evident, it is too easy to pretend that finding just the right thing to read will yield a fabulous, and essentially effortless, new understanding. It would seem not.

A third possibility relates to the type of thinking that the Internet encourages. The ability to instantly access information is wonderful for spinning a Web of interconnections between ideas and pieces of data. Yet for deep understanding, in particular the type that arises from the careful following of one particular thread of thought, the Internet is not very helpful: I often find the Web's role is more to tempt me off the path into some side vista (or thicket) than to aid the journey.

Finally, but perhaps most crucially, my experience is that real, creative insights or breakthroughs require prolonged and concentrated time in the 'wilderness.' There are lots of things I don't know, but personally I start to get excited when I uncover something that I don't know because it really is mysterious. I've come to think that it is important to cultivate a 'don't know' mind: one that perceives a real and interesting enigma, and is willing to dwell in that perplexity and confusion. A sense of playful delight in that confusion, and a willingness to make mistakes — many mistakes — while floundering about, is a key part of what makes insight possible for me. And the Internet? The Internet does not like this sort of mind. The Internet wants us to know, and it wants us to know RIGHT NOW: its essential structure is to produce knowing on demand. I don't just worry that the Internet goads us to trade understanding for information (it surely does), but that it makes us too accustomed to instant informational gratification. Its bright light deprives us of spending any time in the fertile mystery of the dark.

Others might, of course, have quite different experiences of the causes and conditions of insight, and also of the Internet. But I'd bet that my experiences with both are not uncommon. So what should be done? A first reaction — to largely banish the Internet from my intellectual life — feels both difficult (like most I am at least a low-level addict) and counterproductive: information is, after all, crucially important, and the Internet is an unsurpassable tool for discovering and assembling it.

But the exercise suggests to me that this tool should be used in its own rightful place and time, and with a bit more of a separation from the creative acts of thinking, deeply conversing, working through ideas, or writing. That is, it may be better to think of the Internet not as an extra bit of our brain, but as a library: somewhere we occasionally go to gather raw materials that we can take away, somewhere else, where we have time and space to be bored, to be forced into non-distraction, and to be bewildered, so that we can create an opportunity for something really interesting to happen.

Playwright & Director; Founder, The Ontological-Hysteric Theater


How is the Internet changing the way I think? But what is it — this doing "thinking" that I assume I do along with everybody else? Probably there is no agreement about what this "thinking" consists of. But I certainly do not believe "gathering information" is thinking — and that has obviously been an activity that has expanded and sped up as a result of the Internet. But for me — to "think" is to withdraw from gathered information into a blankness within which something arises — pops out — is born.

Of course it will be maintained that what "pops" out may have its roots, may be conditioned, by many factors in my experiential past. But nevertheless — while the Internet swamps us in "connectedness" and "fact" — it is only in the withdrawal from those I claim a space for thinking.

So in one sense, the Internet expands the arena within which thinking may resonate, and so perhaps the thinking is thereby "attuned" somewhat differently. But I must admit to being one of those who believes that while it is clearly "life-changing" — it is in no way, if you will, "soul-changing." Accessing the ever expanding, ever faster Internet means a life that is changing as it becomes the life of a surfer (just as life might change if one moved to a California beach community) — one becomes more and more agile balancing on top of the flow, leaping from hyper-link to hyper-link — giving one's mental "environment" a certain shape based on those chosen jumps.

But the Internet sweeps you away from where and "WHAT" you were — so instead of filling you with the fire to dig deeper into the magic bottomless source that is the self — it lets you drift into the dazed state of having everything at your finger-tips — which are used to caress the world of course, but only the world as it assumes the shape of the now-manifest rather than the world of the still un-imaginable.

So even though I myself do spend LOTS of time on the Internet — (fallen, "Pancake Person" that I am) I can't help being reminded of the Greek philosopher who attributed his long life to avoiding dinner parties. (If only I could avoid the equally distracting Internet which, in its promise of connectedness and expanded knowledge, is really a substitute social phenomenon).

The "entire world" that the Internet seems to offer harmonizes strangely with the apple offered to Eve from the Tree of Knowledge — ah, we don't believe in those old myths? (I guess one company guru did).

Well, the only hope I see hovering in the never-never land (now real) where the Internet does its work of feeding smart people amphetamines and "dumb" people tranquilizers — the only hope is that the expanding puddle of boiling, bubbling hot milk will eventually COAGULATE and a new unforeseen pattern will emerge out of all that activity that thought it was aiming at a certain goal, but (as is usual with life) was really headed someplace else nobody knew about.

That makes it sound like the new mysticism for a new Dark Ages. Well, we've already bitten the Apple. Good luck to those much younger than me who may be around to see either the new Heaven or the new Hell.
