Edge 250
July 15, 2008
(9,670 words)


THE THIRD CULTURE


ENGINEERS' DREAMS

By George Dyson

THE NEXT RENAISSANCE
A Talk by Douglas Rushkoff

THE REALITY CLUB

ON "IS GOOGLE MAKING US STUPID"
By Nicholas Carr
W. Daniel Hillis, Kevin Kelly, Larry Sanger, George Dyson, Jaron Lanier, Douglas Rushkoff

IN THE NEWS
THE GUARDIAN

From Obama to Cameron, why do so many politicians want a piece of Richard Thaler? By Aditya Chakrabortty

THE NEW YORKER
Surfing the Universe
By Benjamin Wallace-Wells

THE TIMES
Why Barack Obama and David Cameron are keen to 'nudge you'
By Carol Lewis

THE BUSINESS TIMES (SINGAPORE)
Psychology's Ambassador to Economics
Daniel Kahneman talks to Vikram Khanna

THE WASHINGTON POST
Jason Calacanis' First New Email Post
By Nik Cubrilovic

NEW SCIENTIST
A Way With Words
By Jo Marchant

HIGHFIELD NAMED EDITOR OF NEW SCIENTIST


Only one third of a search engine is devoted to fulfilling search requests. The other two thirds are divided between crawling (sending a host of single-minded digital organisms out to gather information) and indexing (building data structures from the results). Ed's job was to balance the resulting loads.

When Ed examined the traffic, he realized that Google was doing more than mapping the digital universe. Google doesn't merely link or point to data. It moves data around. Data that are associated frequently by search requests are locally replicated—establishing physical proximity, in the real universe, that is manifested computationally as proximity in time. Google was more than a map. Google was becoming something else. ...

ENGINEERS' DREAMS By George Dyson

Introduction by Stewart Brand

How does one come to a new understanding? The standard essay or paper makes a discursive argument, decorated with analogies, to persuade the reader to arrive at the new insight.

The same thing can be accomplished—perhaps more agreeably, perhaps more persuasively—with a piece of fiction that shows what would drive a character to come to the new understanding. Tell us a story!

This George Dyson gem couldn't find a publisher in a fiction venue because it's too technical, and technical publications (including Wired) won't run it because it's fiction. Shame on them. Edge to the rescue.

SB

GEORGE DYSON, a historian among futurists, is the author of Baidarka; Project Orion; and Darwin Among the Machines.

George Dyson's Edge Bio Page


ENGINEERS' DREAMS

[Note: although the following story is fiction, all quotations
have been reproduced exactly from historical documents that exist.]

Ed was old enough to remember his first transistor radio—a Zenith Royal 500—back when seven transistors attracted attention at the beach. Soon the Japanese showed up, doing better (and smaller) with six.

By the time Ed turned 65, fifteen billion transistors per second were being produced. Now 68, he had been lured out of retirement when the bidding wars for young engineers (and between them for houses) prompted Google to begin looking for old-timers who already had seven-figure mid-peninsula roofs over their heads and did not require stock options to show up for work. Bits are bits, gates are gates, and logic is logic. A systems engineer from the 1960s was right at home in the bowels of a server farm in Mountain View.

In 1958, fresh out of the Navy, Ed had been assigned to the System Development Corporation in Santa Monica to work on SAGE, the hemispheric air defense network that was completed just as the switch from bombers to missiles made it obsolete. Some two dozen air defense sector command centers, each based around an AN-FSQ-7 (Army Navy Fixed Special eQuipment) computer built by IBM, were housed in windowless buildings armored by six feet of blast-resistant concrete. Fifty-eight thousand vacuum tubes, 170,000 diodes, 3,000 miles of wiring, 500 tons of air-conditioning equipment and a 3000-kilowatt power supply were divided between two identical processors, one running the active system and the other standing by as a “warm” backup, running diagnostic routines. One hundred Air Force officers and personnel were stationed at each command center, trained to follow a pre-rehearsed game plan in the event of enemy attack. Artificial intelligence? The sooner the better, Ed hoped. Only the collective intelligence of computers could save us from the weapons they had helped us to invent.

In 1960, Ed attended a series of meetings with Julian Bigelow, the legendary engineer who had collaborated with Norbert Wiener on anti-aircraft fire control during World War II and with John von Neumann afterwards—developing the first 32 x 32 x 40-bit matrix of random-access memory and the logical architecture that has descended to all computers since. Random-access memory gave machines access to numbers—and gave numbers access to machines.

Bigelow was visiting at RAND and UCLA, where von Neumann (preceded by engineers Gerald Estrin, Jack Rosenberg, and Willis Ware) had been planning to build a new laboratory before cancer brought his trajectory to a halt. Copies of the machine they had built together in Princeton had proliferated as explosively as the Monte Carlo simulations of chain-reacting neutrons hosted by the original 5-kilobyte prototype in 1951. Bigelow, who never expected the design compromises he made in 1946 to survive for sixty years, questioned the central dogma of digital computing: that without programmers, computers cannot compute. He viewed processors as organisms that digest code and produce results, consuming instructions so fast that iterative, recursive processes are the only way that humans are able to generate instructions fast enough to keep up. "Highly recursive, conditional and repetitive routines are used because they are notationally efficient (but not necessarily unique) as descriptions of underlying processes," he explained. Strictly sequential processing, and strictly numerical addressing impose severe restrictions on the abilities of computers, and Bigelow speculated from the very beginning about "the possibility of causing various elementary pieces of information situated in the cells of a large array (say, of memory) to enter into a computation process without explicitly generating a coordinate address in 'machine-space' for selecting them out of the array."

At Google, Bigelow's vision was being brought to life. The von Neumann universe was becoming a non-von Neumann universe. Turing machines were being assembled into something that was not a Turing machine. In biology, the instructions say "Do this with that" (without specifying where or when the next available copy of a particular molecule is expected to be found) or "Connect this to that" (without specifying a numerical address). Technology was finally catching up. Here, at last, was the long-awaited revolt against the intolerance of the numerical address matrix and central clock cycle for error and ambiguity in specifying where and when.
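(A concrete aside, not part of the story: the little Python snippet below is purely illustrative, invented for this page rather than drawn from Bigelow's papers. It sets the two addressing styles side by side: an indexed fetch, which must name a coordinate address in machine-space, and a template match, in which any cell whose contents fit the pattern enters the computation, wherever it happens to be stored.)

    # Toy memory and template, invented for illustration only.
    memory = ["GATTACA", "searching", "AdWords", "nucleotide", "search engine"]

    # von Neumann style: the program must know *where* the operand lives.
    operand = memory[3]                 # explicit numerical address -> 'nucleotide'

    # Template style: the program says *what* it wants; every matching cell
    # enters the computation, with no coordinate address generated at all.
    def matches_template(cell):
        return "search" in cell

    operands = [cell for cell in memory if matches_template(cell)]
    print(operand)      # nucleotide
    print(operands)     # ['searching', 'search engine']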

The advent of template-based addressing would unleash entirely new forms of digital organisms, beginning with simple and semi-autonomous coded structures, on the level of nucleotides bringing amino acids (or template-based AdWords revenue) back to a collective nest. The search for answers to questions of interest to human beings was only one step along the way.

Google was inverting the von Neumann matrix—by coaxing the matrix into inverting itself. Von Neumann's "Numerical Inverting of Matrices of High Order," published (with Herman Goldstine) in 1947, confirmed his ambition to build a machine that could invert matrices of non-trivial size. A 1950 postscript, "Matrix Inversion by a Monte Carlo Method," describes how a statistical, random-walk procedure credited to von Neumann and Stan Ulam "can be used to invert a class of n-th order matrices with only n² arithmetic operations in addition to the scanning and discriminating required to play the solitaire game." The aggregate of all our searches for unpredictable (but meaningful) strings of bits is, in effect, a Monte Carlo process for inverting the matrix that constitutes the World Wide Web.
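(For readers who want the solitaire game in runnable form, here is a minimal Python sketch of the von Neumann-Ulam idea, not a reconstruction of the 1950 procedure itself: an entry of the inverse of I - A is estimated by scoring random walks whose step weights follow the matrix entries, so that the Neumann series I + A + A² + ... is summed by sampling rather than by exhaustive arithmetic. The function name, the uniform transition probabilities, and the stopping probability are illustrative choices.)

    import numpy as np

    def neumann_ulam_entry(A, i, j, n_walks=200_000, p_stop=0.4, seed=None):
        """Estimate entry (i, j) of inv(I - A) by a von Neumann-Ulam
        random-walk estimator. Requires the Neumann series
        I + A + A^2 + ... to converge."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        total = 0.0
        for _ in range(n_walks):
            state, weight, score = i, 1.0, 0.0
            while True:
                if state == j:              # walk visits column j: score current weight
                    score += weight
                if rng.random() < p_stop:   # terminate this walk
                    break
                nxt = rng.integers(n)       # uniform transition probabilities
                # importance weight: matrix entry / probability of taking that step
                weight *= A[state, nxt] / ((1.0 - p_stop) / n)
                state = nxt
            total += score
        return total / n_walks

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = 0.1 * rng.random((4, 4))        # small entries, so the series converges
        exact = np.linalg.inv(np.eye(4) - A)
        estimate = neumann_ulam_entry(A, 0, 1, seed=1)
        print(f"exact {exact[0, 1]:.4f}   monte carlo {estimate:.4f}")

As the number of walks grows, the sample mean converges on the exact entry of the inverse; no explicit elimination or full matrix arithmetic is ever performed.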

Ed developed a rapport with the machines that escaped those who had never felt the warmth of a vacuum tube or the texture of a core memory plane. Within three months he was not only troubleshooting the misbehavior of individual data centers, but examining how the archipelago of data centers cooperated—and competed—on a global scale.

In the digital universe that greeted the launch of Google, 99 percent of processing cycles were going to waste. The global computer, for all its powers, was perhaps the least efficient machine that humans had ever built. There was a thin veneer of instructions, and then there was this dark, empty, 99 percent.

What brought Ed to the attention of Google was that he had been in on something referred to as "Mach 9." In the late 1990s, a web of optical fiber had engulfed the world. At peak deployment, in 2000, fiber was being rolled out, globally, at 7,000 miles per hour, or nine times the speed of sound. Mach 9. All the people in the world, talking at once, could never light up all this fiber. But those 15 billion transistors being added every second could. Google had been buying up dark fiber at pennies on the dollar and bringing in those, like Ed, who understood the high-speed optical switching required to connect dark processors to dark fiber. Metazoan codes would do the rest.

As he surveyed the Google Archipelago, Ed was reminded of some handwritten notes that Julian Bigelow had shown him, summarizing a conversation between Stan Ulam and John von Neumann on a bench in Central Park in early November 1952. Ulam and von Neumann had met in secret to discuss the 10-megaton Mike shot, whose detonation at Eniwetok on November 1 would be kept embargoed from the public until 1953. Mike ushered in not only the age of thermonuclear weapons but the age of digital computers, confirming the inaugural calculation that had run on the Princeton machine for a full six weeks. The conversation soon turned from the end of one world to the dawning of the next.

“Given is an actually infinite system of points (the actual infinity is worth stressing because nothing will make sense on a finite no matter how large model),” noted Ulam, who then sketched how he and von Neumann had hypothesized the evolution of Turing-complete universal cellular automata within a digital universe of communicating memory cells. For von Neumann to remain interested, the definitions had to be mathematically precise: “A ‘universal’ automaton is a finite system which given an arbitrary logical proposition in form of (a linear set L) tape attached to it, at say specified points, will produce the true or false answer. (Universal ought to have relative sense: with reference to a class of problems it can decide). The ‘arbitrary’ means really in a class of propositions like Turing's—or smaller or bigger.”

“An organism (any reason to be afraid of this term yet?) is a universal automaton which produces other automata like it in space which is inert or only ‘randomly activated’ around it,” Ulam’s notes continued. “This ‘universality’ is probably necessary to organize or resist organization by other automata?” he asked, parenthetically, before outlining a mathematical formulation of the evolution of such organisms into metazoan forms. In the end he acknowledged that a stochastic, rather than deterministic, model might have to be invoked, which, “unfortunately, would have to involve an enormous amount of probabilistic superstructure to the outlined theory. I think it should probably be omitted unless it involves the crux of the generation and evolution problem—which it might?”

The universal machines now proliferating fastest in the digital universe are virtual machines—not simply Turing machines, but Turing-Ulam machines. They exist as precisely defined entities in the von Neumann universe, but have no fixed existence in ours. Sure, thought Ed, they are merely doing the low-level digital housekeeping that does not require dedicated physical machines. But Ed knew this was the beginning of something big. Google (both directly and indirectly) was breeding huge numbers of Turing-Ulam machines. They were proliferating so fast that real machines were having trouble keeping up.

Only one third of a search engine is devoted to fulfilling search requests. The other two thirds are divided between crawling (sending a host of single-minded digital organisms out to gather information) and indexing (building data structures from the results). Ed's job was to balance the resulting loads.

When Ed examined the traffic, he realized that Google was doing more than mapping the digital universe. Google doesn't merely link or point to data. It moves data around. Data that are associated frequently by search requests are locally replicated—establishing physical proximity, in the real universe, that is manifested computationally as proximity in time. Google was more than a map. Google was becoming something else.

In the seclusion of the server room, Ed's thoughts drifted back to the second-floor communications center that linked the two hemispheres of SAGE's AN-FSQ-7 brain. "Are you awake? Yes, now go back to sleep!" was repeated over and over, just to verify that the system was on the alert.

SAGE's one million lines of code were near the limit of a system whose behavior could be predicted from one cycle to the next. Ed was reminded of cybernetician W. Ross Ashby's "Law of Requisite Variety": that any effective control system has to be as complex as the system it controls. This was the paradox of artificial intelligence: any system simple enough to be understandable will not be complicated enough to behave intelligently; and any system complicated enough to behave intelligently will not be simple enough to understand. Some held out hope that the path to artificial intelligence could be found through the human brain: trace the pattern of connections into a large enough computer, and you would end up re-creating mind.

Alan Turing's suggestion, to build a disorganized machine with the curiosity of a child, made more sense. Eventually, "interference would no longer be necessary, and the machine would have ‘grown up’." This was Google's approach. Harvest all the data in the world, rendering all available answers accessible to all possible questions, and then reinforce the meaningful associations while letting the meaningless ones die out. Since, by diagonal argument in the scale of possible infinities, there will always be more questions than answers, it is better to start by collecting the answers, and then find the questions, rather than the other way around.

And why trace the connections in the brain of one individual when you can trace the connections in the mind of the entire species at once? Are we searching Google, or is Google searching us?

Google's data centers—windowless, but without the blast protection—were the direct descendants of SAGE. It wasn't just the hum of air conditioning and warm racks of electronics that reminded Ed of 1958. The problem Ed faced was similar—how to balance the side that was awake with the side that was asleep. For SAGE, this was simple—the two hemispheres were on opposite sides of the same building—whereas Google's hemispheres were unevenly distributed from moment to moment throughout a network that spanned the globe.

Nobody understood this, not even Ed. The connections between data centers were so adaptable that you could not predict, from one moment to the next, whether a given part of the Googleverse was "asleep" or "awake." More computation was occurring while "asleep," since the system was free to run at its own self-optimizing pace rather than wait for outside search requests.

Unstable oscillations had begun appearing, and occasionally triggered overload alerts. Responding to the alarms, Ed finally did what any engineer of his generation would do: he went home, got a good night's sleep, and brought his old Tektronix oscilloscope with him to work.

He descended into one of the basement switching centers and started tapping into the switching nodes. In the digital age, everything had gone to megahertz, and now gigahertz, and the analog oscilloscope had been left behind. But if you had an odd wave-form that needed puzzling over, this was the tool to use.

What if analog was not really over? What if the digital matrix had now become the substrate upon which new, analog structures were starting to grow? Pulse-frequency coding, whether in a nervous system or a probabilistic search-engine, is based on statistical accounting for what connects where, and how frequently connections are made between given points. PageRank for neurons is one way to describe the working architecture of the brain. As von Neumann explained in 1948: "A new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse." Ulam had summed it up: “What makes you so sure that mathematical logic corresponds to the way we think?”
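(PageRank is itself such a statistical accounting: a node's rank is the share of random-walk traffic arriving along its incoming connections. The short Python sketch below, with an arbitrary four-node adjacency matrix and the conventional 0.85 damping factor, illustrates the bookkeeping only; it is not a description of Google's production system.)

    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-9):
        """Power-iteration PageRank on a 0/1 adjacency matrix: each node's
        score is the share of random-walk traffic arriving along its links."""
        n = adj.shape[0]
        out_degree = adj.sum(axis=1, keepdims=True)
        out_degree[out_degree == 0] = 1          # crude guard against division by zero
        transition = adj / out_degree            # row-stochastic link matrix
        rank = np.full(n, 1.0 / n)
        while True:
            new_rank = (1 - damping) / n + damping * transition.T @ rank
            if np.abs(new_rank - rank).sum() < tol:
                return new_rank
            rank = new_rank

    # Four nodes (pages, or neurons); rank accumulates where connections point.
    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    print(pagerank(adj).round(3))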

As Ed traced the low-frequency harmonic oscillations that reverberated below the digital horizon, he lost track of time. He realized he was hungry and made his way upstairs. The oscilloscope traces had left ghosts in his vision, like the image that lingers for a few seconds when a cathode-ray tube is shut down. As he sat down to a bowl of noodles in the cafeteria, he realized that he had seen these 13-hertz cycles, clear off the scale of anything in the digital world, before.

It was 1965, and he had been assigned, under a contract with Stanford University, to physiologist William C. Dement, who was setting up a lab to do sleep research. Dement, who had been in on the discovery of what became known as REM sleep, was investigating newborn infants, who spend much of their time in dreaming sleep. Dement hypothesized that dreaming was an essential step in the initialization of the brain. Eventually, if all goes well, awareness of reality evolves from the internal dream—a state we periodically return to during sleep. Ed had helped with setting up Dement's lab, and had spent many late nights getting the electroencephalographs fine-tuned. He had lost track of Bill Dement over the years. But he remembered the title of the article in SCIENCE that Dement had sent to him, inscribed "to Ed, with thanks from Bill." It was "Ontogenetic Development of the Human Sleep-Dream Cycle. The prime role of ‘dreaming sleep’ in early life may be in the development of the central nervous system."

Ed cleared his tray and walked outside. In a few minutes he was at the edge of the Google campus, and kept walking, in the dark, towards Moffett Field. He tried not to think. As he started walking back, into the pre-dawn twilight, with the hint of a breeze bringing the scent of the unseen salt marshes to the east, he looked up at the sky, trying to clear the details of the network traffic logs and the oscilloscope traces from his mind.

For 400 years, we have been waiting for machines to begin to think.

"We've been asking the wrong question," he whispered under his breath.

They would start to dream first.


Computers and networks finally offer us the ability to write. And we do write with them. Everyone is a blogger, now. Citizen bloggers and YouTubers who believe we have now embraced a new "personal" democracy. Personal, because we can sit safely at home with our laptops and type our way to freedom.

But writing is not the capability being offered us by these tools at all. The capability is programming—which almost none of us really know how to do. We simply use the programs that have been made for us, and enter our blog text in the appropriate box on the screen. Nothing against the strides made by citizen bloggers and journalists, but big deal. Let them eat blog.

THE NEXT RENAISSANCE
A Talk by Douglas Rushkoff

Introduction

"The Next Renaissance" is Douglas Rushkoff's keynote address at Personal Democracy Forum 2008 (PDF) took place June 23-24 in New York City, at Frederick P. Rose Hall, the home of Jazz at Lincoln Center.

PDF, which is run by Andrew Rasiej and Micah Sifry, tracks how the presidential candidates are using the web and, conversely, how content generated by voters is affecting the campaign. According to the organizers: "The 2008 election will be the first where the Internet will play a central role, not only in terms of how the campaigns use technology, but also in how voter-generated content affects its course." This is the first of several PDF presentations that Edge will run this summer.

JB

DOUGLAS RUSHKOFF is an author, lecturer, and social theorist. His books include Cyberia: Life in the Trenches of Hyperspace, Media Virus!, and Coercion: Why We Listen to What "They" Say.

Douglas Rushkoff's Edge Bio Page


THE NEXT RENAISSANCE



To me, "Personal Democracy" is an oxymoron. Democracy may be a lot of things, but the last thing it should be is "personal." I understand "personal responsibility," such as a family having a recycling bin in which they put their glass and metal every week. But even then, a single recycling bin for a whole building or block would be more efficient and appropriate.

Democracy is not personal, because if it's about anything, it's not about the individual. Democracy is about others. It's about transcending the self and acting collectively. Democracy is people, participating together to make the world a better place.

One of the essays in this conference's proceedings—the book "Rebooting Democracy"— remarks snarkily, "It's the network, stupid." That may go over well with all of us digital folks, but it's not true. It's not the network at all; it's the people. The network is the tool—the new medium that might help us get over the bias of our broadcasting technologies. All those technologies that keep us focused on ourselves as individuals, and away from our reality as a collective.

This focus on the individual, and its false equation with democracy, began back in the Renaissance. The Renaissance brought us wonderful innovations, such as perspective painting, scientific observation, and the printing press. But each of these innovations defined and celebrated individuality. Perspective painting celebrates the perspective of an individual on a scene. Scientific method showed how the real observations of an individual promote rational thought. The printing press gave individuals the opportunity to read, alone, and cogitate. Individuals formed perspectives, made observations, and formed opinions.

The individual we think of today was actually born in the Renaissance. The Vitruvian Man, Da Vinci's great drawing of a man in a perfect square and circle—independent and self-sufficient. This is the Renaissance ideal.

It was the birth of this thinking, individuated person that led to the ethos underlying the Enlightenment. Once we understood ourselves as individuals, we understood ourselves as having rights. The Rights of Man. A right to property. The right to personal freedom.

The Enlightenment—for all its greatness—was still oh, so personal in its conception. The reader alone in his study, contemplating how his vote matters. One man, one vote. We fought revolutions for our individual rights as we understood them. There were mass actions, but these were masses of individuals, fighting for their personal freedoms.

Ironically, with each leap towards individuality there was a corresponding increase in the power of central authorities. Remember, the Renaissance also brought us centralized currencies, chartered corporations, and nation states. As individuals become concerned with their personal plights, their former power as a collective moves to central authorities. Local currencies, investments, and civic institutions dissolve as self-interest increases. The authority associated with them moves to the center and away from all those voting people.

The medium of the Renaissance—the printing press—is likewise terrific at myth-making. At branding. Its stories are told to individuals, either through books, or through broadcast media directed at each and every one of us. Its appeals are to self and self-interest.

Consider any commercial for blue jeans. Its target audience is not a confident person who already has a girlfriend. The commercial communicates, "wear these jeans, and you'll get to have sex." Who is the target for that message? An isolated, alienated person who does not have sex. The messaging targets the individual. If it's a mass medium, it targets many, many individuals.

Movements, like myths and brands, depend on this quality of top-down, Renaissance-style media. They are not genuinely collective at all, in that there's no promotion of interaction between the people in them. Instead, all the individuals relate to the hero, ideal, or mythology at the top. Movements are abstract—they have to be. They hover above the group, directing all attention towards themselves.

As I listen to people talk here—well-meaning progressives, no doubt—I can't help but hear the romantic, almost desperate desire to become part of a movement. To become part of something famous, like the Obama campaign. Maybe even get a good K Street job out of the connections we make here. It's a fantasy perpetuated by the TV show The West Wing. A myth that we want to be part of. But like any myth, it is a fantasy—and one almost entirely prefigured by Renaissance individualism.

The next renaissance (if there is one)—the phenomenon we're talking about or at least around here is not about the individual at all, but about the networked group. The possibility for collective action. The technologies we're using—the biases of these media—cede central authority to decentralized groups. Instead of moving power to the center, they tend to move power to the edges. Instead of creating value from the center—like a centrally issued currency—the network creates value from the periphery.

This means the way to participate is not simply to subscribe to an abstract, already-written myth, but to do real things. To take small actions in real ways. The glory is not in the belief system or the movement, but in the doing. It's not about getting someone elected, it's about removing the obstacles to real people doing what they need to do to get the job done. That's the opportunity of the networked, open source era: to drop out of the myths and actually do.

Sadly, we tend to miss the great opportunities offered us by major shifts in media.

The first great renaissance in media, the invention of the alphabet, offered a tremendous leap for participatory democracy. Only priests could read and write hieroglyphs. The invention of the alphabet opened the possibility for people to read or even possibly write, themselves. In Torah myth, Moses goes off with his father-in-law to write the laws by which an enslaved people could now live. Instead of simply accepting legislation and government as a pre-existing condition—the God Pharaoh—people would develop and write down the law as they wanted it. Even the Torah is written in the form of a contract, and God creates the world with a word.

Access to language was to change a world of blind, enslaved rule followers into a civilization of literate people. (This is what is meant when God tells Abraham "you will be a nation of priests." It means they are to be a nation of people who transcend hieroglyphs, or "priestly writing," to become literate.)

But this isn't what happened. People didn't read Torah—they listened as their leaders read it to them. Hearing was a step up from simply following, but the promise of the new medium had not been seized.

Likewise, the invention of the printing press did not lead to a civilization of writers—it developed a culture of readers. Gentlemen sat reading books, while the printing presses were accessed by those with the money or power to use them. The people remained one step behind the technology. Broadcast radio and television are really just an extension of the printing press: expensive, one-to-many media that promote the mass distribution of the stories and ideas of a small elite.

Computers and networks finally offer us the ability to write. And we do write with them. Everyone is a blogger, now. Citizen bloggers and YouTubers who believe we have now embraced a new "personal" democracy. Personal, because we can sit safely at home with our laptops and type our way to freedom.

But writing is not the capability being offered us by these tools at all. The capability is programming—which almost none of us really know how to do. We simply use the programs that have been made for us, and enter our blog text in the appropriate box on the screen. Nothing against the strides made by citizen bloggers and journalists, but big deal. Let them eat blog.

At the very least on a metaphorical level, the opportunity here is not to write about politics or—more likely—comment on what someone else has said about politics. The opportunity, rather, is to rewrite the very rules by which democracy is implemented. The opportunity of a renaissance in programming is to reconfigure the process through which democracy occurs.

If Obama is indeed elected—the first truly Internet-enabled candidate—we should take him at his word. He does not offer himself as the agent of change, but as an advocate of the change that could be enacted by people. It is not for government to create solar power, for example, but to get out of the way of all those people who are ready to implement solar power, themselves. Responding to the willingness of people to act, he can remove regulations developed on behalf of the oil industry to restrict its proliferation.

In an era when people have the ability to reprogram their reality, the job of leaders is to help facilitate this activity by tweaking legislation, or by supporting their efforts through better incentives or access to the necessary tools and capital. Change does not come from the top—but from the periphery. Not from a leader or a myth inspiring individuals to consent to it, but from people working to manifest it together.

Open Source Democracy—which I wrote about a decade ago—is not simply a way to get candidates elected to office. It is a collective reprogramming of the social software, a disengagement from the myths through which we abdicate responsibility, and a reclamation of our role as citizens who participate in the creation of the society in which we want to live.

This is not personal democracy at all, but a collective and participatory democracy where we finally accept our roles as the fully literate and engaged adults who can make this happen.

[Postscript: At the conference's closing ceremony Personal Democracy Forum founder Andrew Rasiej announced he would be changing the name of the conference to the Participatory Democracy Forum.]




[The July/August issue of Atlantic Monthly features a cover story by Nicholas Carr: "Is Google Making Us Stupid? What the Internet Is Doing to Our Brains". Carr is the author of the recently published The Big Switch: Rewiring the World, from Edison to Google and writes the blog Rough Type. He is also an Edge contributor. Danny Hillis disagrees with his argument. Here is Hillis's comment, which is hopefully the beginning of an interesting Edge Reality Club discussion. —JB]

ON "IS GOOGLE MAKING US STUPID"
By Nicholas Carr

W. Daniel Hillis, Kevin Kelly, Larry Sanger, George Dyson, Jaron Lanier, Douglas Rushkoff



ATLANTIC MONTHLY

July/August 2008

What the Internet is doing to our brains

IS GOOGLE MAKING US STUPID?
By Nicholas Carr

...I think I know what's going on. For more than a decade now, I've been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I've got the telltale fact or pithy quote I was after. Even when I'm not working, I'm as likely as not to be foraging in the Web's info-thickets—reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they're sometimes likened, hyperlinks don't merely point to related works; they propel you toward them.)

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they've been widely described and duly applauded. "The perfect recall of silicon memory," Wired's Clive Thompson has written, "can be an enormous boon to thinking." But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I'm not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they're having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. ...

...




W. DANIEL HILLIS [7.10.08]

Nicholas Carr is correct in noticing that something is "Making us Stupid", but it is not Google. Think of Google as a life preserver, thrown to us in a rising flood. True, we use it to stay on the surface, but it is not for the sake of laziness. It is for survival.

The flood that is drowning us is, of course, the flood of information, a metaphor so trite that we have ceased to question it. If the metaphor were new, we might ask: where exactly is this flood coming from? Is it a consequence of advances in communication technology? The power of media companies? Is it generated by our recently developed weakness for information snacks? All of these trends are real, but I believe they are not the cause. They are the symptoms of our predicament.

Fast communication, powerful media and superficial skimming are all creations of our insatiable demand for information. We don't just want more, we need more. While we complain about the overload, we sign up for faster internet service, in-pocket email, unlimited talk-time and premium cable. In the midst of the flood, we are turning on all the taps.

So why do we need so much information? Here is where we can blame technology, at least in part. Technology has destroyed the isolation of distance, so more of what happens matters to us. It is not just that the world has gotten more complicated (it has), but rather that more of the world has become relevant. Not only is the world more connected (or, as Thomas Friedman would say, flatter), but it is also bigger. There are more people, and more of them than ever have the resources to do something that matters to us. We need to know more because our world is bigger, flatter, and more complex.

Besides technology, we must also blame politics. We need to know more because we are expected to make more decisions. I can choose my own religion, my own communications carrier, and my own health care provider. As a resident of California, I vote my opinion on the generation of power, the definition of marriage and the treatment of farm animals. In the olden days, these kinds of things were decided by the King.

I do not mean to suggest that all the information we gather is for civic purposes. That I need to know more to do my job goes without saying, but I also need to know more just to have friends. I manage to get by without knowing exactly why Paris Hilton is famous, but I cannot fully participate in society without knowing that she is well known. Of course, my own social clan has its own Charlie Rose version of celebrities, complete with must-read books, must-understand ideas, and must-see films. I am expected to have an opinion about the latest piece in The Atlantic or the New Yorker. Actually, I need to learn more just to understand the cartoons.

We evolved in a world where our survival depended on an intimate knowledge of our surroundings. This is still true, but our surroundings have grown. We are now trying to comprehend the global village with minds that were designed to handle a patch of savanna and a close circle of friends. Our problem is not so much that we are stupider, but rather that the world is demanding that we become smarter. Forced to be broad, we sacrifice depth. We skim, we summarize, we skip the fine print and, all too often, we miss the fine point. We know we are drowning, but we do what we can to stay afloat.

As an optimist, I assume that we will eventually invent our way out of our peril, perhaps by building new technologies that make us smarter, or by building new societies that better fit our limitations. In the meantime, we will have to struggle. Herman Melville, as might be expected, put it better: "well enough they know they are in peril; well enough they know the causes of that peril; nevertheless, the sea is the sea, and these drowning men do drown."


KEVIN KELLY [7.11.08]

Will We Let Google Make Us Smarter?

Is Google making us stupid? 

That's the title of provocateur Nick Carr's piece in this month's Atlantic. Carr is a self-admitted worrywart who joins a long line of historical worrywarts worrying that new technologies are making us stupid. In fact, Carr does such a fine job of rounding up great examples of ancient worrywarts getting it all wrong that it's hard to take his own worry seriously.

For instance, as evidence that new technologies can make us stupid, he offers this story about the German writer Nietzsche. Near the end of his life, Nietzsche got so blind and old he could not write with a pen but learned to touch type (no sight needed) on a Malling-Hansen Writing Ball typewriter. (BTW, this device is one of the coolest gizmos I've seen. Check out the video here.)


But...

Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche's prose "changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style."

So was his change in style due to switching to a machine or was it because Nietzsche was ill and dying?

Likewise, is the ocean of short writing the web has generated due to our minds getting dumber and becoming incapable of paying attention to long articles, as Carr worries, or is it because we finally have a new vehicle and marketplace for loads of short things, whereas in the past short pieces were unprofitable to produce in such quantity? I doubt the former and suspect the latter is the better explanation.

Carr begins his piece by describing how much smarter he is while using Google. What if Carr is right? What if we are getting dumber when we're off Google, but loads smarter while we're on it? That doesn't seem improbable, and in fact seems pretty likely.

Question is, do you get off Google or stay on all the time?

I think that even if the penalty is that you lose 20 points of your natural IQ when you get off Google AI, most of us will choose to keep the 40 IQ points we gain by jacking in all the time.

At least I would.

[See "Will We Let Google Make Us Smarter?" on Kevin Kelly's Blog—The Technium]


LARRY SANGER [7.11.08]

Carr's essay is interesting, but his aim is off. On the one hand, he is probably right that many of us have a tendency to sample too much of everything from the Internet's information buffet—leading to epistemic indigestion. We ought to be reading more books—including more classics—or so I think. On the other hand, he is wrong to present the problem as a collective, techno-social one, beyond our individual control, a problem to be blamed on programmers, and treated mainly by social psychologists or technocrats rather than by the philosophers and humanists. Let me elaborate.

Carr identifies an important problem. He begins with the valid observation that many of us seem to be reading smaller and smaller snippets of text. Is it any wonder that Twitter is so popular? But ultimately, Carr implies—also correctly—the problem is the weakening of our ability to think things through for ourselves. Sadly, some even glorify and encourage this disturbing trend. Remember 2005's Blink: The Power of Thinking Without Thinking? Revolutionary times cry out for principled, systematic thought, for deep self-reflection. But, as Carr points out, the information revolution itself makes it too easy for us to shrink our attention even more than before and follow the crowd.

But ultimately we have no one to blame but ourselves for this. If some of us no longer seem to be able to read a book all the way through, it isn't because of Google or the vast quantity of information on the Internet. To say that is to buy into a sort of determinism that ultimately denies the very thing that makes us most human and arguably gives us our dignity: our ability to think things through, particularly in depth, in a way that can lead to our changing our minds in deep ways.

It is ridiculous to bemoan a state which is self-created; that is a sign of weakness of will, of indiscipline, not of victimhood. Carr actually blames it on "computer engineers and software coders" who build things like Google—which is silly. Indeed, to that extent, Carr profoundly misunderstands the nature of the problem: to pretend that you can blame others (programmers, no less!) for your unwillingness to think long and hard is only a sign of how the problem itself resides within you. It is ultimately a problem of will, a failure to choose to think. If that is a problem of yours, you have no one to blame for it but yourself.


GEORGE DYSON [7.11.08]

Nicholas Carr asks a question that all of us should be asking ourselves:

"What if the cost of machines that think is people who don't?"

It's a risk. "The ancestors of oysters and barnacles had heads. Snakes have lost their limbs and ostriches and penguins their power of flight. Man may just as easily lose his intelligence," warned J. B. S. Haldane in 1928.

We will certainly lose some treasured ways of thinking but the next generation will replace them with something new. The present generation has no childhood immunity to web-based stupidity but future generations will.

I am more worried by people growing up unable to tie a bowline, sharpen a hunting knife, or rebuild a carburetor than I am by people who don't read books. Perhaps books will end up back where they started, locked away in monasteries (or the depths of Google) and read by a select few.

We are here (on Edge) because people are still reading books. The iPod and the MP3 spelled the decline of the album and the rise of the playlist. But more people are listening to more music, and that is good.


JARON LANIER [7.14.08]

The thing that is making us stupid is pretending that technological change is an autonomous process that will proceed in its chosen direction independently of us.

It is certainly true that particular technologies can make you stupid. Casinos, dive bars, celebrity tabloids, crack cocaine…

And certainly there are digital technologies that don’t bring out the best or brightest aspects of human nature. Anonymous comments are an example.

The one thought that does the most to make technology worse is the thought that there is only one axis of choice, and that axis runs from pro- to anti-.

Designers of digital experiences should rejoice when an articulate critic comes along, because that’s a crucial step in making digital stuff better.


DOUGLAS RUSHKOFF [7.14.08]

Back in 1995 I argued that we're looking at net-literate kids all wrong—that we were like fish bemoaning the fact that their children had evolved legs, walked on land, and in the process lost the ability to breathe underwater.

I'm not quite as optimistic as I was then, largely because we have remained fairly ignorant of the biases of media as we move from one system to the other. It's less a matter of "is this a good thing or a bad thing"—or, in Carr's terminology, a "smart or dumb" thing—than it is an issue of how conscious we are of each medium's strengths, and how consciously we move from one to another.

The problem with the Internet medium (or strength, as Malcolm Gladwell would argue) is how it pushes us towards "thin-slicing" or grazing information rather than digging in more deeply and considering it. Like a New Yorker piece that gives people the self-congratulatory and ultimately reassuring tidbits they need to discuss an issue at a cocktail party, the Web feeds in more bite-size doses.

The Web's strength, however, is in providing its text in more conversational and collaborative contexts. While print is biased towards the person (with a lot of time) sitting in his or her study and reading very much alone, the Web opens possibilities for more shared explorations. Like this one right here.

So the key, as I see it, is understanding the biases of the medium—as McLuhan would advise. We might learn to see our movement from one dominant medium to another less as a net gain or loss, but rather as a shift of landscape that can be exploited quite positively if we take the time and energy to honestly survey the characteristics and opportunities of the new terrain.



THE GUARDIAN
July 12, 2008

From Obama to Cameron, why do so many politicians want a piece of Richard Thaler?

Richard Thaler, Professor of Behavioral Science and Economics at the Graduate School of Business, University of Chicago. Photograph: Felix Clay

What is the big idea of Richard Thaler, the economist quoted by David Cameron and Barack Obama? It comes down to this: you're not as smart as you think. Humans, he believes, are less rational and more influenced by peer pressure and suggestion than governments and economists reckon.

"Economists assume people have brains like supercomputers that can solve anything," says Thaler. "But human minds are more like really old Apple Macs with slow processing speeds and prone to frequent crashes."

According to this view, voters are less Mr Spock than Homer Simpson and they could do with a bit of help - what Thaler terms a "nudge" - to save more, eat more healthily and do all the other things that they know they should.

Cameron is so interested in the idea that in a speech last month he mentioned Thaler, his co-author Cass Sunstein and even the fact they had a new book out, Nudge. He then summed up their argument: "One of the most important influences on people's behaviour is what other people do ... with the right prompting we'll change our behaviour to fit in with what we see around us." It was surely the best plug two Chicago academics with a book about the obscure discipline of behavioural economics could hope for. ...

...


NEW YORKER
July 21, 2008

ANNALS OF SCIENCE

SURFING THE UNIVERSE
Benjamin Wallace-Wells

ANNALS OF SCIENCE about physicist Garrett Lisi’s “An Exceptionally Simple Theory of Everything.” Writer describes Lisi giving a talk at a conference in Morelia, Mexico in June of 2007. The conference was attended by the top researchers in a field called loop quantum gravity, which has emerged as a leading ...

...


THE TIMES

July 14, 2008

Why Barack Obama and David Cameron are keen to 'nudge' you

Richard Thaler, professor of economics and behavioural science at Chicago Graduate School of Business, talks about his new book and why nudging has caught the imagination of top politicians

Carol Lewis

Download our podcast to hear Richard Thaler, professor of economics and behavioural science at Chicago Graduate School of Business and co-author of Nudge, explain the concept of nudging and how it could lead to better forms of government. Both the Conservative leader, David Cameron, and Democratic presidential candidate, Barack Obama, have expressed an interest in what is being dubbed the new third way.

What is a nudge? - Nudge is the title of a new book by Richard Thaler and Harvard Law Professor Cass R Sunstein. The authors explain in the book that nudges are not mandates; they are gentle, non-intrusive persuaders, such as default rules, incentives, feedback mechanisms, and social cues, which influence your choice in a certain direction. However, they can be ignored - it is your choice to be nudged. For example, putting fruit at eye level in a school canteen to encourage healthy eating is a nudge; banning junk food is not.

Doesn't sound very academic? - The academic term for a nudge is libertarian paternalism. Described by Thaler and Sunstein as "a relatively weak, soft, and non-intrusive type of paternalism where choices are not blocked, fenced off, or significantly burdened. A philosophic approach to governance, public or private, to help homo sapiens who want to make choices that improve their lives, without infringing on the liberty of others." ...

...


THE BUSINESS TIMES (SINGAPORE)

July 12-13, 2008

PSYCHOLOGY'S AMBASSADOR TO ECONOMICS

The father of behavioural economics Daniel Kahneman talks to VIKRAM KHANNA about cognitive illusions, investor irrationality and measures of well-being

...Many mainstream economists still view behavioural economics with a mixture of curiosity and suspicion, but they are increasingly coming around, because some of its findings are too compelling to ignore.

Prof Kahneman does not, however, consider himself an economist. "Absolutely not," he says. "I study judgement and decision-making. I never really made a transition into the field of economics. What happened is that some economists became interested in our work. I learnt some economics from my friends over the years, but these were friends who were interested in what I was doing."

It is evident from Prof Kahneman's deeply introspective autobiography that his interest in the workings of the human mind goes back to his childhood. At the age of seven, in German-occupied France, he was already convinced, as his mother had told him, that "people were endlessly complicated and interesting".

About this fundamental truth, he was to discover more and more, in a lifetime of study of the human psyche. One of his key findings was that people suffer from various cognitive illusions, which affect their decisions and their behaviour. He has documented scores of these and inspired other researchers to find even more. ...



WASHINGTON POST
July 13, 2008

Jason Calacanis' First New Email Post

Nik Cubrilovic TechCrunch.com

Jason Calacanis announced on Friday that he was retiring from blogging. There was a very mixed reaction to the news, with most believing it to be a publicity stunt. Jason said in his farewell post that instead of blogging, he would be posting to a mailing list made up of his followers, capped at 750 subscribers. That subscriber limit was reached very quickly, and today Jason sent out his first new 'post' to that mailing list, which we have included below.

We expect that moving his posts to a mailing list will not achieve what he has set out to do: have a conversation with the top slice of his readers. Instead, you will likely see his emails re-published, probably on a blog, and probably with comments and everything else.

> From: "Jason Calacanis"> Date: July 13, 2008 11:16:15 AM PDT> To: [email protected]> Subject: [Jason] The fallout (from the load out)>> Brentwood, California> Sunday, July 12th 11:10AM PST.> Word Count: 1,588> Jason's List Subscriber Count: 1,095> List: http://tinyurl.com/jasonslist>> Team Jason,>> Wow, it's been an amazing 24 hours since I officially announced my> retirement from blogging ( http://tinyurl.com/jasonretires ). .... John Brockman explained to me at one time that some> of the most interesting folks he's met have, over time, become less> vocal. He explained, that there was a inverse correlation between your> success and your ability to tell the truth. When I met John I was> nobody and I promised myself I would never, ever censor myself if I> become successful. ... Comments on blogs inevitably implode, and we all accept it> under the belief that "open is better!" Open is not better. Running a> blog is like letting a virtuoso play for 90 minutes are Carnegie Hall,> and then seconds after their performance you run to the back Alley and> grab the most inebriated homeless person drag them on stage and ask> them what they think of the performance they overheard in the Alley.> They then take a piss on the stage and say "F-you" to the people who> just had a wonderful experience for 90 or 92 minutes. That's openness> for you¿ my how far we've come! We've put the wisdom of the deranged> on the same level as the wisdom of the wise.>> You and I now have a direct relationship, and I'm cutting the mailing> list off today so it stays at 1,000 folks. I'll add selectively to> the list, but for now I'm more interested in a deep relationship with> the few of you have chosen to make a commitment with me. Perhaps some> of you will become deep, considered colleagues and friends¿something> that doesn't happen for me in the blogosphere any more.>> Much of my inspiration for doing this comes from what I've seen with> John Brockman's Edge.org email newsletter. When it enters my inbox I'm> inspired and focused. I print it, and I don't print anything. The> people that surround him are epic, and that's my inspiration¿to be> surrounded by exceptional people.>>>...

...



NEW SCIENTIST

" July 5-11, 2008

Interview: The language detective

A WAY WITH WORDS
Jo Marchant

Everyone's favourite linguist, Steven Pinker, is known for his theory that the mental machinery behind language is innate. In his latest book, The Stuff of Thought, he asks what language tells us about how we think. He says the words and grammar we use reflect inherited rules that govern our emotions and social relationships. Jo Marchant asked Pinker why he thinks that concepts of space, time and causality are hard-wired in our brain, and why he's turning his thoughts to violence.

...How do you go about working out what makes societies less violent?

By looking at historical records. One hypothesis is that the development of a judicial system can mitigate people's thirst for vengeance: they can present their grievances to a disinterested party and see the offender punished, rather than going the route of vendettas and blood feuds. That can be tested by looking at violence rates after a judicial system is introduced, or by comparing similar societies with and without a judicial system. Another hypothesis is that trade diminishes violence. If you want what someone else has, you buy it from him rather than kill him.

Do you hope to find answers that can be applied to society in the future?

I hope so. People like to moralise about violence - to say that there are bad people who like war, and good people who like peace, and that we need to make people more peace-loving. Perhaps, but that should be treated as a testable hypothesis, not a self-evident truth. Does pacifism lead to a less violent society, or does it lead to appeasement, and hence to more violence? I hope that violence can be treated as an empirical, not just a moral, question.

...


HIGHFIELD NAMED EDITOR OF NEW SCIENTIST

ROGER HIGHFIELD, award-winning Science Editor of The Daily Telegraph, where he worked for more than 20 years, has been named as the next Editor of New Scientist magazine, which is now the world's biggest selling weekly science and technology magazine.

Jeremy Webb, New Scientist's Editor-in-Chief, said: "Roger is a formidable force in science journalism. He has immense knowledge and wisdom and is brimming with new ideas. We are expanding in the US, into new markets in India and elsewhere, and improving our web offering. The magazine is right at the centre of all these efforts and we need a strong, creative editor to lead it. I can't wait to start working with Roger."

Before starting at The Daily Telegraph, Highfield was News Editor of Nuclear Engineering International and clinical reporter for Pulse, the magazine for family doctors. He has an MA and DPhil in chemistry from the University of Oxford and spent time working as a scientist at Unilever and Institut Laue Langevin, Grenoble, France, where he became the first person to bounce a neutron off a soap bubble. He is the author of six popular science books and an Edge contributor.

Roger Highfield's Edge Bio Page



Paperback - US: $10.17, 336 pp, Harper Perennial | Hardcover - UK: £9.09, 352 pp, Free Press, UK

What Are You Optimistic About?: Today's Leading Thinkers on Why Things Are Good and Getting Better Edited by John Brockman Introduction by Daniel C. Dennett

"The optimistic visions seem not just wonderful but plausible." Wall Street Journal "Persuasively upbeat." O, The Oprah Magazine "Our greatest minds provide nutshell insights on how science will help forge a better world ahead." Seed "Uplifting...an enthralling book." The Mail on Sunday

Paperback - US: $11.16, 336 pp, Harper Perennial | Paperback - UK: £6.99, 352 pp, Free Press, UK

What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable Edited by John Brockman Introduction by Steven Pinker Afterword by Richard Dawkins

"Danger – brilliant minds at work...A brilliant bok: exhilarating, hilarious, and chilling." The Evening Standard (London) "A selection of the most explosive ideas of our age." Sunday Herald "Provocative" The Independent "Challenging notions put forward by some of the world’s sharpest minds" Sunday Times "A titillating compilation" The Guardian "Reads like an intriguing dinner party conversation among great minds in science" Discover

Paperback - US: $11.16, 272 pp, Harper Perennial | Paperback - UK: £5.39, 288 pp, Pocket Books

What We Believe but Cannot Prove: Today's Leading Thinkers on Science in the Age of Certainty Edited by John Brockman Introduction by Ian McEwan

"An unprecedented roster of brilliant minds, the sum of which is nothing short of an oracle — a book ro be dog-eared and debated." Seed "Scientific pipedreams at their very best." The Guardian "Makes for some astounding reading." Boston Globe Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4 "Intellectual and creative magnificence" The Skeptical Inquirer