Edge 184 — June 8, 2006
(21,200 words)



QUANTUM MONKEYS
A Talk with Seth Lloyd





The image of monkeys typing on typewriters is quite old. . . . Some people ascribe it to Thomas Huxley in his debate with Bishop Wilberforce in 1860, after the appearance of The Origin of Species. From eyewitness reports of that debate it is clear that Wilberforce asked Huxley from which side of his family, his mother's or his father's, he was descended from an ape. Huxley said, "I would rather be descended from a humble ape than from a great gentleman who uses considerable intellectual gifts in the service of falsehood." A woman in the audience fainted when he said that. They didn't have R-rated movies back then.

feature

Steven Pinker, Martin Nowak, J. Craig Venter, Lee Smolin, and Alan Guth respond to Seth Lloyd's "Quantum Monkeys".

[more...]


feature
On "Digital Maoism: The Hazards of the New Online Collectivism" By Jaron Lanier

Lanier's piece hits a nerve because human life always exists in tension between our individual and group identities, inseparable and incommensurable. For ten years now, it's been apparent that the rise of the digital was providing enormous new powers for the individual. It's now apparent that the world's networks are providing enormous new opportunities for group action.

Understanding how these cohabiting and competing revolutions connect to deep patterns of intellectual and social work is one of the great challenges of our age. The breadth and depth of the responses collected here, ranging from the broad philosophical questions to reckonings of the ground truth of particular technologies, is a testament to the complexity and subtlety of that challenge. (From the Introduction by Clay Shirky)

Responses to Lanier's essay from Douglas Rushkoff, Quentin Hardy, Yochai Benkler, Clay Shirky, Cory Doctorow, Kevin Kelly, Esther Dyson, Larry Sanger, Fernanda Viegas & Martin Wattenberg, Jimmy Wales, George Dyson, Dan Gillmor, Howard Rheingold

Now, another big idea is taking hold, but this time it's more painful for some people to embrace, even to contemplate. It's nothing less than the migration from individual mind to collective intelligence. I call it "here comes everybody", and it represents, for good or for bad, a fundamental change in our notion of who we are. In other words, we are witnessing the emergence of a new kind of person.

Projects like Wikipedia do not overthrow any elite at all, but merely replace one elite — in this case an academic one — with another: the interactive media elite.
— Douglas Rushkoff

Our new tool for communication and computation may take us away from distinct individualism, and towards something closer to the tender nuance of folk art or the animal energy of millenarianism.
— Quentin Hardy

Network-based, distributed, social production, both individual and cooperative, offers a new system, alongside markets, firms, governments, and traditional non-profits, within which individuals can engage in information, knowledge, and cultural production. This new modality of production offers new challenges, and new opportunities. It is the polar opposite of Maoism.
— Yochai Benkler

The personal computer produced an incredible increase in the creative autonomy of the individual. The internet has made group forming ridiculously easy. Since social life involves a tension between individual freedom and group participation, the changes wrought by computers and networks are therefore in tension. To have a discussion about the plusses and minuses of various forms of group action, though, is going to require discussing the current tools and services as they exist, rather than discussing their caricatures or simply wishing that they would disappear.
— Clay Shirky

Wikipedia isn't great because it's like the Britannica. The Britannica is great at being authoritative, edited, expensive, and monolithic. Wikipedia is great at being free, brawling, universal, and instantaneous.
— Cory Doctorow

The bottom-up hive mind will always take us much further than seems possible. It keeps surprising us. In this regard, the Wikipedia truly is exhibit A, impure as it is, because it is something that is impossible in theory, and only possible in practice. It proves the dumb thing is smarter than we think. At the same time, the bottom-up hive mind will never take us to our end goal. We are too impatient. So we add design and top down control to get where we want to go.
— Kevin Kelly

So, to get the best results, we have people sharpening their ideas against one another rather than simply editing someone's contribution and replacing it with another. We also have a world where the contributors have identities (real or fake, but consistent and persistent) and are accountable for their words. Much like Edge, in fact.
— Esther Dyson

How can I both reject epistemic collectivism and yet say that Wikipedia is a great project, which I do? Well, the problem is that epistemic collectivists like Wikipedia but for the wrong reasons. What's great about it is not that it produces an averaged view, an averaged view that is somehow better than an authoritative statement by people who actually know the subject. That's just not it at all. What's great about Wikipedia is the fact that it is a way to organize enormous amounts of labor for a single intellectual purpose.
— Larry Sanger

This rich context, attached to many Wikipedia articles, is known as a "talk page." The talk page is where the writers for an article hash out their differences, plan future edits, and come to agreement about tricky rhetorical points. This kind of debate doubtless happens in the New York Times and Britannica as well, but behind the scenes. Wikipedia readers can see it all, and understand how choices were made.
— Fernanda Viegas & Martin Wattenberg

My response is quite simple: this alleged "core belief" is not one which is held by me, nor as far as I know, by any important or prominent Wikipedians. Nor do we have any particular faith in collectives or collectivism as a mode of writing. Authoring at Wikipedia, as everywhere, is done by individuals exercising the judgment of their own minds.
— Jimmy Wales

Lanier does not want to debate the existence or non-existence of metaphysical entities. But his argument that online collectivism produces artificial stupidity offers no reassurance to me. Real artificial intelligence (if and when) will be unfathomable to us. At our level, it may appear as dumb as American Idol, or as pointless as a nervous twitch that corrects and uncorrects Jaron Lanier's Wikipedia entry in an endless loop.
— George Dyson

The debate does demonstrate how much we need to update our media literacy in a digital, distributed era. Our internal BS meters already work, but they've fallen into a low and sad level of use in the Big Media world. Many people tend to believe what they read. Others tend to disbelieve everything. Too few apply appropriate skepticism and do the additional work that true media literacy requires.
— Dan Gillmor

Collective action involves freely chosen self-election (which is almost always coincident with self-interest) and distributed coordination; collectivism involves coercion and centralized control; treating the Internet as a commons doesn't mean it is communist (tell that to Bezos, Yang, Filo, Brin or Page, to name just a few billionaires who managed to scrape together private property from the Internet commons).
— Howard Rheingold


feature
On GÖDEL IN A NUTSHELL by Verena Huber-Dyson

A response from Stephen Budiansky.

Verena Huber-Dyson, I think, misses some important considerations in her extrapolations from Gödel's incompleteness theorem to human mental types. She suggests that there are three types: those who are authoritarian-minded, who demand completeness and skip over inconsistencies; those who are scientific, who panic to the point of going mad in the face of inconsistency; and then the mass of unimaginative mankind, who are blithely unaware of either incompleteness or inconsistency.

[more...]



QUANTUM MONKEYS [5.23.06]
A Talk with Seth Lloyd





Introduction

Seth Lloyd is an Edgy guy. In fact he likes to work "at the very edge of this information processing revolution". He appeared at the Edge event in honor of Robert Trivers at Harvard and talked from his "experience in building quantum computers, computers where you store bits of information on individual atoms."

Ten years ago Lloyd came up with "the first method for physically constructing a computer in which every quantum — every atom, electron, and photon — inside a system stores and processes information...During this meeting, Craig Venter claimed that we're all so theoretical here that we've never seen actual data. I take that personally, because most of what I do on a day-to-day basis is to try to coax little super-conducting circuits to give up their secrets". Below is his talk along with his discussion with Steven Pinker, Martin Nowak, J. Craig Venter, Lee Smolin, and Alan Guth.

JB

SETH LLOYD is Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. His seminal work in the fields of quantum computation and quantum communications includes proposing the first technologically feasible design for a quantum computer.

He is the author of the recently published Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos.

SETH LLOYD'S Edge Bio Page


QUANTUM MONKEYS

(SETH LLOYD:) It's no secret that we're in the middle of an information-processing revolution. Electronic and optical methods of storing, processing, and communicating information have advanced exponentially over the last half-century. In the case of computational power this rapid advance is known as Moore's Law. In the 1960s, Gordon Moore, the ex-president of Intel, pointed out that the components of computers were halving in size every year or two, and consequently, the power of computers was doubling at the same rate. Moore's law has continued to hold to the present day. As a result these machines that we make, these human artifacts, are on the verge of becoming more powerful than human beings themselves in terms of raw information processing power. If you count the elementary computational events that occur in the brain or in the computer – bits flipping, synapses firing – the computer is likely to overtake the brain in terms of bits flipped per second in the next couple of decades.
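As a back-of-the-envelope illustration of that compounding (a minimal sketch; the 18-month doubling time below is an assumed round figure, since the talk says only "every year or two"):

    # Sketch of Moore's-law arithmetic, assuming power doubles every 18 months.
    DOUBLING_TIME_YEARS = 1.5

    def growth_factor(years):
        """Factor by which raw computing power grows over the given span."""
        return 2 ** (years / DOUBLING_TIME_YEARS)

    print(round(growth_factor(10)))   # roughly a 100-fold increase per decade
    print(round(growth_factor(20)))   # roughly 10,000-fold over two decades

A few decades of that kind of growth is what makes the comparison with synapses firing a near-term question rather than a distant one.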

We shouldn't be too concerned, though. For computers to become smarter than us is not really a hardware problem; it's more a software issue. Software evolves much more slowly than hardware, and indeed much current software seems to be designed to junk up the beautiful machines that we build. The situation is like the Cambrian explosion: a rapid increase in the power of the hardware, with the software lagging far behind. Who is smarter, humans or computers, is a question that will get sorted out some millions of years hence, maybe; maybe sooner. My guess would be that it will take hundreds or thousands of years until we actually get software that we could reasonably regard as useful and sophisticated. At the same time, we're going to have computing machines that are much more powerful quite soon.

Most of what I do in my everyday life is to work at the very edge of this information processing revolution. Much of what I say to you today comes from my experience in building quantum computers, computers where you store bits of information on individual atoms. About ten years ago I came up with the first method for physically constructing a computer in which every quantum – every atom, electron, and photon -- inside a system stores and processes information. Over the last ten years I've been lucky enough to work with some of the world's great experimental physicists and quantum mechanical engineers to actually build such devices. A lot of what I'm going to tell you today is informed by my experiences in making these quantum computers. During this meeting, Craig Venter claimed that we're all so theoretical here that we've never seen actual data. I take that personally, because most of what I do on a day-to-day basis is to try to coax little super-conducting circuits to give up their secrets.           

The digital information-processing revolution is only the most recent revolution, and it's by no means the greatest one. For instance, the invention of moveable type and the printing press has had a much greater impact on human society so far than the electronic revolution. There have been many information processing revolutions. One of my favorites is the invention of the so-called Arabic — actually Babylonian — numbers, in particular, zero. This amazing invention, very useful in terms of processing and registering information, came from the ancient Babylonians and then moved to India. It came to us through the Arabs, which is why we call it the Arabic number system. The invention of zero allows us to write the number 10 as one zero. This apparently tiny step is in fact an incredible invention that has given rise to all sorts of mathematics, including the bits — the 'binary digits' — of the digital computing revolution.
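To see concretely what the zero buys, here is positional notation written out (a small worked example, not from the talk):

    10_{\text{ten}} = 1 \times 10^{1} + 0 \times 10^{0}, \qquad
    1010_{\text{two}} = 1 \times 2^{3} + 0 \times 2^{2} + 1 \times 2^{1} + 0 \times 2^{0} = 10_{\text{ten}}

A fixed handful of symbols, reused with place value, can name any number; set the base to two and the digits are exactly the bits.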

Another information processing revolution is the invention of written language. It's hard to argue that written language is not an information-processing revolution of the first magnitude. Another of my favorites is the first sexual revolution; that is, the discovery of sex by a living organism. One of the problems with life is that if you don't have sex, then the primary means of evolution is via mutation. Almost 99.9% of mutations are bad. Being from a mechanical engineering department, I would say that when you evolve only by mutation, you have an engineering conflict: your mechanism for evolution happens to have all sorts of negative effects. In particular, the two prerequisites for life – evolve, but maintain the integrity of the genome – collide. This is what's called a coupled design, and that's bad. However, if you have sexual reproduction, then you can combine genes from different genomes and get lots of variation without, in principle, ever having to have a mutation. Of course, you still have mutations, but you get a huge amount of variation for free.

I wrote a paper a few years ago that compared the evolutionary power of human beings to that of bacteria. The point of comparison was the number of bits per second of new genetic combinations that a population of human beings generated, compared with the number generated by a culture of bacteria. A culture of bacteria in a swimming pool of seawater has about a trillion bacteria, reproducing once every thirty minutes. Compare this with the genetic power of a small town with a few thousand people in New England — say Peyton Place — reproducing every thirty years. Despite the huge difference in population, Peyton Place can generate as many new genetic combinations as the culture of bacteria a billion times more numerous. This assumes that the bacteria generate new combinations only via mutation, which of course is not strictly true, but for this purpose we will not discuss bacteria having sex. In daytime TV the sexual recombination and selection happens much faster, of course.

Sexual reproduction is a great revolution.   Then of course, there's the grandmother or granddaddy of all information processing revolutions, life itself. The discovery, however it came about, that information can be stored and processed genetically and that this could be used to encode functions inside an organism that can reproduce is an incredible revolution. It happened four to five billion years ago on Earth, maybe earlier if one believes that life developed elsewhere and then was transported here.  At any rate, since the universe is only 13.8 billion years old, it happened sometime in the last 13.8 billion years.

We forgot to talk  about the human brain (or should I say, my brain forgot to talk about the brain?).  There are many information-processing revolutions, and I'm presumably leaving out many thousands that we don't even know about, but which were equally important as the ones we've discussed.

To pull a Kuhnian maneuver, the main thing that I'd like to point out about these information processing revolutions is that each one arises out of the technology of the previous one. Electronic information processing, for instance, comes out of the notion of written language, of having zeroes and ones, the idea that you can make machines to copy and transmit information. A printing press is not so useful without written language. Without spoken language, you wouldn't come up with written language. It's hard to speak if you don't have a brain. And what are brains for but to help you have sex? You can't have sex without life. Music came from the ability to make sound, and the ability to make sound evolved for the purpose of having sex. You either need vocal cords to sing with or sticks to beat on a drum with. To make sound, you need a physical object. Every information processing revolution requires either living systems, electromechanical systems, or mechanical systems. For every information processing revolution, there is a technology.

OK, so life is the big one, the mother of all information processing revolutions.  But what revolution occurred that allowed life to exist? I would claim that, in fact, all information processing revolutions have their origin in the intrinsic computational nature of the universe. The first information processing revolution was the Big Bang. Information processing revolutions come into existence because at some level the universe is constructed of information. It is made out of bits.

Of course, the universe is also made out of elementary particles, unknown dark energy, and lots of other things. I'm not advocating that we junk our normal picture of the universe as being constructed out of quarks, electrons, and protons. But in fact it's been known, ever since the latter part of the 19th century, that every elementary particle, every photon, every electron, registers a certain number of bits of information.  Whenever two elementary particles bounce off of each other, those bits flip.  The universe computes. 

The notion that the universe is, at bottom, processing information sounds like some radical idea.  In fact, it's an old discovery, dating back to Maxwell, Boltzmann and Gibbs, the physicists who developed statistical mechanics from 1860 to 1900. They showed that, in fact, the universe is fundamentally about information. They, of course, called this information entropy, but if you look at their scientific discoveries through the lens of twentieth century technology, what in fact they discovered was that entropy is the number of bits of information registered by atoms.   So in fact, it's scientifically uncontroversial that the universe at bottom is processing information.  My claim is that this intrinsic ability of the universe to register and process information is actually responsible for all the subsequent information processing revolutions.

How do we think of information these days? The contemporary scientific view of information is based on the theories of Claude Shannon. When Shannon came up with his fundamental formula for information he went to the physicist and polymath John von Neumann and said, "What shall I call this?" and von Neumann said, "You'll call it H, because that's what Boltzmann called it," referring to Boltzmann's famous H Theorem. The founders of information theory were very well aware that the formulas they were using had been developed back in the 19th century to describe the motions of atoms. When Shannon talked about the number of bits in a signal that can be sent down a communications channel, he was using the same formulas to describe it that Maxwell and Boltzmann used to describe the amount of information, or the entropy, required to describe the positions and momenta of a set of interacting particles in a gas.
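The family resemblance is easy to state side by side: Shannon's information and the Gibbs-Boltzmann entropy are the same formula, differing only in the base of the logarithm and a constant carrying physical units (a standard identity, quoted here for reference):

    H = -\sum_i p_i \log_2 p_i \ \text{(bits)}, \qquad
    S = -k_B \sum_i p_i \ln p_i = (k_B \ln 2)\, H

One bit of information corresponds to k_B ln 2 of thermodynamic entropy.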

What is a bit of information? Let's get down to the question of what information is. When you buy a computer you ask how many bits its memory can register.  A bit comes from a distinction between two different possibilities. In a computer a bit is a little electric switch, which can be open or closed; or it's a capacitor that can be charged, which is called 1, or uncharged, which is called 0. Anything that has two distinct states registers a bit of information. At the elementary particle level a proton can have two distinct states: spin up or spin down. Each proton registers one bit of information.  In fact, the proton registers a bit whether it wants to or not, or whether this information is interpreted or not. It registers a bit merely by the fact of existing.  A proton possesses two different states and so registers a bit.

We exploit the intrinsic information processing ability of atoms when building quantum computers, because many of our quantum computers consist of arrays of protons interacting with their neighbors, each of which stores a bit. Each proton would be storing a bit of information whether we were asking it to flip that bit or not. Similarly, if you have a bunch of atoms zipping around, they bounce off each other. Take two helium atoms in a child's balloon. The atoms come together, and they bounce off each other, and then they move apart again. Maxwell and Boltzmann realized that there's essentially a string of bits attached to each of these atoms to describe its position and momentum. When the atoms bounce off each other the string of bits changes because the atoms' momentum changes. When the atoms collide, their bits flip.

The number of bits registered by each atom is well known and has been quantified ever since Maxwell and Boltzmann.  Each particle — for instance each of the molecules in this room — registers something on the order of 30 or 40 bits of information as it bounces around. This feature of the universe — that it registers and processes information at its most fundamental level — is scientifically  uncontroversial, in the sense that it has been known for 120 years and is the accepted dogma of physics.
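One way to see where a number of that size comes from (a textbook estimate, not a calculation made in the talk) is the Sackur-Tetrode entropy of a monatomic ideal gas:

    \frac{S}{N k_B} = \ln\!\left(\frac{V}{N \lambda_T^{3}}\right) + \frac{5}{2},
    \qquad \lambda_T = \frac{h}{\sqrt{2 \pi m k_B T}}

Dividing by ln 2 converts entropy per particle into bits per particle; for helium at room temperature and pressure this comes out to a little over 20 bits, and the extra rotational states of the diatomic molecules in air push the count into the range quoted above.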

The universe computes.  My claim is that this intrinsic information processing ability of the universe is responsible for the remainder of the information processing revolutions we see around us, from life up to electronic computers.  Let me repeat the claim: it's a scientific fact that the universe is a big computer. More technically, the universe is a gigantic information processor that is capable of universal computation.  That is the definition of a computer.

If he were here Marvin Minsky would say, "Ed Fredkin and Konrad Zuse back in the 1960s claimed that the universe was a computer, a giant cellular automaton." Konrad Zuse built the first programmable digital computer around 1940. He and Ed Fredkin at MIT came up with this idea that the universe might be a gigantic type of computer called a cellular automaton. This is an idea that has since been developed by Stephen Wolfram. The idea that the universe is some kind of digital computer is, in fact, an old claim as well.

Thus, my claim that the universe computes is an old one, dating back at least half a century. This claim could actually be substantiated from a scientific perspective. One could prove, by looking at the basic laws of physics, that the universe is or is not a computer, and if so, what kind of computer it is. We have very good experimental evidence that the laws of physics support computation. I own a computer, and it obeys the laws of physics, whatever those laws are. We know the universe supports computation, at least on a macroscopic scale. My claim is that the universe supports computation at its most tiny scale. We know that the universe processes information at this level, and we know that at the larger level it's capable of doing universal computations and creating things like human beings. The thesis that the universe is, at bottom, a computer is in fact an old notion. The work of Maxwell, Boltzmann, and Gibbs established the basic computational framework more than a century ago. But for some reason, the consequences of the computational nature of the universe have yet to be explored in a systematic way. What does it mean to us that the universe computes? This question is worthy of significant scientific investigation. Most of my work investigates the scientific consequences of the computational universe.

One of the primary consequences of the computational nature of the universe is that the complexity that we see around us arises in a natural way, without outside intervention.  Indeed, if the universe computes, complex systems like life must necessarily arise.  So describing the universe in terms of how it processes information, rather than describing it solely in terms of the interactions of elementary particles, is not some kind of empty exercise.  Rather, the computational nature of the universe has dramatic consequences.

Let's be more explicit about why something that's computationally capable, like the universe, must necessarily spontaneously generate the kind of complexity that's around us. There's a famous story, "Inflexible Logic," by Russell Maloney, which appeared in The New Yorker in 1940, in which a wealthy dilettante hears the phrase that if you had enough monkeys typing then they would type the works of Shakespeare. Because he's got a lot of money he assembles a team of monkeys and a professional trainer, and he has them start typing. At a cocktail party he has an argument with a Yale mathematician, who says that this is really implausible, because any calculation of the odds of this happening will show it will never happen. The gentleman invites the mathematician up to his estate in Greenwich, Connecticut, and he takes him to where the monkeys have just started to write out Tom Sawyer and Love's Labour's Lost. They're doing it without a single mistake. The mathematician is so upset that he kills all the monkeys. I'm not sure what the moral of this story is.

The image of monkeys typing on typewriters is quite old. I spent a fair amount of time this summer going over the Internet and talking with various experts around the world about the origins of this story. Some people ascribe it to Thomas Huxley in his debate with Bishop Wilberforce in 1860, after the appearance of The Origin of Species. From eyewitness reports of that debate it is clear that Wilberforce asked Huxley from which side of his family, his mother's or his father's, he was descended from an ape. Huxley said, "I would rather be descended from a humble ape than from a great gentleman who uses considerable intellectual gifts in the service of falsehood." A woman in the audience fainted when he said that. They didn't have R-rated movies back then.

Although Huxley made a stirring defense of Darwin's theory of natural selection during this debate, and although he did refer to monkeys, apparently he did not talk about monkeys typing on typewriters, because for one thing typewriters as we know them had barely been invented in 1860. The erroneous attribution of the image of typing monkeys to Huxley seems to have arisen because Arthur Eddington, in 1928, speculated about monkeys typing all the books in the British Museum. Subsequently, Sir James Jeans ascribed the typing monkeys to Huxley.

In fact, it seems to have been the French mathematician Emile Borel who came up with the image of typing monkeys, in 1907. Borel was the person who developed the modern mathematical theory of combinatorics. Borel imagined a million monkeys each typing ten characters a second at random. He pointed out that these monkeys could in fact produce all the books in all the richest libraries of the world. He then went on to dismiss the probability of their doing so as infinitesimally small.

It is true that the monkeys would, in fact, type gibberish. If you plug "monkeys typing" into Google, you'll find a website that will enlist your computer to emulate typing monkeys. The site lists records of how many monkey years it takes to type out the opening bits of various Shakespeare plays, and the current record is 17 characters of Love's Labour's Lost over 483 billion billion monkey years. Monkeys typing on typewriters generate random gobbledygook.
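The arithmetic behind numbers like that is easy to reproduce (a hedged illustration: the 30-character alphabet is an assumption made for the sake of the calculation; the million monkeys typing ten characters a second are Borel's figures):

    # Rough expected waiting time for monkeys to type a given string by chance.
    ALPHABET = 30             # assumed keyboard size: letters plus a few extras
    MONKEYS = 1_000_000       # Borel's million monkeys
    CHARS_PER_SECOND = 10     # each typing ten characters a second
    SECONDS_PER_YEAR = 3.15e7

    def expected_years(target_length):
        """Average wait for any monkey to produce a specific string by chance."""
        attempts_needed = ALPHABET ** target_length       # about 1 in 30^n per try
        keystrokes_per_year = MONKEYS * CHARS_PER_SECOND * SECONDS_PER_YEAR
        return attempts_needed / keystrokes_per_year

    print(f"{expected_years(17):.1e} years")   # a 17-character phrase: ~4e10 years
    print(f"{expected_years(40):.1e} years")   # one Shakespearean line: ~4e44 years

Even a single line takes unimaginably longer than the age of the universe, which was exactly Borel's point.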

Before Borel, Boltzmann advanced a 'monkeys typing' explanation for why the universe is complex. The universe, he said, is just a big thermal fluctuation. Like the flips of a coin, the universe is in fact just random information. His colleagues soon dissuaded him from this position, because it's obviously not so. If it were, then every new bit of information you got that you hadn't received before would be random. But when our telescopes look out in space, they get new information all the time and it's not random. Far from it: the new information they gather is full of structure. Why is that?

To see why the universe is full of complex structure, imagine that the monkeys are typing into a computer, rather than a typewriter. The computer,  in turn, rather than just running Microsoft Word, interprets what the monkeys type as an instruction in some suitable computer language, like Java.  Now, even though the monkeys are still typing gobbledygook, something remarkable happens.  The computer starts to generate complex structures. 

At first this seems odd: garbage in, garbage out.  But in fact, there are short, random looking computer programs that will produce very complicated structures.  For example, one short, random looking program will make the computer start proving all provable mathematical theorems.  A second short, random looking program will make the computer evaluate the consequences of the laws of physics. There are computer programs to do many things, and you don't need a lot of extra information to produce all sorts of complex phenomena from monkeys typing into a computer.

There's a mathematical theory called algorithmic information, which can be thought of as the theory of what happens when monkeys type into computers. This theory was developed in the early 1960s by Ray Solomonoff in Cambridge, Mass., Gregory Chaitin, who was then a 15-year-old enfant terrible at IBM in Brazil, and Andrey Kolmogorov, who was a famous Russian academic mathematician. Algorithmic information theory tells you the probability of producing complex patterns from randomly programmed computers. The bottom line is that if monkeys start typing into computers, there's a very high probability that they'll produce things like the laws of chemistry, autocatalytic sets, or prebiotic kinds of life. Monkeys typing into computers offer a reasonable explanation for why we have complexity in our universe.
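The central quantity in that theory is the algorithmic probability of a pattern: the chance that a universal computer U, fed random bits as its program, outputs the pattern x. A standard result (quoted here for orientation, not derived in the talk) ties it to the length of the shortest program for x:

    P(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)} \;\approx\; 2^{-K(x)}

Here \ell(p) is the length of program p and K(x) is the Kolmogorov complexity of x, the length of its shortest program. Anything with a short program, however elaborate its output, has an appreciable chance of appearing when the monkeys type into the computer.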

Monkeys typing into a computer have a reasonable probability of producing almost any computable form of order that exists. You would not be surprised in this monkey universe to see all sorts of interesting things arising. You might not get Hamlet, because something like Hamlet requires huge sophistication and the evolution of societies, etc. But things like the laws of chemistry, or autocatalytic sets, or some kind of prebiotic form of protolife are the kinds of things that you would expect to see happen.

To apply this explanation to the origin of complexity in our universe we need two things: a computer, and monkeys.  We have the computer, which is the universe itself.  As was pointed out a century ago, the universe registers and processes information systematically at its most fundamental level. The machinery is there to be typed on.  So all you need is monkeys.  Where do you get the monkeys?

The monkeys that program our universe are supplied by the laws of quantum mechanics. Quantum mechanics is inherently chancy. You may have heard Einstein's phrase, "God does not play dice." In the case of quantum mechanics, Einstein was, famously, wrong: God does play dice. In fact, it is just when God plays dice that these little quantum blips or fluctuations get programmed into our universe. For example, Alan Guth has done work on how such quantum fluctuations form the seeds for the formation of large-scale structure in the universe. Why is our galaxy here rather than somewhere a hundred million light years away? It's here because way back in the very, very, very, very early universe there was a little quantum fluctuation that made a slight over-density of matter somewhere near here. This over-density of matter was very tiny, but it was enough to make a seed around which other matter could clump. The structure that we see, like the large-scale structure of the universe, is in fact made by quantum monkeys typing.

We have all the ingredients, then, for a reasonable explanation of why the universe is complex. You don't require very complicated dynamics for the universe to compute. The computational dynamics of the universe can be very simple. Almost anything will work. The universe computes. Then, the universe is filled with little quantum monkeys, in the form of quantum fluctuations, that program it. Quantum fluctuations get processed by the intrinsic computational power of the universe and eventually give rise to the order that we see around us.


feature

Steven Pinker, Martin Nowak, J. Craig Venter, Lee Smolin, Alan Guth

SETH LLOYD: When I give talks, I am often asked for my definition of complexity.

I wrote my Ph.D. thesis partly on different ways of defining complexity. Although I have my favorites I wouldn't advocate one over the other. Basically the monkeys typing argument  for the generation of complexity simply says that you don't have to have a preferred definition of complexity. Any structure or set of structures that you would regard as being complex will be produced by this mechanism.

If you insist that I define complexity, though, I can do so. Charlie Bennett proposed a good definition of complexity called logical depth, which says that a complex structure is one that requires a lot of computation to be produced from a simple program. If you take that idea, then the stuff that the universe has generated is exactly that logically deep stuff. The programs are simple, the computations have been going on for a long time. In fact, I can tell you exactly how many ops the universe has performed on how many bits: by applying the physics of computation you find that the universe has performed ten to the one hundred and twenty elementary operations (e.g., bit flips) on ten to the ninety bits.  That's a lot of ops on a lot of bits. What we get as a result is logically deep stuff.
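The kind of estimate behind those two numbers (sketched here only in outline; the detailed accounting belongs to the physics-of-computation literature) combines the Margolus-Levitin bound on how fast a system of average energy E can perform elementary operations with the entropy count of its bits:

    \#\text{ops} \;\lesssim\; \frac{2 E t}{\pi \hbar},
    \qquad \#\text{bits} \;\approx\; \frac{S}{k_B \ln 2}

Plugging in the energy and entropy contained within the cosmological horizon, and the age of the universe for t, gives numbers of the order quoted.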

STEVEN PINKER: The claim that the universe is a computer would not have much empirical content if we could not conceive what it would mean for the universe not to be a computer. Is it worth distinguishing between things that we recognize as processing information as opposed to merely containing information?  Containing information just means that you have more than one possibility.

LLOYD: So you're happy with that notion of things containing information?

STEVEN PINKER: It seems to me that a computer is more than something that contains information, because everything contains information. As an information processor a computer would seem to be special in two ways. One is that the information it processes stands for something. It has a semantics as well as a syntax. Among information-processing systems we're familiar with, written language refers to sound, sound refers to concepts, brains process information about the environment, DNA codes information about amino acids and their sequences, and so on.

Also, having information that has been put into correspondence with something else, the information-processing system then attains some goal. That is, there are physical changes in the information-processor that by design (or its equivalent in evolved information-processors) are isomorphic with some relationship among the things that are represented, in some way that leads to some desirable outcome. In the case of computations it's solving equations; in the case of language it's communicating thoughts; in the case of the genetic code it's assembling functioning organisms, and so on. In all of those cases, when you have a well-defined semantics and a goal-directed physical process, it makes sense to talk about information processing, or in the human-made case, a computer.

But that wouldn't seem to apply to the universe. The states of all the elementary particles don't seem to stand for something else. Nor does the sequence of physical events map onto some orderly set of relationships. This would suggest that the universe is not a computer, although it contains information. Does that contradict what you're saying?

LLOYD: You've raised an important distinction. Many of the systems we regard as processing information, particularly sophisticated ones, have a notion of correspondence of a message with something else.

You seem to have a notion that computations are goal-directed. You're quite right that those kinds of features, having semantics and the notion that information corresponds to something else, are  more sophisticated. I regard those as emergent features that we can only ascribe to objects like living things, or perhaps to life itself. Those emergent features are very important.

However, it is possible for a system to register information without that information having some kind of semantic meaning. If a particle's spin can contain a bit, I would argue that you can also talk about information-processing without content.

Let me make an historical point: The great advance that Shannon made in discovering information theory was discovering that quantity of information could be stripped from its semantic content. If you ask how much information can be sent on a fiberoptic cable, you can answer that question without knowing what that information is about.  It could be MTV, it could be Romeo and Juliet — the number of bits per second traveling down the cable is the same. It is exactly by getting rid of the notion that semantic content is necessary to describe quantities of information that information theory and the mathematical theory of communication could arise.
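Shannon's capacity formula for a noisy channel makes the point explicit (the standard result, stated for reference): the number of bits per second the cable can carry,

    C = B \log_2\!\left(1 + \frac{S}{N}\right),

depends only on the bandwidth B and the signal-to-noise ratio S/N, not on what the bits happen to be about.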

STEVEN PINKER: Although Shannon did talk about information in terms of a correlation between what happens at the output end and what happened at the input end. They would have to correlate. It couldn't just be random bits at the input and random bits at the output.

LLOYD: Right, because if the bits were completely random then you would not have information. But what these bits referred to is unimportant for the quantity of information sent down the channel. Correlation is something that can be defined mathematically in the absence of any notion of what the bit means.

Still answering Steven's question, let me argue that in the same way that quantity of information is defined irrespective of semantic content, information processing is defined irrespective of semantic content or the notion that some higher order or purpose is taking place. An ordinary computer is just performing simple operations on a bit, flipping it depending on the state of another bit; that would happen regardless of whether there's some overall purpose to that bit flip, whether greater or lesser, or of any semantic content of those bits. If I take the nuclear spin of a proton in one of our quantum computers, and I flip it from spin up to spin down, I have just flipped a bit, and there may be no purpose whatsoever for it.

The question of whether information is being processed, or transformed, has a physical meaning completely apart from any mission or goal that this information is being processed for. In the same way that Shannon was able to say that you can disassociate the quantity of information from semantics, we can also strip information-processing, the notion that the bits are being flipped, from the notion that this is part of some goal-oriented process.

That's the sense in which I'm using the notion of information being processed, the physical process of information being transformed. It actually doesn't have to be part of some goal-oriented process.

STEVEN PINKER: I can see that you can define information processing in that way, so that everything is information-processing, in which case I wonder what kind of statement it is that the universe is an information processor.

The question is whether it is true by virtue of being circular. What I'm doing is offering a definition of information-processing such that it's not true that everything by definition is an information-processor. That allows me to make a statement of content. Is the universe an information-processor or not? I would think that the answer would be no, it isn't. At least, there's an interesting distinction to be made between DNA, computers, and printing presses on the one hand and the entire universe on the other. If you come up with a definition of information-processing and you can't make that distinction, then it raises the question of whether it means anything to say that the universe is a computer or information-processor.

LLOYD: I think we're in agreement that the statement that the universe is an information-processor is true by virtue of itself. You could call it circular, but I'll just call it true. What I'm trying to explore here are the implications of this fact. You could use your definition of information processing, which is a human-based picture, associated with ideas of language or preconceptions about life. I would just say that this information processing is the result of bits flipping, and then out of this arose life, human beings, etc. The interesting questions concern why we get these emergent features, like information that has semantic content and means something important. I certainly don't say that all bits are created equal. All bits are equal physically — they each register one bit — but some bits are a heck of a lot more important than others. I don't want you flipping the bits in my DNA.

MARTIN NOWAK: I am interested in the physical properties of the universe which might lead us to expect the possibility of life. Is this based on computation? Are you saying that certain structures in the universe can compute something while others cannot? Does this chair here compute?

LLOYD: Certain structures are better at computing than others, but the universe as a whole has this capability. Different pieces of the universe process information in different ways. The whole point about a universal computer is that it can process information in any possible way. Some of these ways of processing information are a heck of a lot more interesting than others. If human beings present very interesting questions of semantics and content and purposeful information processing, I think that's good. That's what's interesting about human beings.

I take a very physical definition of information. If the universe is computing, we have to see what the consequences of that are. The consequence is that we get a very diverse universe, in which we regard some parts of the computation as interesting, and some not. This chair computes itself, and you wouldn't want it to stop doing that, because if you sat on it, and the chair stopped computing its ability to hold you up – bang. You'd be on the floor. So that's pretty good computation too.

CRAIG VENTER: Your argument is basically that this computer is driving us toward order and, I would argue, toward life as a natural consequence of that. So where do decay and entropy enter into this?

LLOYD: That's really a key question. Most of the processes that you see around you, particularly in life, have used the increase of entropy as a powerful mechanism that drives pieces of the system to ordered states. There's a whole physical theory of how you can get order in some part of the system at the expense of creating disorder elsewhere. According to the second law of thermodynamics the total amount of information in the system never decreases. You can't make order here without pumping disorder out elsewhere. This may be a way of trying to discover what happened before life existed. Rather than looking for systems of genetic information we should be looking for systems that were capable of controlling the way that you create order in one place and pump disorder to another place. What kinds of systems do that? That is actually a key part of how you create order. The process of creating order has to respect the laws of physics, and that process exploits the second law of thermodynamics to create order in one place while creating disorder elsewhere.
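In entropy bookkeeping that trade reads (a standard statement of the second law, added here for reference):

    \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\geq\; 0

A patch of the world can become more ordered, with \Delta S_{\text{system}} < 0, only by exporting at least as much entropy to its surroundings; every k_B \ln 2 of entropy pumped out corresponds to one bit of disorder dumped elsewhere.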

MARTIN NOWAK: The statement that something is a universal Turing machine requires a mathematical proof. Imagine a box of ideal gas. Is that a universal Turing machine?

LLOYD: No, typically not. Not on its own. To demonstrate that something is a universal Turing machine is not a content-free statement. You can actually ask Yes or No questions. Is the universe a classical cellular automaton as was suggested by Zuse and Fredkin? The answer is almost certainly not, because classical cellular automata can't reproduce quantum mechanical effects in any efficient way. The statement that the universe is a universal computer is not a content-free statement. When you investigate what that statement means in detail, and how the universe actually computes, you can rule out certain kinds of computation as the basis for what it's doing. You actually require a proof that the laws of physics as they stand are computationally universal in a reasonable way.

For a bunch of particles in a gas, Ed Fredkin and Norm Margolus pointed out that particles colliding off each other could perform universal computation. The problem with this model of computation is that it's chaotic. The collision of molecules is a chaotic process; the information that the molecules contain degrades very rapidly, and the molecules in this room are not factoring some large number, or reporting back to Microsoft on what we're doing.

LEE SMOLIN: I'm trying to understand the same thing that you're trying to understand: how it is that complexity might come out of the laws of physics. If you agree that there are two very distinct notions of processing information, the one you gave and the one Steve gave — one defines semantic content and goal-oriented behavior and one just defines evolution of the system in which we identify "bits of information" — the question we're all interested in is how the first kind gives rise to the second, or the reverse.

LLOYD: Given my contempt for theories of everything I would certainly not try to suggest that the computational theory of the universe I have advocated here solves all our problems. I disagree with the notion of insisting on semantic content because it's very hard to make the kind of definitions of information-processing that rely on semantic content precise. Philosophers of language have been trying to make such definitions precise for many, many years, and down that road lies madness. To say what it means for something to have a semantic content is hard. What you mean by goal-oriented behavior is a part of semantic content. That's why I really would like to avoid a definition of that sort, because I don't regard it as being a definition that can be made scientifically precise.

But why does this low-level information-processing that pervades everywhere in the world spontaneously give rise to this kind of high-level information processing where you have language, semantics, and goal-oriented behavior? That's, indeed, what we'd like to find out. This argument that you spontaneously produce complicated structures by no means solves that question, because there's a very detailed history of the way in which this complex behavior erupted in the first place. The nature of this history is a very interesting question, and I certainly wouldn't say that it's been solved at all.

LEE SMOLIN: Here are two possible quantum theories of gravity: Quantum theory of gravity one has a basis of states given by some labeled graphs, combinatorial graphs. Quantum theory of gravity two has a basis of states given by labeled graphs embedded in a three-dimensional manifold. Since you believe in quantum mechanics this means that states have to be normalized. It means you have to sum over certain numbers and get 1. In the first case, the graph isomorphism problem probably could be solved, and we could write an algorithm that a computer could run to check whether a quantum state is normalizable or not. In the second case it's conjectured that the embedding of graphs in three-manifolds is not a problem that's solvable by a finite algorithm. You would have to be committed to the second kind of theory being wrong and the first kind of theory being right, because if the second kind of theory were right then even testing whether a quantum state was normalizable is something that a digital computer could never do. Therefore, if the universe were a digital computer, it could not learn that kind of quantum mechanics.

LLOYD: No, I actually disagree with that. The process of testing whether a theory is correct on a digital computer is very different from the process of a digital computer being something and doing something. This, by the way, is a distinct type of unpredictability from that involved in quantum mechanics. If you have something that is a computer performing a universal digital computation, then Gödel's theorem and the Halting Problem guarantee that the only way to see what it's going to do is to let it evolve and to see what happens. Even without any kind of additional lack of determinism in terms of quantum mechanics or chaos, the fact that the universe is computing makes its future behavior — and in particular its future behavior about things like complex systems, which is what we really care about — intrinsically unpredictable. The only way to see what's going to happen is to wait and see.

ALAN GUTH: When I hear about the universe as a computer and all that, I don't really know what that means that's different from saying that the universe can be described mathematically. I would think that anything that can be described mathematically is the same sort of thing as a computer.

LLOYD: There's a technical difference between something that is described mathematically and something that is capable of universal computation. You can build machines, or indeed laws of physics, that are not capable of universal computation and they could not support things like language, etc. We don't have that kind of universe. There's something called the Chomsky hierarchy, which is a hierarchy of information-processing devices, and as you move up the hierarchy you get ever more sophisticated. At the top of the hierarchy are universal Turing machines. Our universe seems to be, in terms of its information-processing ability, at the top of the Chomsky hierarchy. But it's quite easy to build toy models that don't have this capability.

ALAN GUTH: But it's easy to build models that do have the capability.

LLOYD: Once you have some kind of non-linear interaction between things then you typically get it.

ALAN GUTH: Okay, but then the universal Turing machine idea is telling us very little about the universe.

LLOYD: The fact that people seem to regard this whole statement that the universe is processing information as self-evident, and that it is almost self-evident that it's a universal Turing machine, is good. All I'm arguing is that we should actually look seriously at the implications of this self-evident fact.


On "Digital Maoism: The Hazards of the New Online Collectivism" By Jaron Lanier

Responses to Lanier's essay from Douglas Rushkoff, Quentin Hardy, Yochai Benkler, Clay Shirky, Cory Doctorow, Kevin Kelly, Esther Dyson, Larry Sanger, Fernanda Viegas & Martin Wattenberg, Jimmy Wales, George Dyson, Dan Gillmor, Howard Rheingold


Now, another big idea is taking hold, but this time it's more painful for some people to embrace, even to contemplate. It's nothing less than the migration from individual mind to collective intelligence. I call it "here comes everybody", and it represents, for good or for bad, a fundamental change in our notion of who we are. In other words, we are witnessing the emergence of a new kind of person.

Lately, there's been a lot of news concerning the Wikipedia and other user-generated websites such as MySpace and Flickr.

For example, in today's Wall Street Journal "portals" column, Lee Gomes ("Why Getting the User To Create Web Content Isn't Always Progress", June 7, 2006, p B1) writes:

"At first, it seemed like the sort of silly, self-serving thing that many companies are wont to say about their products. Only later did I realize it represented the opening of another front in the battle against traditional culture being waged by certain parts of the technology industry."

"Mash-ups", which allow active (vs. "passive") participation, is another term for "'user-generated content', referred to by the smart set as "UGC:"

...for a big part of the tech world, these sorts of mash-ups are becoming the highest form of cultural production.

This is most clearly occurring in books. Most of us were taught that reading books is synonymous with being civilized. But in certain tech circles, books have come to be regarded as akin to radios with vacuum tubes, a technology soon to make an unlamented journey into history's dustbin.

The New York Times Magazine recently had a long essay on the future of books that gleefully predicted that bookshelves and libraries will cease to exist, to be supplanted by snippets of text linked to other snippets of text on computer hard drives. Comments from friends and others would be just as important as the original material being commented on; Keats, say.

Yesterday, at a panel discussion at a Newsweek Conference on Science, Technology and Education, the moderator, Brian Williams, Anchor and Managing Editor, NBC Nightly News, spent a great deal of his time at the hour-long panel disparaging the Wikipedia.

Williams noted that NBC Nightly News was the largest news provider in America, reaching 9 to 12 million Americans, vastly more than any of the discrete digital audiences for websites; when he goes to his office and walks in the door, people are there and they are gathering the news. They are professionals, you know their names, and this is very different than anonymous contributors to the Wikipedia or other user-generated websites.

On Monday of this week, in "Digital Publishing Is Scrambling the Industry's Rules" (June 5, 2006), Motoko Rich writes:

"Yochai Benkler, a Yale University law professor and author of the new book "The Wealth of Networks: How Social Production Transforms Markets and Freedom" (Yale University Press), has gone even farther: his entire book is available — free — as a download from his Web site. Between 15,000 and 20,000 people have accessed the book electronically, with some of them adding comments and links to the online version.

"Mr. Benkler said he saw the project as "simply an experiment of how books might be in the future." That is one of the hottest debates in the book world right now, as publishers, editors and writers grapple with the Web's ability to connect readers and writers more quickly and intimately, new technologies that make it easier to search books electronically and the advent of digital devices that promise to do for books what the iPod has done for music: making them easily downloadable and completely portable.

"Not surprisingly, writers have greeted these measures with a mixture of enthusiasm and dread. The dread was perhaps most eloquently crystallized last month in Washington at BookExpo, the publishing industry's annual convention, when the novelist John Updike forcefully decried a digital future composed of free downloads of books and the mixing and matching of 'snippets' of text, calling it a 'grisly scenario.' "

John Updike's comments were also reported by Bob Thompson in The Washington Post ("Explosive Words", May 22, 2006, p C01):

"Unlike the commingled, unedited, frequently inaccurate mass of "information" on the Web, he said, "books traditionally have edges." But "the book revolution, which from the Renaissance on taught men and women to cherish and cultivate their individuality, threatens to end in a sparkling pod of snippets".

"So, booksellers," he concluded, "defend your lonely forts. Keep your edges dry. Your edges are our edges. For some of us, books are intrinsic to our human identity."


About ten years ago, the big realization (as expounded by Wired, Nicholas Negroponte, among others) was a perceptual migration from atoms to bits, from the world of the physical to the world of information.

Now, another big idea is taking hold, but this time it's more painful for some people to embrace, even to contemplate. It's nothing less than the migration from individual mind to collective intelligence. I call it "here comes everybody", and it represents, for good or for bad, a fundamental change in our notion of who we are. In other words, we are witnessing the emergence of a new kind of person.

I've been tracking this development since 1969 when I wrote in By The Late John Brockman:

"The mass. The human mass. The impossible agglomerate mass. The incommunicable human mass. The people." From their places masses move, stark as laws. Masses of what? One does not ask. There somewhere man is too, vast conglomerate of all of nature’s kingdoms, as lonely and as bound."* The impossible people.

*Beckett, Molloy, p. 110

This isn't going away. Rather than demonize, we need to think through what's going on.

In this regard, no one thinks more deeply or more carefully about the social and economic effects of Internet technologies than Clay Shirky, a consultant and NYU professor. His writings, mostly web-based, focus on the rise of decentralized technologies such as peer-to-peer, web services, and wireless networks that are leading us into a new world of user-generated content. As adjunct professor in NYU's graduate Interactive Telecommunications Program (ITP), he teaches courses on the interrelated effects of social and technological network topology — how our networks shape culture and vice versa.

Shirky commands wide respect within the user-generated web community, both for his authoritative writings and for his leadership role as a speaker. I reached out to him for help in organizing a serious response to Jaron Lanier's essay, and he graciously accepted. The people he assembled, a "who's who" of the movers, shakers, and pundits of this new universe of collective intelligence, of the "hive mind", have written essays that are at once unfailingly interesting, maddening, thought-provoking, and depressing, and that offer a window not to the future but to where we are today.

I am now pleased to turn the proceedings over to Clay Shirky with warm thanks from Edge for his help in organizing this project. But before I get off the stage, one final note.

Shakespeare's snippets pound in my head, as I ask myself Banquo's question...

"MACBETH
...Say from whence
You owe this strange intelligence? or why
Upon this blasted heath you stop our way
With such prophetic greeting? Speak, I charge you.
Witches vanish

"BANQUO
The earth hath bubbles, as the water has,
And these are of them. Whither are they vanish'd?

"MACBETH
Into the air; and what seem'd corporal melted
As breath into the wind. Would they had stay'd!

"BANQUO
Were such things here as we do speak about?
Or have we eaten on the insane root
That takes the reason prisoner?"

JB


On "Digital Maoism: The Hazards of the New Online Collectivism" By Jaron Lanier

Introduction by Clay Shirky

When Jaron Lanier's piece on "Digital Maoism" first went out on Edge, I knew he'd be generating hundreds of responses all over the net. After talking to John Brockman, we decided to try to capture some of the best responses here.

Lanier's piece hits a nerve because human life always exists in tension between our individual and group identities, inseparable and incommensurable. For ten years now, it's been apparent that the rise of the digital was providing enormous new powers for the individual. It's now apparent that the world's networks are providing enormous new opportunities for group action.

Understanding how these cohabiting and competing revolutions connect to deep patterns of intellectual and social work is one of the great challenges of our age. The breadth and depth of the responses collected here, ranging from the broad philosophical questions to reckonings of the ground truth of particular technologies, is a testament to the complexity and subtlety of that challenge.

Clay Shirky



DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box: Innovation from the Inside Out

Despite comparing Wikipedia with the likes of American Idol, this is a more reasoned and hopeful argument than it appears at first glance. Lanier is not condemning collective, bottom-up activity as much as trying to find ways to check its development. In short, it's an argument for the mindful intervention of individuals in the growth and acceleration of this hive-mind thing called collective intelligence.

Indeed, having faith in the beneficence of the collective is as unpredictable as having blind faith in God or a dictator. A poorly developed group mind might well decide any one of us is a threat to the mother organism deserving of immediate expulsion.

Still, I have a hard time fearing that the participants of Wikipedia or even the call-in voters of American Idol will be in a position to remake the social order anytime soon. And I'm concerned that any argument against collaborative activity should look fairly at the real reasons why some efforts turn out the way they do. Our fledgling collective intelligences are not emerging in a vacuum, but on media platforms with very specific biases.

First off, we can't go on pretending that even our favorite disintermediation efforts are revolutions in any real sense of the word. Projects like Wikipedia do not overthrow any elite at all, but merely replace one elite — in this case an academic one — with another: the interactive media elite. Just because the latter might include a 14-year-old with an Internet connection in no way changes the fact that he's educated, techno-savvy, and enjoying enough free time to research and post to an encyclopedia for no pay. Although he is not on the editorial board of the Encyclopedia Britannica, he's certainly in as good a position as anyone to get there.

While I agree with Lanier and the recent spate of articles questioning the confidence so many Internet users now place in user-created databases, these are not grounds to condemn bottom-up networking as a dangerous and headless activity — one to be equated with the doomed mass actions of former communist regimes.

Kevin's overburdened "hive mind" metaphor notwithstanding, a networked collaboration is not an absolutely level playing field inhabited by drones. It is an ecology of interdependencies. Take a look at any of these online functioning collective intelligences — from eBay to Slashdot — and you'll soon get a sense of who has gained status and influence. And in most cases, these reputations have been won through a process much closer to meritocracy, and through a fairer set of filters, than the ones through which we earn our graduate degrees.

While it may be true that a large number of current websites and group projects contain more content aggregation (links) than original works (stuff), that may as well be a critique of the entirety of Western culture since post-modernism. I'm as tired as anyone of art and thought that exists entirely in the realm of context and reference — but you can't blame Wikipedia for architecture based on winks to earlier eras or a music culture obsessed with sampling old recordings instead of playing new compositions.

Honestly, the loudest outcry over our Internet culture's inclination towards re-framing and the "meta" tends to come from those with the most to lose in a society where "credit" is no longer a paramount concern. Most of us who work in or around science and technology understand that our greatest achievements are not personal accomplishments but lucky articulations of collective realizations. Something in the air. (Though attributed to just two men, discovery of the DNA double-helix was the result of many groups working in parallel, and no less a collective effort than the Manhattan Project.) Claiming authorship is really just a matter of ego and royalties. Even so, the collective is nowhere near being able to compose a symphony or write a novel — media whose very purpose is to explode the boundaries between the individual creator and his audience.

If you really want to get to the heart of why groups of people using a certain medium tend to behave in a certain way, you'd have to start with an exploration of biases of the medium itself. Kids with computers sample and recombine music because computers are particularly good at that — while not so very good as performance instruments. Likewise, the Web — which itself was created to foster the linking of science papers to their footnotes — is a platform biased towards drawing connections between things, not creating them. We don't blame the toaster for its inability to churn butter.

That's why it would be particularly sad to dismiss the possibilities for an emergent collective intelligence based solely on the early results of one interface (the Web) on one network (the Internet) of one device (the computer). The "hive mind" metaphor was just one early, optimistic futurist's way of explaining a kind of behavior he hadn't experienced before: that of a virtual community.

Now sure, there may have been a bit too many psychedelics making their way through Silicon Valley at the same time as Mac Classics and copies of James Gleick's Chaos. At the early breathless phase of any cultural renaissance, there are bound to be some teleologically suspect prognostications from those who are pioneering the fringe. And that includes you and me, both.

Still, what you saw so clearly from the beginning is that the beauty of the Internet is its ability to connect people to one another. It's not the content, it's the contact.

The Internet itself holds no philosopher's stone — there's no God to emerge from the medium. I'm with you, there. But there is something that can emerge from people engaging with one another in ways they hadn't dreamed possible, before. While the Internet itself may never produce the genuinely cooperative society so many of us yearn for, it does give us the opportunity to model the kinds of behaviors that may work back here in the real world.

In any case, the true value of the collective is not its ability to go "meta" or to generate averages but rather, quite the opposite, to connect strangers. Already, new sub-classifications of diseases have been identified when enough people with seemingly unique symptoms find one another online. Craigslist's founder is a hero online not because he has gone "meta" but because of the very real and practical connections he has fostered between people looking for jobs, homes, or families to adopt their pets. And it wasn't Craig's intellectual framing that won him this reputation, but the time and energy he put into maintaining the social cohesion of his online space.

Meanwhile, offline collectivist efforts at dis-intermediating formerly top-down systems are also creating new possibilities for everything from economics to education. Local currencies give unemployed Japanese people the opportunity to spend time caring for elders near their homes so that someone else can care for their own family members in distant regions. The New York Public School system owes any hope of a future to the direct intervention of community members, whose commune-era utopian "free school" models might make us hardened cynics cringe — but energize teachers and students alike.

I'm troubled by American Idol and the increasingly pandering New York Times as much as anyone, but I don't blame collaboration or techno-utopianism for their ills. In these cases, we're not watching the rise of some new dangerous form of digital populism, but the replacement of key components of a cultural ecology — music and journalism — by the priorities of consumer capitalism.

In fact, the alienating effects of mass marketing are in large part what motivate today's urge toward collective activity. If anything, the rise of online collective activity is itself a check — a low-pass filter on the anti-communal effects of political corruption, market forces, and strident individualism.

One person's check is another person's balance.

The "individual" Lanier would have govern the collective is itself a social construction born in the Renaissance, celebrated via democracy in the Enlightenment and since devolved into the competition, consumption, and consumerism we endure today.

While the tags adorning Flickr photographs may never constitute an independently functioning intelligence, they do allow people to participate in something bigger than themselves, and foster a greater understanding of the benefits of collective action. They are a desocialized society's first baby steps toward acting together with more intelligence than people can alone.

And watching for signs of such intelligent life is anything but boring.


QUENTIN HARDY
Silicon Valley bureau chief of Forbes Magazine; Lecturer, U.C. Berkeley's School of Information

Jaron Lanier contends with several ideas at once. What I take away is:

• That Wikipedia is the best possible example of the collective mind. It may be the worst.

As he indicates, and others have shown before, successful collectives are something like tribes, with like-minded people assuming a common culture which they see as both valuable and fragile. It has rules, boundaries and guardians. Wikipedia is unbounded and (for the most part) ungoverned. It is a great experiment, the kind of thing that is necessary when learning to use a new tool, but that does not make it the best model.

This collective, it is worth noting, is made of those individuals he cherishes. The "crowd" does not keep acclaiming Mr. Lanier's skills behind the camera; one or more people do. Even in a healthy financial market, everybody's favorite collective mind, there is plenty of mispricing.

• That it would be an absolute good if all error were eliminated. Most errors, in society and nature, are unfortunate. The process is necessary. The ill-formed and stillborn bird is the other side of species creation. We have to have error if Columbus is ever to sail off for India and so find America, or if Leibniz is to misunderstand the I Ching, thereby exploring binary math.

• That existing definitions of the self and the crowd are permanent. Our new tool for communication and computation may take us away from distinct individualism, and towards something closer to the tender nuance of folk art or the animal energy of millenarianism. Either way, however, both "individual" and "folk" should stand as metaphors. Possibly a third thing is happening, as yet poorly understood.

At times like that, it is easy to bemoan losses and overestimate gains. Yet while the electronically enhanced collective mind is novel, the discovery of new ways to be is not a new phenomenon in the history of human consciousness. Rather it is typical of revolutionary advances in transport or communications (and responsible for most market manias and political upheavals, as well as much progress).

• That collectives will purportedly resolve one of the key problems of an era of media onslaught: What is successful filtering? With so much information at hand, what should we consume? Popurls doesn't offer much information on advances in diabetes management, but I read three newspapers a day and I missed it too.

It's certainly unclear that collectives will eliminate the culture of celebrity, one of the more woeful primary filters of our time. But bashing American Idol (another unbounded collective) for not advancing the cause of pop music is just strange. Pop (like most things) never threw out endless great stuff. Clay Aiken is not supposed to be John Lennon; he is the current version of Disco Tex and the Sex-O-Lettes.

• That existing hierarchies are the best places to test the efficacy of the new communications tools. This is like asking the Catholic Church, circa 1475, about the uses of the printing press. Mr. Lanier is probably consulting for wealthy companies and governments, which would rather co-opt the collective phenomenon than see it authentically transform the world they know. That may be why the results there are often uninspiring.

All that said, massive kudos for suggesting some rules around collectives (e.g., "at best when not defining its own questions") that he moves toward in the last third of the essay. Getting this right will take years. That is a real service that can't happen without some belief that there is deep value in the collective.

Which is to say that Mr. Lanier does believe in the crowd, or he would not go to the trouble. What I suspect gets up his nose is the recurring failure of "the crowd," no matter what the century or the tools in question, to be clear-eyed about where it is in History: Usually someplace in the middle, but acting like we are at the beginning or end of something major, something world historic. Something that will finally afford us, as individuals and a species, a kind of certainty in Time. Something that will bring absolute judgment after all the generations. Something that will relieve each individual of the burden of being good.


YOCHAI BENKLER
Professor of Law, Yale Law School; Author, The Wealth of Networks: How Social Production Transforms Markets and Freedom

Extracting Signal From Noisy Spin

I agree with much of what Jaron Lanier has to say in this insightful essay. The flashy title and the conflation of arguments, however, conspire to suggest that he offers a more general attack on distributed, cooperative networked information production, or what I have called peer production, than Lanier in fact offers.

What are the points of agreement? First, Lanier acknowledges that decentralized production can be effective at certain tasks. In these he includes science-oriented definitions in Wikipedia, where the platform more easily collates the talents, availability, and diverse motivations throughout the network than a slower-moving organization like Britannica can; free and open source software, though perhaps more in some tasks that are more modular and require less of an overall unifying aesthetic, such as interface. Second, he says these do not amount to a general "collective is always better," but rather to a system that itself needs to be designed to guard against mediocre or malicious contributions through implementation of technical fixes, what he calls "low pass filters." These parallel the central problem characterized by the social software design movement, as one can see in Clay Shirky's work. Those familiar with my own work in Coase's Penguin and since will notice that I only slightly modified Lanier's language to show the convergence of claims. Where, then, is the disagreement?

Lanier has two driving concerns. The first is deep: loss of individuality, devaluation of the unique, responsible, engaged individual as the core element of a system of information, knowledge, and culture. The second strikes me as more superficial, or at least as more time- and space-bound. That is the concern with the rise of constructs like "hive mind" and metafilters and efforts to build business models around them.

Like Lanier, I see individuals as the bearers of moral claims and the sources of innovation, creativity, and insight. Unlike Lanier, I have argued that enhanced individual practical capabilities represent the critical long term shift introduced by the networked information economy, improving on the operation of markets and governments in the preceding century and a half. This is where I think we begin to part ways. Lanier has too sanguine a view of markets and governments. To me, markets, governments (democratic or otherwise), social relations, technical platforms are all various and partly overlapping systems within which individuals exist. They exhibit diverse constraints and affordances, and enable and disable various kinds of action for the individuals who inhabit them. Because of cost constraints and organizational and legal adaptations in the last 150 years, our information, knowledge, and cultural production system has taken on an industrial form, to the exclusion of social and peer-production. Britney Spears and American Idol are the apotheosis of that industrial information economy, not of the emerging networked information economy.

So too is the decline he decries for the New York Times. In my recent work, I have been trying to show how the networked public sphere improves upon the mass mediated public sphere along precisely the dimensions of Fourth Estate function that Lanier extolls, and how the distributed blogosphere can correct, sometimes, at least, the mass media failings. It was, after all, Russ Kick's Memory Hole, not the New York Times, that first broke pictures of military personnel brought home in boxes from Iraq. It was one activist, Bev Harris with her website blackboxvoting, an academic group led by Avi Rubin, a few Swarthmore students, and a network of thousands who replicated the materials about Diebold voting machines after 2002 that led to review and recall of many voting machines in California and Maryland. The mainstream media, meanwhile, sat by, dutifully repeating the reassurances of officials who bought the machines and vendors who sold them. Claims that the Internet democratizes are, by now, old.

Going beyond the 1990s naive views of democracy in cyberspace, on the one hand, and the persistent fears of fragmentation and the rise of Babel, on the other hand, we can now begin to interpret the increasing amount of data we have on our behavior on the Web and in the blogosphere. What we see in fact is that we are not intellectual lemmings. We do not meander about in the intellectual equivalent of Brownian motion. We cluster around topics we care about. We find people who care about similar issues. We talk. We link. We see what others say and think. And through our choices we develop a different path for determining what issues are relevant and salient, through a distributed system that, while imperfect, is less easily corrupted than the advertising supported media that dominated the twentieth century.

Wikipedia captures the imagination not because it is so perfect, but because it is reasonably good in many cases: a proposition that would have been thought preposterous a mere half-decade ago. The fact that it is now compared not to the mainstream commercial encyclopedias like Grolier's, Encarta, or Columbia, but to the quasi-commercial, quasi-professional gold standard of the Britannica is itself the amazing fact. It is, after all, the product of tens of thousands of mostly well-intentioned individuals, some more knowledgeable than others, but almost all flying in the face of homo economicus and the Leviathan combined. Wikipedia is not faceless, by and large. Its participants develop, mostly, persistent identities (even if not by real name) and communities around the definitions.

They may not be a perfect, complete replacement for Britannica. But they are an alternative, with different motivations, accreditation, and organization. They represent a new solution space to a set of information production problems that we need to experiment with, learn, and develop; but which offers a genuinely alternative form of production to markets, firms, or governments, and as such an uncorrelated or diverse system of action in the information environment. Improvements in productivity and freedom inhere in this diversity of systems available for human action, not in a generalized claim of superiority for one of these systems over all the others under all conditions.

This leaves the much narrower set of moves that are potentially the legitimate object of Lanier's critique: efforts that try to depersonalize the "wisdom of crowds," unmooring it from the individuals who participate; try to create ever-higher-level aggregation and centralization in order to "capture" that "wisdom;" or imagine it as emergent in the Net, abstracted from human minds. I'm not actually sure there is anyone who genuinely holds so hyperbolic a version of this view. I will, in any event, let others defend it if they do hold such a view.

Here I will only note that the centralized filters Lanier decries are purely an effort to recreate price-like signaling in a context — information in general, and digital networks in particular — where the money-based price system is systematically dysfunctional. It may be right or wrongheaded; imperfect or perfect. But it is not collectivism.

Take Google's algorithm. It aggregates the distributed judgments of millions of people who have bothered to host a webpage. It doesn't take just any judgment, only those that people care enough about to exert effort to insert a link in their own page to some other page. In other words, relatively "scarce" or "expensive" choices. It doesn't ask the individuals to submerge their identity, or preferences, or actions in any collective effort. No one spends their evenings in consensus-building meetings. It merely produces a snapshot of how they spend their scarce resources: time, web-page space, expectations about their readers' attention. That is what any effort to synthesize a market price does. Anyone who claims that they have found transcendent wisdom in the pattern emerging from how people spend their scarce resources is a follower of Milton Friedman, not of Chairman Mao.
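(An aside for the mechanically minded: the kind of link-based aggregation described here can be sketched in a few lines of Python. This is a toy illustration of turning scarce linking choices into a ranking, not Google's actual algorithm; the miniature "web" of pages a through d is invented for the example.)

    # Toy sketch of link-based aggregation: each page "spends" its vote by
    # linking to other pages, and repeated averaging turns those scarce
    # choices into a ranking (a simplified PageRank-style computation).
    links = {              # a hypothetical miniature web
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }

    damping = 0.85
    rank = {page: 1.0 / len(links) for page in links}

    for _ in range(50):    # iterate until the scores settle
        new_rank = {}
        for page in links:
            inbound = sum(rank[p] / len(outs)
                          for p, outs in links.items() if page in outs)
            new_rank[page] = (1 - damping) / len(links) + damping * inbound
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))

The point of the sketch is the one made above: nobody surrenders anything to a collective; the score is merely a summary of where individuals chose to spend their links.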

At that point, Lanier's critique could be about the way in which markets of any form quash individual creativity and unique expression; it might be about how excessive layers of filtering degrade the quality of information extracted from people's behavior with their scarce resources, so that these particular implementations are poor market-replacement devices. In either case, his lot is with those of us who see the emergence of social production and peer production as an alternative to both state-based and market-based, closed, proprietary systems, which can enhance creativity, productivity, and freedom.

To conclude: The spin of Lanier's piece is wrong. Much of the substance is useful. The big substantive limitation I see is his excessively rosy view of the efficacy of the price system in information production. Networked-based, distributed, social production, both individual and cooperative, offers a new system, alongside markets, firms, governments, and traditional non-profits, within which individuals can engage in information, knowledge, and cultural production. This new modality of production offers new challenges, and new opportunities. It is the polar opposite of Maoism. It is based on enhanced individual capabilities, employing widely distributed computation, communication, and storage in the hands of individuals with insight, motivation, and time, and deployed at their initiative through technical and social networks, either individually or in loose voluntary associations.


CLAY SHIRKY
Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP)

Jaron Lanier is certainly right to look at the downsides of collective action. It's not a revolution if nobody loses, and in this case, expertise and iconoclasm are both relegated by some forms of group activity. However, "Digital Maoism" mischaracterizes the present situation in two ways. The first is that the target of the piece, the hive mind, is just a catchphrase, used by people who don't understand how things like Wikipedia really work. As a result, criticism of the hive mind becomes similarly vague. Second, the initial premise of the piece — there are downsides to collective production of intellectual work — gets spread so widely that it comes to cover RSS aggregators, American Idol, and the editorial judgment of the NY Times. These are errors of overgeneralization; it would be good to have a conversation about Wikipedia's methods and governance, say, but that conversation can't happen without talking about its actual workings, nor can it happen if it is casually lumped together with other, dissimilar kinds of group action.

The bigger of those two mistakes appears early: "The problem I am concerned with here is not the Wikipedia in itself. It's been criticized quite a lot, especially in the last year, but the Wikipedia is just one experiment that still has room to change and grow. [...] No, the problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly." Curiously, the ability of the real Wikipedia to adapt to new challenges is taken at face value. The criticism is then directed instead at people proclaiming Wikipedia as an avatar of a golden era of collective consciousness. Let us stipulate that people who use terms like hive mind to discuss Wikipedia and other social software are credulous at best, and that their pronouncements tend towards caricature. What "Digital Maoism" misses is that Wikipedia doesn't work the way those people say it does.

Neither proponents nor detractors of hive mind rhetoric have much interesting to say about Wikipedia itself, because both groups ignore the details. As Fernanda Viegas's work shows, Wikipedia isn't an experiment in anonymous collectivist creation; it is a specific form of production, with its own bureaucratic logic and processes for maintaining editorial control. Indeed, though the public discussions of Wikipedia often focus on the 'everyone can edit' notion, the truth of the matter is that a small group of participants design and enforce editorial policy through mechanisms like the Talk pages, lock protection, article inclusion voting, mailing lists, and so on. Furthermore, proposed edits are highly dependent on individual reputation — anonymous additions or alterations are subjected to a higher degree of both scrutiny and control, while the reputation of known contributors is publicly discussed on the Talk pages.

Wikipedia is best viewed as an engaged community that uses a large and growing number of regulatory mechanisms to manage a huge set of proposed edits. "Digital Maoism" specifically rejects that point of view, setting up a false contrast with open source projects like Linux, when in fact the motivations of contributors are much the same. With both systems, there are a huge number of casual contributors and a small number of dedicated maintainers, and in both systems part of the motivation comes from appreciation of knowledgeable peers rather than the general public. Contra Lanier, individual motivations in Wikipedia are not only alive and well; the project would collapse without them.

"The Digital Maoism" argument is further muddied by the other systems dragged in for collectivist criticism. There's the inclusion of American Idol, in which a popularity contest is faulted for privileging popularity. Well, yes, it would, wouldn't it, but the negative effects here don't come from some new form of collectivity, they come from voting, a tool of fairly ancient provenance. Decrying Idol's centrality is similarly misdirected. This season's final episode was viewed by roughly a fifth of the country. By way of contrast, the final episode of M*A*S*H was watched by three fifths of the country. The centrality of TV, and indeed of any particular medium, has been in decline for three decades. If the pernicious new collectivism is relying on growing media concentration, we're safe.

Popurls.com is similarly and oddly added to the argument, but there is in fact no meta-collectivity algorithm at work here — Popurls is just an aggregation of RSS feeds. You might as well go after my.yahoo if that's the kind of thing that winds you up. And the ranking systems that are aggregated all display different content, suggesting real subtleties in the interplay of algorithm and audience, rather than a homogenizing hive mind at work. You wouldn't know it, though, to read the broad-brush criticism of Popurls here. And that is the missed opportunity of "Digital Maoism": there are things wrong with RSS aggregators, ranking algorithms, group editing tools, and voting, things we should identify and try to fix. But the things wrong with voting aren't wrong with editing tools, and the things wrong with ranking algorithms aren't wrong with aggregators. To take the specific case of Wikipedia, the Seigenthaler/Kennedy debacle catalyzed both soul-searching and new controls to address the problems exposed, and the controls included, inter alia, a greater focus on individual responsibility, the very factor "Digital Maoism" denies is at work.

The changes we are discussing here are fundamental. The personal computer produced an incredible increase in the creative autonomy of the individual. The internet has made group forming ridiculously easy. Since social life involves a tension between individual freedom and group participation, the changes wrought by computers and networks are therefore in tension. To have a discussion about the plusses and minuses of various forms of group action, though, is going to require discussing the current tools and services as they exist, rather than discussing their caricatures or simply wishing that they would disappear.


CORY DOCTOROW
Science fiction novelist, Blogger, Technology activist; Co-Editor, Boing Boing (boingboing.net)

Where Jaron Lanier sees centralization, I see decentralization. Wikipedia is notable for lots of reasons, but the most interesting one is that Wikipedia — a genuinely useful information resource of great depth and breadth — was created in almost no time, for almost no cost, by people who had no access to the traditional canon.

We're bad futurists, we humans. We're bad at predicting what will be important and useful tomorrow. We think the telephone will be best used to bring opera to America's living rooms. We set out nobly to make TV into an educational medium. We create functional hypertext to facilitate the sharing of draft physics papers.

If you need to convince a gatekeeper that your contribution is worthy before you're allowed to make it, you'd better hope the gatekeeper has superhuman prescience. (Gatekeepers don't have superhuman prescience.) Historically, the best way to keep the important things rolling off the lines is to reduce the barriers to entry. Important things are a fraction of all things, and therefore, the more things you have, the more important things you'll have.

The worst judges of tomorrow's important things are today's incumbents. If you're about to creatively destroy some incumbent's business-model, that incumbent will be able to tell you all kinds of reasons why you should cut it out. Travel agents had lots of soothing platitudes about why Expedia would never fly. Remember travel agents? Wonder how that worked out for them.

The travel agents were right, of course. Trying to change your own plane tickets stinks. But Internet travel succeeds by being good at the stuff that travel agents sucked at, not good at the stuff that made travel agents great. Internet travel is great because it's cheap and always-on, because you can reclaim the "agency" (ahem) of plotting your route and seeing the timetables and because you can comparison shop in a way that was never possible before.

Wikipedia isn't great because it's like the Britannica. The Britannica is great at being authoritative, edited, expensive, and monolithic. Wikipedia is great at being free, brawling, universal, and instantaneous.

Making a million-entry encyclopedia out of photons, philosophy and peer-pressure would have been impossible before the Internet's "collectivism." Wikipedia is a noble experiment in defining a protocol for organizing the individual efforts of disparate authors with conflicting agendas. Even better, it has a meta-framework — its GNU copyright license — that allows anyone else to take all that stuff and use part or all of Wikipedia to seed different approaches to the problem.

Wikipedia's voice is by no means bland, either. If you content yourself with the actual Wikipedia entries, they can be a little papery, sure. But that's like reading a mailing-list by examining nothing but the headers. Wikipedia entries are nothing but the emergent effect of all the angry thrashing going on below the surface.

No, if you want to really navigate the truth via Wikipedia, you have to dig into those "history" and "discuss" pages hanging off of every entry. That's where the real action is, the tidily organized palimpsest of the flamewar that lurks beneath any definition of "truth."

The Britannica tells you what dead white men agreed upon, Wikipedia tells you what live Internet users are fighting over.

The Britannica truth is an illusion, anyway. There's more than one approach to any issue, and being able to see multiple versions of them, organized with argument and counter-argument, will do a better job of equipping you to figure out which truth suits you best.

True, reading Wikipedia is a media literacy exercise. You need to acquire new skill-sets to parse out the palimpsest. That's what makes it genuinely novel. Reading Wikipedia like Britannica stinks. Reading Wikipedia like Wikipedia is mind-opening.

Free software like Ubuntu Linux and Firefox can have beautiful UIs (despite Lanier's claims) and the authors who made those UIs and their codebase surely put in that work for the egoboo and credit. But you'll never know who designed your favorite UI widget unless you learn to read the Firefox palimpsest: the source-tree.

Wikipedia doesn't supplant individual voices like those on blogs. Wikipedia contributors are often prolific bloggers, wont to talk about their work on Wikipedia in LiveJournals and Typepads and Wordpresses. Wikipedia is additive — it creates an additional resource out of the labor of those passionate users.

So Wikipedia gets it wrong. Britannica gets it wrong, too. The important thing about systems isn't how they work, it's how they fail. Fixing a Wikipedia article is simple. Participating in the brawl takes more effort, but then, that's the price you pay for truth, and it's still cheaper than starting up your own Britannica.


KEVIN KELLY
Editor-At-Large, Wired; Editor & Publisher, Cool Tools website; Author, Out of Control

The Wikipedia is all that it claims to be: a free encyclopedia created by its readers, that is, by anyone on the internet. That feat would be wonderful enough, but its origin is so peculiar, and its existence so handy, the obvious follow-up question has become, is it anything else? Is the Wikipedia a template for other kinds of information, or maybe even other kinds of creative works? Is the way the Wikipedia is authored a guide to the way many new things might be created? Is it something we should aim towards? Is it a proxy of what is coming in the coming century?

That's a heavy mythic load to put on something only a few years old, but it seems to have stuck. For better or worse, the Wikipedia now represents smart chaos, or bottom up power, or decentralized being, or out of control goodness, or what I seem to have called, for lack of a better term, the hive mind. It is not the only hive mind out there. We see the web itself, and other collective entities, such as fandoms, voting audiences, link aggregators, consensus filters, open source communities, and so on, all basking in a rising tide of loosely connected communal action.

But it doesn't take very long to discover that none of these innovations is pure hive mind, and that the supposed paragon of adhocracy — the Wikipedia — is itself far from strictly bottom-up. In fact a close inspection of Wikipedia's process reveals that it has an elite at its center (and that it does have a center is news to most), and that there is far more deliberate design management going on than first appears.

This is why Wikipedia has worked in such a short time. The main drawback to pure unadulterated Darwinism is that it takes place in biological time — eons. The top-down design part woven deep within by Jimmy Wales and associates has allowed the Wikipedia to be smarter than pure dumb evolution would allow in a few years. It is important to remember how dumb the bottom is in essence. In biological natural selection, the prime architect is death. What's dumber than that? One binary bit.

We are too much in a hurry to wait around for a pure hive mind. Our technological systems are marked by the fact that we have introduced intelligent design into them. This is the top-down control we insert to speed and direct a system toward our goals. Every technological system, including Wikipedia, has design in it. What's new is only this: never before have we been able to make systems with as much "hive" in them as we have recently made with the Web. Until this era, technology was primarily all control, all design. Now it can be design and hive. In fact, this Web 2.0 business is chiefly the first step in exploring all the ways in which we can combine design and the hive in innumerable permutations. We are tweaking the dial in hundreds of combos: dumb writers, smart filters; smart writers, dumb filters, ad infinitum.

But if the hive mind is so dumb, why bother with it at all?

Because as dumb as it is, it is smart enough. More importantly, its brute dumbness produces the raw material that design smarts can work on. If we only listened to the hive mind, that would be stupid. But if we ignore the hive mind altogether, that is even stupider.

There's a bottom to the bottom. I hope we realize that a massive bottom-up effort will only take us part way — at least in human time. That's why it should be no surprise to anyone that over time more and more design, more and more control, more and more structure will be layered into the Wikipedia. I would guess that in 50 years a significant portion of Wikipedia articles will have controlled edits, peer review, verification locks, authentication certificates, and so on. That's all good for us readers. The fast moving frontiers will probably be as open and wild as they are now. That's also great for us.

Furthermore, I know it is heresy, but it might be that the Wikipedia model is not good for very much more than writing universal encyclopedias. Perhaps the article length is fortuitously exactly the right length for the smart mob, and maybe a book is exactly the wrong length. However, while the 2006 Wikipedia process may not be the best way to make a textbook, or create the encyclopedia of all species, or dispense the news, the 2056 Wikipedia process, with far more design in it, may be.

It may be equally heretical (but not to this group) to suggest that the hive mind will write far more of our textbooks, and databases and news than anyone might believe right now.

Here's how I sum it up:

The bottom-up hive mind will always take us much further than seems possible. It keeps surprising us. In this regard, the Wikipedia truly is exhibit A, impure as it is, because it is something that is impossible in theory, and only possible in practice. It proves the dumb thing is smarter than we think. At the same time, the bottom-up hive mind will never take us to our end goal. We are too impatient. So we add design and top-down control to get where we want to go.

Judged from where we start, harnessing the dumb power of the hive mind will take us much further than we can dream. Judged from where we end up, the hive mind is not enough; we need top-down design.

Since we are only at the start of the start, it's the hive mind all the way for now.

Long live the Wikipedia!


ESTHER DYSON
Editor at Large, CNET Networks; Editor, Release 1.0; Director, PC Forum; Author, Release 2.0

I'll just be short, since I'm too busy reading hive-mind output:

I think the real argument is between voting or aggregating — where anonymous people raise or lower things in esteem by the weight of sheer numbers — vs. arguments by recognizable individuals who answer the arguments of other individuals... The first is useful in coming up with numbers and trends and leading movements, but it's not creative in the way that evolution, for example, creates species. Evolution isn't blind voting. It works by using a grammar (of genetic materials and unfolding proteins; some biologist will correct me here for sure) to make changes that are consistent with the whole - i.e. adding two new limbs at a time, or adding muscle to support added mass. Arguments may win or lose and a consensus argument or belief may arise, but it is structured, and emerges more finely shaped than what mere voting or "collectivism" would have produced.

That's why we have representative government — in theory at least. Certain people — designated "experts" — sit together to design something that is supposed to be coherent. (That's the vision, anyway.) You can easily vote both for lower taxes and more services, but you can't design a consistent system that will deliver that.

So, to get the best results, we have people sharpening their ideas against one another rather than simply editing someone's contribution and replacing it with another. We also have a world where the contributors have identities (real or fake, but consistent and persistent) and are accountable for their words. Much like Edge, in fact.


LARRY SANGER
Co-founder, Wikipedia; Director of Collaborative Projects, Digital Universe Foundation; Director, Text Outline Project

What exactly is Jaron Lanier's thesis? His main theme is that a certain kind of collectivism is in the ascendancy, and that's a terrible thing. He decries the view that "the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force."

I find myself agreeing with Lanier: the collectivism he describes is a terrible thing, by golly, and far too many people we admire seem to be caught up in it. But in agreeing, I find myself in a couple of paradoxes. First, surely, no one would admit to believing that the "collective is all-wise." So hasn't Lanier set up a straw man? Second, I myself am an advocate of what I call "strong collaboration," exemplified by Wikipedia, in which a work is developed not just by multiple authors, but by a constantly changing battery of authors, none of whom "owns" the work. So am I not myself committed, if anyone is, to believing "the collective is all-wise"?

To understand Lanier's thesis, and where I agree with it — and why it isn't a straw man — it helps to consider certain attitudes one pretty commonly finds in the likes of Wikipedia, Slashdot, and the Blogosphere generally. Let me describe something close to home. In late 2004 I publicly criticized Wikipedia for failing to respect expertise properly, to which a surprisingly large number of people replied that, essentially, Wikipedia's success has shown that "experts" are no longer needed, that a wide-ranging description of everyone's opinions is more valuable than what some narrow-minded "expert" thinks.

Slashdot's post-ranking system is another perfect example. Slashdotters simply would not stand for a system in which some hand-selected group of editors chose or promoted posts; but if the result is decided by an impersonal algorithm, then it's okay. It isn't that the Slashdotters have a rational belief that the cream will rise to the top, under the system; people use the system just because it seems fairer or more equal to them.

It's not quite right to say the "collectivists" believe that the collective is all-wise. Rather, they don't really care about getting it right as much as they care about equality.

You might notice that Lanier never bothered to refute, in his essay, the view that the collective is all-wise. That's because this view is obviously wrong. Truth and high quality generally are obviously not guaranteed by sheer numbers. But then the champions of collective opinion-making and aggregation surely don't think they are. So isn't Lanier just knocking down a straw man? I don't think so. As I take it, the substance of his point is that many people now treat the aggregate views expressed by the collective as actually more valuable, in some sense, than anything produced by people designated as "experts" or "authorities."

Think about that a bit. Ultimately, I think there is a deep epistemological issue at work here. Epistemologists have a term, positive epistemic status, for the positive features that can attach to beliefs; so truth, knowledge, justification, evidence, and various other terms are all names for various kinds of positive epistemic status.

So I think we are discovering that there is a lively movement afoot that rejects the traditional kinds of positive epistemic status, and wants to replace them with, or explain them in terms of, whatever it is that the collective (i.e., a large group of people, of which one is a part) believes or endorses. We can give this view a name, for convenience: epistemic collectivism.

Epistemic collectivism is a real phenomenon; whether they admit it or not, a lot of people do place the views of the collective uppermost. People are epistemic collectivists in just the same way, and for just the same reasons, that they are abject conformists. Surely epistemic collectivism has its roots in the easy sophomoric embrace of relativism. If there is no objective truth, as so many of my old college students seemed to believe, then there is no way to make sense of the idea of expertise or of intellectual authority. Without a reality "out there," independent of us, that we can be right or wrong about, there is no way to justify placing some "experts" above the rest of us in terms of the reliability of their claims. If you're an epistemic collectivist, then it's natural to think that the experts can be overruled by the rest of us.

Now to the second paradox I mentioned earlier. How can I agree with Lanier and still promote strong collaboration? How can I both reject epistemic collectivism and yet say that Wikipedia is a great project, which I do? Well, the problem is that epistemic collectivists like Wikipedia but for the wrong reasons. What's great about it is not that it produces an averaged view, an averaged view that is somehow better than an authoritative statement by people who actually know the subject. That's just not it at all. What's great about Wikipedia is the fact that it is a way to organize enormous amounts of labor for a single intellectual purpose. The virtue of strong collaboration, as demonstrated by projects like Wikipedia, is that it represents a new kind of "industrial revolution," where what is reorganized is not techne but instead mental effort. It's the sheer efficiency of strongly collaborative systems that is so great, not their ability to produce The Truth. Just how to eke The Truth out of such a strongly collaborative system is an unsolved, and largely unaddressed, problem.

So online collaboration in some people's minds can be indistinguishable from a new collectivism, and Lanier is right both to say so and to condemn the fact. But this collectivism is inherent neither in tools, such as wikis, nor in methods, such as collaboration and aggregation.


FERNANDA VIEGAS & MARTIN WATTENBERG
Visual Communication Lab, IBM Research

The hive mind ain't what it used to be

Jaron Lanier raises important points about collectivism, yet the barbs thrown at Wikipedia seem misplaced. There's no doubt that online aggregators such as Digg, Reddit, and popurls can seem faceless to the point of being soulless. However, the irony of his critique is that Wikipedia is very much the opposite of these aggregator sites. Instead of algorithmically aggregating content, Wikipedia depends on writers settling their differences on an individual level. Nothing is created or posted automatically — and it shows.

Consider Lanier's praise for seeing "the context in which something was written" coupled with his condemnation of the "anti-contextual brew of the Wikipedia." Yet context is one of the great strengths of Wikipedia. Here's a magic trick for you: Go to a long or controversial Wikipedia page (say, "Jaron Lanier"). Click on the tab marked "discussion" at the top. Abracadabra: context!

This rich context, attached to many Wikipedia articles, is known as a "talk page." The talk page is where the writers for an article hash out their differences, plan future edits, and come to agreement about tricky rhetorical points. This kind of debate doubtless happens in the New York Times and Britannica as well, but behind the scenes. Wikipedia readers can see it all, and understand how choices were made.

This visible process can illuminate the intellectual liveliness of topics that may seem like dry fact to the casual reader. Take the talk page for "denotational semantics." In a textbook this recondite computer science concept may sound set in stone, but it comes to life when you read a sharp argument between an MIT professor and other experts over exactly what should be in the article. Moreover, these debates are full of personality and individual voice. Wikipedia etiquette dictates that most people sign their contributions to talk pages. Read the discussion pages for "Feminism" or "Chess," and you'll see a cacophony of individual voices. The hive mind ain't what it used to be.

Lanier brings up the specter of Maoism, but let's take a look at the authorial crowd in action on the "Jaron Lanier" talk page. Here is what we see: someone has pointed to Lanier's article and suggested removing the incorrect reference to filmmaking. Someone else agrees and says they'll be watching the article. A second piece of dialogue on the page ends with a signed post saying, "We should use his [Lanier's] own words when possible, especially as he objects to a lot of the article." This is not exactly a Maoist mob.

These efforts can also be seen through another arena of context: Wikipedia's visible, trackable edit history. The reverts that erased Lanier's own edits show this process in action. Clicking on the "history" tab of the article shows that a reader — identified only by an anonymous IP address — inserted a series of increasingly frustrated complaints into the body of the article. Although the remarks did include statements like "This is Jaron — really," another reader evidently decided the anonymous editor was more likely to be a vandal than the real Jaron. While Wikipedia failed this Jaron Lanier Turing test, it was seemingly set up for failure: would he expect the editors of Britannica to take corrections from a random hotmail.com email address? What he didn't provide, ironically, was the context and identity that Wikipedia thrives on. A meaningful user name, or simply comments on the talk page, might have saved his edits from the axe.
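(A side note for readers who want to examine such an edit trail themselves: the revision history visible under the "history" tab can also be pulled programmatically. Here is a minimal sketch in Python against the public MediaWiki API as it exists today; the article title and the ten-revision limit are arbitrary choices for illustration.)

    # Minimal sketch: list the recent revisions of a Wikipedia article,
    # showing who made each edit (user name, or IP address for anonymous
    # editors), when, and with what edit summary.
    import json
    import urllib.parse
    import urllib.request

    params = {
        "action": "query",
        "prop": "revisions",
        "titles": "Jaron Lanier",
        "rvlimit": "10",                      # last ten edits
        "rvprop": "timestamp|user|comment",   # who, when, and why
        "format": "json",
    }
    url = "https://en.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "history-sketch/0.1"})

    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)

    page = next(iter(data["query"]["pages"].values()))
    for rev in page.get("revisions", []):
        print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))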

Perhaps the most interesting part of this story is that Wikipedia already had guidelines to cover this situation. The "Wikipedia: Biographies of living persons" page specifically warns that seeming vandalism of the biography of a living person might be by the subject him or herself, inexperienced with Wikipedia editing procedure. While this guideline didn't prevent the unfortunate reverts to the Jaron Lanier article, it was sufficiently well-known to be used by one of the discussants on the talk page to justify keeping Lanier's own language. The emergence of this type of shared policy is one of the fascinating developments in Wikipedia — and yet another way that Wikipedia can be distinguished from an automatic algorithm or mindless crowd.

The truth is, it can be hard to find a crowd on Wikipedia, let alone a "hive mind." Lanier decries an amorphous, anonymous collective that edits Wikipedia pages. As with the Jaron Lanier article, it is usually a small group of editors who steadily work on a given page over time. We often know who these people are (they may sign their posts on talk pages or have personal user pages). Generally speaking, the few pages on Wikipedia that are edited by a truly amorphous crowd either cover current events or are featured on the front page of Wikipedia (front page articles are highly visible). Even then, crowd editing is usually transient and, once it plummets — either because the article gets off the front page or because the event stops being featured in the news — the core group of editors typically takes over the page maintenance again.

In short, it is hard to claim that Wikipedia is built by an anonymous, mindless mob engaged in foolish collectivism. As long as critiques of Wikipedia's processes stop at the article level, they will continue to miss the point. The persistent, searchable archives provided by Wiki technology allow individual voices to survive even as consensus is reached. At the same time, there is certainly a collective will — one that may make mistakes, but also attempts to keep itself in check through emergent policies, guidelines, and elements of bureaucracy. It is this publicly available context and meta-structure that truly distinguish Wikipedia from algorithmic or market-based aggregation. It's by no means certain how stable this system is, or what it will look like in 10 years. But the fact that these processes emerge is a testament to the power of transparency and persistence — and it will be interesting to see what happens next.


JIMMY WALES
Founder and Chair of the Board of Trustees of the Wikimedia Foundation, a non-profit corporation that operates Wikipedia; Founder of the for-profit company Wikia, Inc.

"A core belief of the wiki world is that whatever problems exist in the wiki will be incrementally corrected as the process unfolds."

My response is quite simple: this alleged "core belief" is not one which is held by me, nor as far as I know, by any important or prominent Wikipedians. Nor do we have any particular faith in collectives or collectivism as a mode of writing. Authoring at Wikipedia, as everywhere, is done by individuals exercising the judgment of their own minds.

"The best guiding principle is to always cherish individuals first."

Indeed.


GEORGE DYSON
Science Historian; Author, Project Orion

This delightful and much-needed essay is the product of a brilliant individual mind at work.

However, Lanier's high-level insights are themselves the result of exactly those collective, haphazard, and noisy processes that are under criticism here. Deep within Jaron Lanier's brain, layer upon layer of anonymous neurons have cycled collectively through meta-meta-meta levels of information processing to produce the thinking he presents so coherently in words. Underlying everything from music to vision are social networks where popularity and having the right connections win. When Lanier was in his infancy, processes similar to PageRank, AdSense, and AdWords, running (and competing) amok among billions of neurons and trillions of synapses, allowed the language, symbols, and meaning embodied in his surrounding human culture to take root. When it comes to natural intelligence, Wikipedia, not Britannica, wrote the book.
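
For readers who have never looked under the hood of the ranking algorithms Dyson invokes, here is a toy power-iteration sketch of the PageRank idea in Python; the graph, damping factor, and iteration count are invented for illustration, and real search ranking is of course far more elaborate.

    # Toy PageRank by power iteration: rank flows along links, so a node scores
    # highly when highly ranked nodes point to it. Illustrative only.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each node to the list of nodes it links to."""
        nodes = list(links)
        n = len(nodes)
        rank = {node: 1.0 / n for node in nodes}
        for _ in range(iterations):
            new_rank = {node: (1.0 - damping) / n for node in nodes}
            for node, outgoing in links.items():
                if not outgoing:
                    # A dangling node spreads its rank evenly over all nodes.
                    for other in nodes:
                        new_rank[other] += damping * rank[node] / n
                else:
                    for target in outgoing:
                        new_rank[target] += damping * rank[node] / len(outgoing)
            rank = new_rank
        return rank

    if __name__ == "__main__":
        toy_graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        for node, score in sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]):
            print(node, round(score, 3))

That "popularity plus the right connections" arithmetic is the mechanical kernel of the processes Dyson maps, metaphorically, onto neurons and synapses.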

All intelligence is collective. But, as Lanier points out, that does not mean that all collectives are intelligent.

The important part of his message is a warning to respect, and preserve, our own intelligence. The dangers of relinquishing individual intelligence are real.

Lanier does not want to debate the existence or non-existence of metaphysical entities. But his argument that online collectivism produces artificial stupidity offers no reassurance to me. Real artificial intelligence (if and when) will be unfathomable to us. At our level, it may appear as dumb as American Idol, or as pointless as a nervous twitch that corrects and uncorrects Jaron Lanier's Wikipedia entry in an endless loop.


DAN GILLMOR
Founder & Director, Center for Citizen Media; Former columnist, San Jose Mercury News; Author, We the Media: Grassroots Journalism by the People, for the People

The collected thoughts from people responding to Jaron Lanier's essay are not a hive mind, but they've done a better job of dissecting his provocative essay than any one of us could have done. Which is precisely the point. Let me contribute a thought or two as well.

Does Lanier truly not see the historical absurdity of equating Wikipedia and other such phenomena with Maoism and collectivism? Even the most cursory examination of the Communist predations of the 20th century makes the absurdity clear. A tendentious title and analogy undermine the many interesting facts he's assembled.

The better analogy is the old-fashioned barn-raising, where people contribute their labor for a specific purpose. It takes more than a hive to raise the barn. (I'd say it takes a village, but that's been turned into a political cliché.) People with a range of skills, from seasoned expert to pure novice, come together to solve a problem. Leaders emerge to steer the process, and a barn happens.

It's not about an all-wise hive mind. It's not about a collective. It's about community.

It's also about persistence — and celebrating the reality that knowledge is not a static end-point but rather an ongoing process. New facts and nuances emerge after articles are published. One of Wikipedia's best characteristics is its recognition that we can liberate ourselves from the publication or broadcast metaphors of the age of literally manufactured media, when the paper product or tape for broadcasting was the end of the process. My mantra as a journalist was a simple one: my readers know more than I do. We may (should I use this word?) collectively fail to get it right; in fact, humans almost never get anything entirely right, but we get closer the more new data and nuance we assemble. If Steven Spielberg and other Hollywood folks can create directors' cuts of their movies, why can't journalists and other creators, amateur and professional, keep updating and improving some of their own works?

Pointing out the flaws in Wikipedia seems to be a new participatory sport. Let me join for a minute; the entry about me is both incorrect in small ways and grossly out of date. I've honored the site's request that people who are the subjects of articles not fix them, but I'm definitely annoyed.

Then again, no article about me or my work in a traditional media outlet has ever been precisely correct. Factual errors, mostly minor, are common. Ditto out-of-context quotes. Yet those articles are now there — in print and even in databases, never to be updated, because the manufacturing model doesn't permit such things.

The flaws in Wikipedia and other kinds of media are real. (Disclosure: Jimmy Wales is a friend; he is on my advisory board; and I'm an investor in his for-profit company.) But the ways it shows us how to improve, along with the chance to watch how the community (not a collective) operates around individual articles and the project as a whole, are lessons in themselves.

The debate does demonstrate how much we need to update our media literacy in a digital, distributed era. Our internal BS meters already work, but in the Big Media world they have fallen into sad disuse. Many people tend to believe what they read. Others tend to disbelieve everything. Too few apply appropriate skepticism and do the additional work that true media literacy requires.

We need better tools to help us, as a community, gauge the reliability and authenticity of what we find online (or in print or on the air, etc.). Popularity is only one measure. Reputation has to become part of the mix in systems that combine human and machine intelligence in novel ways.

What's most essential, though, is to remember how early we are in this process. Wikipedia isn't the ultimate authority. It is, however, a remarkable achievement. And it's getting better. I look forward to seeing how it proceeds.


HOWARD RHEINGOLD
Communications Expert; Author, Smart Mobs

I agree that new notions about collective intelligence and peer production should be viewed critically and not embraced in a spirit of magical thinking — but I find it strange that someone as educated as Jaron should fall into the same simple fallacy the Cato Institute fell for: collective action is not the same as collectivism. Commons-based peer production in Wikipedia, open source software, and prediction markets is collective action, not collectivism. Collective action involves freely chosen self-election (which is almost always coincident with self-interest) and distributed coordination; collectivism involves coercion and centralized control; treating the Internet as a commons doesn't mean it is communist (tell that to Bezos, Yang, Filo, Brin or Page, to name just a few billionaires who managed to scrape together private property from the Internet commons).


Return to "Digital Maoism: The Hazards of the New Online Collectivism" By Jaron Lanier


feature

On "Gödel in a Nutshell " By Verena Huber-Dyson

STEPHEN BUDIANSKY
Correspondent for The Atlantic Monthly and the author of seven highly acclaimed books about history, science, and nature.

Verena Huber-Dyson, I think, misses some important considerations in her extrapolations from Gödel's incompleteness theorem to human mental types. She suggests that there are three types: those who are authoritarian-minded, who demand completeness and skip over inconsistencies; those who are scientific, who panic to the point of going mad in the face of inconsistency; and then the mass of unimaginative mankind, who are blithely unaware of either incompleteness or inconsistency.

But this is not at all my experience, certainly not when it comes to scientists; and furthermore I would venture to say that this typology rather fuzzes up what Gödel's theorem really does imply.

I would take as a starting point the interesting fact that philosophers and pure mathematicians are vastly more impressed by the implications of Gödel's theorem than are scientists or applied mathematicians. And the reason for this is not hard to find. Although it is in theory an exciting (and potentially very disturbing) discovery that no formal system rich enough to express ordinary arithmetic can be both complete and consistent, the practical consequences of this in everyday life — even everyday scientific life — are virtually nil. One simply does not encounter irreducible, show-stopping inconsistencies in language or logic or science. The examples that one can come up with of paradoxes that appear to be true, or that cannot be resolved, are uniformly contrived, extreme, artificial, and by their very nature unlike anything one routinely encounters — in fact, they were invariably dreamed up with the express purpose of being paradoxical (such as the famous "liar's paradoxes" of the "the statement on the other side of this paper is false" variety). Even Gödel's proof itself, for all of its brilliance, has the air of a parlor trick about it — it's a bootstrap pulling up a bootstrap, a sort of Rube Goldberg contraption devised solely for the purpose of proving that something (however useless a formulation that something may be in itself) may be simultaneously true and unprovable.
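
For readers who want the claim pinned down, a standard modern statement of the first incompleteness theorem runs roughly as follows (a sketch in conventional notation, not Huber-Dyson's or Budiansky's own wording):

    % Sketch statement of Gödel's first incompleteness theorem (standard notation).
    Let $F$ be a consistent, effectively axiomatized formal system strong enough
    to express elementary arithmetic. Then there is an arithmetical sentence $G_F$
    (informally, ``$G_F$ is not provable in $F$'') such that
    \[
      F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F ,
    \]
    so $F$ is incomplete. (By the second theorem, such an $F$ also cannot prove
    its own consistency.)

The restriction to systems rich enough to encode arithmetic is what confines the result to a rarefied corner of logic, which is precisely Budiansky's practical point.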

Wittgenstein for one thought there was something highly "suspicious" about the way logical contradictions could be made to arise. But more to the point, as far as I can tell, there isn't a scientist in the world who goes around holding his head and moaning about the possibility that two contradictory things must be simultaneously true, or fretting that life, the universe, and everything must therefore be meaningless. Rather, they find that they can apply rigorous logic just fine, thank you very much, and use it to come up with solid and useful conclusions, and there are plenty of interesting things worth studying without even worrying about the extreme possibilities. As Matt Ridley mentions in his fascinating new biography of Francis Crick, Crick thought philosophers ridiculous precisely on these grounds: that they invariably focused on unrealistic situations and ignored empirical data. And, as the philosopher Robert Fogelin argues in his book "Walking the Tightrope of Reason," in fact scientists have long engaged in a sort of intellectual triage in which they happily ignore, and — more important — are supremely untroubled by, far-fetched possibilities, and simply get on with the job.

If what Huber-Dyson means is that scientists are troubled by apparent inconsistencies, she is right. But such inconsistencies are not "real"; they are not inconsistencies of the kind predicted by Gödel's theorem: they are rather simply indications that some fact is missing, some further explanation yet undeveloped; scientists see that as a spur to action (not a cause of "panic," as Huber-Dyson suggests), for they are confident (no panic necessary) that the apparent contradiction will vanish once they have dug deeply enough into the matter. And again, in the practical world, they are right, for as far as I know no one has yet run into the kind of hard and baffling limits on knowability or consistency that Gödel's theorem implies may exist somewhere out there.

Huber-Dyson suggests that those with an authoritarian mindset want answers to everything and skip over inconsistency, but I think it is playing a bit loose with semantics to use "inconsistency" here in the same way it is meant as an implication of Gödel's theorem. What such types are really skipping over is logic itself: they ignore evidence that contradicts their chosen biases; they come up with "self-sealing" arguments and are masters of special pleading. It is not that they have chosen completeness over consistency in a "Gödelian" sense; it is that they are ignoring empirical evidence that challenges their conclusions. That is something entirely different (it is what Robert Fogelin categorizes as "pigheadedness").

I would suggest, in summary, that there is a tendency to try to milk far too much out of Gödel's theorem when it comes to philosophical explanations. Much the same can be said of other scientific "big ideas" that are the favorites of those who indulge in what used to be called cocktail-party physics but now might be better called new-age physics. (I think there are now even self-help and business advice books that purport to extract from Heisenberg's uncertainty principle some lessons about managing one's love life or business career.) The temptation is hard to resist, admittedly, even for people who ought to know better. The best illustration of this was the fate that befell one faculty member at Princeton who got the big idea that maybe there was some profound connection between Gödel's incompleteness theorem and Heisenberg's uncertainty principle. Both, after all, seemed to suggest inherent limitations in the universe on what it is possible for human beings to know. "Well, one day I was at the Institute of Advanced Study, and I went to Gödel's office, and there was Gödel," the professor recalled. "I said, 'Professor Gödel, what connection do you see between your incompleteness theorem and Heisenberg's uncertainty principle?' And Gödel got angry and threw me out of his office."


news


Jaron Lanier on the stupidity of the hive mind
By Jack Schofield
[5.31.06]

Jaron Lanier, who more or less invented virtual reality in the 1980s (making me a lifelong Lanier fan), has published a fascinating Edge essay on Digital Maoism: The Hazards of the New Online Collectivism.

The opening gambit is: "The hive mind is for the most part stupid and boring. Why pay attention to it?" What he is pointing to is the collective output exemplified by Wikipedia and the like, meta-sources of information such as Google, and meta-meta-meta sources such as (in increasing order of meta-ness) Boing Boing, Digg, and Popurls.

It's not hard to see why the fallacy of collectivism has become so popular in big organizations: If the principle is correct, then individuals should not be required to take on risks or responsibilities. We live in times of tremendous uncertainties coupled with infinite liability phobia, and we must function within institutions that are loyal to no executive, much less to any lower level member. Every individual who is afraid to say the wrong thing within his or her organization is safer when hiding behind a wiki or some other Meta aggregation ritual.

I've participated in a number of elite, well-paid wikis and Meta-surveys lately and have had a chance to observe the results. I have even been part of a wiki about wikis. What I've seen is a loss of insight and subtlety, a disregard for the nuances of considered opinions, and an increased tendency to enshrine the official or normative beliefs of an organization. Why isn't everyone screaming about the recent epidemic of inappropriate uses of the collective? It seems to me the reason is that bad old ideas look confusingly fresh when they are packaged as technology.

Why do we do it? As Lanier points out later:

It's safer to be the aggregator of the collective. You get to include all sorts of material without committing to anything. You can be superficially interesting without having to worry about the possibility of being wrong.

COMMENT: Edge is based on the idea of accumulating the knowledge of a very small number of the world's smartest people -- more or less the opposite of Google or Wikipedia.

 



Ideas: Intelligent Defense
By Jerry Adler
[5.29.06]

Why, of all the assertions of modern science, does evolution by natural selection attract the most dissent? As the philosopher Daniel Dennett points out, Darwin's theory is no more implausible than the claim by quantum mechanics that an electron can appear to be in two places at once, yet physicists don't have to endlessly explain and justify their theories to a skeptical public. Dennett's answer is that natural selection, "by executing God's traditional task of designing and creating all creatures great and small, also seems to deny one of the best reasons we have for believing in God's existence." Which should leave no one in doubt about the source of the attack on Darwinism in the guise of intelligent design: it comes from religion.

The intelligent-design movement suffered a political setback last December when a federal judge ordered a Pennsylvania school district to stop talking about it in high school, but it lives on as an idea, to the bemusement and occasional frustration of most serious scientists. Sixteen of them, including Dennett, contributed essays in defense of evolution to a small anthology called "Intelligent Thought," published last week. It was compiled by John Brockman, better known as the editor of the Web site edge.org, the thinking man's Drudge Report. Evolutionary biologist Richard Dawkins deconstructs the claim by ID proponents that the "designer" could be an intelligent alien rather than God, and psychologist Steven Pinker shows how moral sensibility can arise by way of natural selection. "Evolutionary biology certainly hasn't explained everything that perplexes biologists," Dennett concludes, "but Intelligent Design hasn't yet tried to explain anything at all."

