GIN, TELEVISION, AND COGNITIVE SURPLUS [8.21.08]
A Talk by Clay Shirky

Introduction
By John Brockman

Reporting on the recent Edge Master Class 08 in Sonoma, George Dyson wrote:

Retreating to the luxury of Sonoma to discuss economic theory in mid-2008 conveys images of fiddling while Rome burns. Do the architects of Microsoft, Amazon, Google, PayPal, and Facebook have anything to teach the behavioral economists—and anything to learn? So what? What's new? As it turns out, all kinds of things are new.

"All kinds of things are new", and something very big is in the air. According to Sean Parker, the cofounder of Napster, Plaxo, and Facebook (as well as Facebook's founding president) who was present in Sonoma. "If you're not on Facebook, you don't exist".

Social software has arrived, and if you don't pay attention and take on board the developments at Google, Twitter, Facebook, Wikipedia, etc., you are opting out of being a serious player in the realm of 21st-century ideas.

One of the more interesting contributions to the 2008 Edge World Question Center event was by Tim O'Reilly, the always-innovative guru, entrepreneur, and publisher/evangelist of the Web 2.0 social software revolution. In his piece (below), O'Reilly writes about his initial skepticism regarding Clay Shirky's 2002 vision of "social software". These comments are an informative preamble to a recent talk in which Shirky coins the phrase "cognitive surplus".

According to Shirky:

Starting after the second world war, a whole host of factors, like rising GDP, rising educational attainment, and rising life-span, forced the industrialized world to grapple with something new: free time. Lots and lots of free time. The amount of unstructured time among the educated population ballooned, accounting for billions of hours a year. And what did we do with that time? Mostly, we watched TV.

Society never really knows what to do with any surplus at first. (That's what makes it a surplus.) In this case, we had to find something to do with the sudden spike in surplus hours. The sitcom was our gin, a ready-made response to the crisis of free time. TV has become a half-time job for most citizens of the industrialized world, at an average of 20 hours a week, every week, for decades.

Now, though, for the first time in its history, young people are watching less TV than their elders, and the cause of the decline is competition for their free time from media that allow for active and social participation, not just passive and individual consumption.

The value in media is no longer in sources but in flows; when we pool our cognitive surplus, it creates value that doesn't exist when we operate in isolation. The displacement of TV watching is coming among people who are using more of their time to make things and do things, sometimes alone and sometimes together, and to share those things with others.

When Shirky first made this assertion at a tech conference, he was astonished to see the video of the speech rocket around the web faster and more broadly than anything else he had ever said or done.

Shirky believes that "we can take advantage of our cognitive surplus, but only if we start regarding pure consumption as an anomaly, and broad participation as the norm. This is not a dispassionate argument, because the stakes are so high. We don't get to decide whether we want a new society. The changes we are undergoing can't be rolled back, nor contained within the present institutional frameworks. What we might get to decide is how we want this change to turn out."

"To call the current opportunity 'once in a lifetime'", he continues, "understates its enormity; the change in the social landscape is altering institutions that have been stable for generations, and making possible new kinds of human engagement that have never existed before. The results could be a marvel, or a catastrophe, depending on how seriously we try to shape what's possible."

If you want new and original thinking, look no further.

Edge is pleased to present the video and transcript of Shirky's talk below with the hope that an ensuing Reality Club discussion will further sharpen the argument.

JB

CLAY SHIRKY is an adjunct professor in NYU's graduate Interactive Telecommunications Program (ITP), where he teaches courses on the interrelated effects of social and technological network topology—how our networks shape culture and vice versa. He is the author of Here Comes Everybody.

Clay Shirky's Edge Bio page

THE REALITY CLUB: Nicholas Carr, Chris Anderson, James O'Donnell


TIM O'REILLY
Founder and CEO of O'Reilly Media, Inc.

I was skeptical of the term "social software"....

In November 2002, Clay Shirky organized a "social software summit," based on the premise that we were entering a "golden age of social software... greatly extending the ability of groups to self-organize."

I was skeptical of the term "social software" at the time. The explicit social software of the day, applications like Friendster and Meetup, were interesting, but didn't seem likely to be the seed of the next big Silicon Valley revolution.

I preferred to focus instead on the related ideas that I eventually formulated as "Web 2.0," namely that the internet is displacing Microsoft Windows as the dominant software development platform, and that the competitive edge on that platform comes from aggregating the collective intelligence of everyone who uses it. The common thread linking Google's PageRank, eBay's marketplace, Amazon's user reviews, Wikipedia's user-generated encyclopedia, and Craigslist's self-service classified advertising seemed too broad a phenomenon to be successfully captured by the term "social software." (This is also my complaint about the term "user-generated content.") By framing the phenomenon too narrowly, you exclude the exemplars that help us understand its true nature. I was looking for a bigger metaphor, one that would tie together everything from open source software to the rise of web applications.

You wouldn't think to describe Google as social software, yet Google's search results are profoundly shaped by its collective interactions with its users: every time someone makes a link on the web, Google follows that link to find the new site. It weights the value of the link based on a kind of implicit social graph (a link from site A is more authoritative than one from site B, based in part on the size and quality of the network that in turn references either A or B). When someone makes a search, they also benefit from the data Google has mined from the choices millions of other people have made when following links provided as the result of previous searches.
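To make that mechanism concrete, here is a toy power-iteration sketch in Python. The site names, damping factor, and iteration count are invented for illustration; this is the textbook link-voting idea, not Google's actual system.

```python
# A toy sketch of link-voting: each page splits its "authority" across
# the pages it links to, and scores settle after repeated passes.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for target in targets:
                # a link is a vote, weighted by the rank of the voting page
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Illustrative "implicit social graph": a link from a well-referenced
# site is worth more than a link from an obscure one.
toy_web = {"siteA": ["siteB"], "siteB": ["siteA"], "siteC": ["siteA", "siteB"]}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```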

You wouldn't describe eBay or Craigslist or Wikipedia as social software either, yet each of them is the product of a passionate community, without which none of those sites would exist, and from which they draw their strength, like Antaeus touching mother earth. Photo-sharing site Flickr and bookmark-sharing site del.icio.us (both now owned by Yahoo!) also exploit the power of an internet community to build a collective work that is more valuable than anything an individual contributor could provide. But again, the social aspect is implicit — harnessed and applied, but never the featured act.

Now, five years after Clay's social software summit, Facebook, an application that explicitly explores the notion of the social network, has captured the imagination of those looking for the next internet frontier. I find myself ruefully remembering my skeptical comments to Clay after the summit, and wondering if he's saying "I told you so."

Mark Zuckerberg, Facebook's young founder and CEO, woke up the industry when he began speaking of "the social graph" — that's computer-science-speak for the mathematical structure that maps the relationships between people participating in Facebook — as the core of his platform. There is real power in thinking of today's leading internet applications explicitly as social software.

Mark's insight that the opportunity is not just about building a "social networking site" but rather building a platform based on the social graph itself provides a lens through which to re-think countless other applications. Products like xobni ("inbox" spelled backwards) and MarkLogic's MarkMail explore the social graph hidden in our email communications; Google and Yahoo! have both announced projects around this same idea. Google also acquired Jaiku, a pioneer in building a social-graph-enabled address book for the phone.

This is not to say that the idea of the social graph as the next big thing invalidates the other insights I was working with. Instead, it clarifies and expands them:

- Massive collections of data and the software that manipulates those collections, not software alone, are the heart of the next generation of applications.

- The social graph is only one instance of a class of data structure that will prove increasingly important as we build applications powered by data at internet scale. You can think of the mapping of people, businesses, and events to places as the "location graph", or the relationship of search queries to results and advertisements as the "question-answer graph" (see the sketch after this list).

- The graph exists outside of any particular application; multiple applications may explore and expose parts of it, gradually building a model of relationships that exist in the real world.

- As these various data graphs become the indispensable foundation of the next generation "internet operating system," we face one of two outcomes: either the data will be shared by interoperable applications, or the company that first gets to a critical mass of useful data will become the supplier to other applications, and ultimately the master of that domain.
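As a minimal sketch of the "graph as a class of data structure" point, the node names and relation labels below are invented for illustration; the same edge-list shape can hold a social graph, a location graph, or a question-answer graph.

```python
# One generic structure, three of O'Reilly's graphs: only the node and
# relation types change.

from collections import defaultdict

class Graph:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node), ...]

    def add(self, source, relation, target):
        self.edges[source].append((relation, target))

    def neighbors(self, node, relation):
        return [t for r, t in self.edges[node] if r == relation]

g = Graph()
g.add("alice", "friend_of", "bob")                        # social graph
g.add("cafe_x", "located_in", "sonoma")                   # location graph
g.add("what is web 2.0?", "answered_by", "oreilly.com")   # question-answer graph
print(g.neighbors("alice", "friend_of"))                  # ['bob']
```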

So have I really changed my mind? As you can see, I'm incorporating "social software" into my own ongoing explanations of the future of computer applications.

It's curious to look back at the notes from that first Social Software summit. Many core insights are there, but the details are all wrong. Many of the projects and companies mentioned have disappeared, while the ideas have moved beyond that small group of 30 or so people, and in the process have become clearer and more focused, imperceptibly shifting from what we thought then to what we think now.

Both Clay, who thought then that "social software" was a meaningful metaphor, and I, who found it less useful then than I do today, have changed our minds. A concept is a frame, an organizing principle, a tool that helps us see. It seems to me that we all change our minds every day through the accretion of new facts, new ideas, new circumstances. We constantly retell the story of the past as seen through the lens of the present, and only sometimes are the changes profound enough to require a complete repudiation of what went before.

Ideas themselves are perhaps the ultimate social software, evolving via the conversations we have with each other, the artifacts we create, and the stories we tell to explain them.

Yes, if facts change our minds, that's science. But when ideas change our minds, we see those facts afresh, and that's history, culture, science, and philosophy all in one.


TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., one of the leading computer book publishers in the world. O'Reilly Media also hosts conferences on technology topics, including the Web 2.0 Summit, the Web 2.0 Expo, the O'Reilly Open Source Convention, and the O'Reilly Emerging Technology Conference. O'Reilly's blog, the O'Reilly Radar, "watches the alpha geeks".

Tim O'Reilly's Edge Bio page


GIN, TELEVISION, AND COGNITIVE SURPLUS
A Talk By Clay Shirky


I was recently reminded of some reading I did in college, way back in the last century, by a British historian arguing that the critical technology, for the early phase of the industrial revolution, was gin.

The transformation from rural to urban life was so sudden, and so wrenching, that the only thing society could do to manage was to drink itself into a stupor for a generation. The stories from that era are amazing—there were gin pushcarts working their way through the streets of London.

And it wasn't until society woke up from that collective bender that we actually started to get the institutional structures that we associate with the industrial revolution today. Things like public libraries and museums, increasingly broad education for children, elected leaders—a lot of things we like—didn't happen until having all of those people together stopped seeming like a crisis and started seeming like an asset.

It wasn't until people started thinking of this as a vast civic surplus, one they could design for rather than just dissipate, that we started to get what we think of now as an industrial society.

If I had to pick the critical technology for the 20th century, the bit of social lubricant without which the wheels would've come off the whole enterprise, I'd say it was the sitcom. Starting with the Second World War, a whole series of things happened—rising GDP per capita, rising educational attainment, rising life expectancy and, critically, a rising number of people who were working five-day work weeks. For the first time, society forced onto an enormous number of its citizens the requirement to manage something they had never had to manage before—free time.

And what did we do with that free time? Well, mostly we spent it watching TV.

We did that for decades. We watched I Love Lucy. We watched Gilligan's Island. We watch Malcolm in the Middle. We watch Desperate Housewives. Desperate Housewives essentially functioned as a kind of cognitive heat sink, dissipating thinking that might otherwise have built up and caused society to overheat.

And it's only now, as we're waking up from that collective bender, that we're starting to see the cognitive surplus as an asset rather than as a crisis. We're seeing things being designed to take advantage of that surplus, to deploy it in ways more engaging than just having a TV in everybody's basement.

This hit me in a conversation I had about two months ago. I've finished a book called Here Comes Everybody, which has recently come out, and this recognition came out of a conversation I had about the book. I was being interviewed by a TV producer to see whether I should be on their show, and she asked me, "What are you seeing out there that's interesting?"

I started telling her about the Wikipedia article on Pluto. You may remember that Pluto got kicked out of the planet club a couple of years ago, so all of a sudden there was all of this activity on Wikipedia. The talk pages light up, people are editing the article like mad, and the whole community is in a ruckus—"How should we characterize this change in Pluto's status?" And a little bit at a time they move the article—fighting offstage all the while—from "Pluto is the ninth planet" to "Pluto is an odd-shaped rock with an odd-shaped orbit at the edge of the solar system."

So I tell her all this stuff, and I think, "Okay, we're going to have a conversation about authority or social construction or whatever." That wasn't her question. She heard this story and she shook her head and said, "Where do people find the time?" That was her question. And I just kind of snapped. And I said, "No one who works in TV gets to ask that question. You know where the time comes from. It comes from the cognitive surplus you've been masking for 50 years."

So how big is that surplus? If you take Wikipedia as a kind of unit, all of Wikipedia, the whole project—every page, every edit, every line of code, in every language Wikipedia exists in—that represents something like the cumulation of 98 million hours of human thought. I worked this out with Martin Wattenberg at IBM; it's a back-of-the-envelope calculation, but it's the right order of magnitude, about 98 million hours of thought.

And television watching? Two hundred billion hours, in the U.S. alone, every year. Put another way, now that we have a unit, that's 2,000 Wikipedia projects a year spent watching television. Or put still another way, in the U.S., we spend 98 million hours every weekend, just watching the ads. This is a pretty big surplus. People asking, "Where do they find the time?" when they're looking at things like Wikipedia don't understand how tiny that entire project is, as a carve-out of the cognitive surplus that's finally being dragged into what Tim O'Reilly calls an architecture of participation.
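Spelled out, with these same round numbers (back-of-the-envelope throughout, so order of magnitude only):

$$
\frac{2 \times 10^{11}\ \text{hours of U.S. TV per year}}{9.8 \times 10^{7}\ \text{hours per Wikipedia}} \;\approx\; 2{,}040 \;\approx\; 2{,}000\ \text{Wikipedia projects per year}
$$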

Now, the interesting thing about a surplus like that is that society doesn't know what to do with it at first—hence the gin, hence the sitcoms. Because if people knew what to do with a surplus with reference to the existing social institutions, it wouldn't be a surplus, would it? It's precisely when no one has any idea how to deploy something that people have to start experimenting with it, in order for the surplus to get integrated, and the course of that integration can transform society.

The early phase for taking advantage of this cognitive surplus, the phase I think we're still in, is all special cases. The physics of participation is much more like the physics of weather than it is like the physics of gravity. We know all the forces that combine to make these kinds of things work: there's an interesting community over here, there's an interesting sharing model over there, those people are collaborating on open source software. But despite knowing the inputs, we can't predict the outputs yet because there's so much complexity.

The way you explore complex ecosystems is you just try lots and lots and lots of things, and you hope that everybody who fails fails informatively so that you can at least find a skull on a pikestaff near where you're going. That's the phase we're in now.

Just to pick one example, one I'm in love with, but it's tiny. A couple of weeks ago, one of my students at ITP forwarded me a project started by a professor in Brazil, in Fortaleza, named Vasco Furtado. It's a Wiki Map for crime in Brazil. If there's an assault, if there's a burglary, if there's a mugging, a robbery, a rape, a murder, you can go and put a push-pin on a Google Map, and you can characterize the assault, and you start to see a map of where these crimes are occurring.

Now, this already exists as tacit information. Anybody who knows a town has some sense of, "Don't go there. That street corner is dangerous. Don't go in this neighborhood. Be careful there after dark." But it's something society knows without society really knowing it, which is to say there's no public source where you can take advantage of it. And the cops, if they have that information, they're certainly not sharing. In fact, one of the things Furtado says in starting the Wiki crime map was, "This information may or may not exist some place in society, but it's actually easier for me to try to rebuild it from scratch than to try and get it from the authorities who might have it now."
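A sketch of the mechanism being described, with made-up coordinates and a plain list standing in for the real project's Google Maps layer: each report is a categorized push-pin, and the "map" is just the reports bucketed by location.

```python
# Crowd-sourced crime pins: anyone can report; clusters emerge from the data.

from collections import Counter

reports = []  # the shared, user-contributed data

def report_crime(lat, lon, category):
    """Anyone can drop a pin: a rough coordinate plus a crime category."""
    reports.append((round(lat, 2), round(lon, 2), category))

def hotspots():
    """Count reports per rounded coordinate to see where crimes cluster."""
    return Counter((lat, lon) for lat, lon, _ in reports)

report_crime(-3.732, -38.521, "mugging")   # coordinates are illustrative
report_crime(-3.731, -38.524, "burglary")
report_crime(-3.751, -38.490, "assault")
print(hotspots().most_common())
```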

Maybe this will succeed or maybe it will fail. The normal case of social software is still failure; most of these experiments don't pan out. But the ones that do are quite incredible, and I hope that this one succeeds, obviously. But even if it doesn't, it's illustrated the point already, which is that someone working alone, with really cheap tools, has a reasonable hope of carving out enough of the cognitive surplus, enough of the desire to participate, enough of the collective goodwill of the citizens, to create a resource you couldn't have imagined existing even five years ago.

So that's the answer to the question, "Where do they find the time?" Or, rather, that's the numerical answer. But beneath that question was another thought, this one not a question but an observation. In this same conversation with the TV producer I was talking about World of Warcraft guilds, and as I was talking, I could sort of see what she was thinking: "Losers. Grown men sitting in their basement pretending to be elves."

At least they're doing something.

Did you ever see that episode of Gilligan's Island where they almost get off the island and then Gilligan messes up and then they don't? I saw that one. I saw that one a lot when I was growing up. And every half-hour that I watched that was a half an hour I wasn't posting at my blog or editing Wikipedia or contributing to a mailing list. Now I had an ironclad excuse for not doing those things, which is that none of those things existed then. I was forced into the channel of media the way it was because it was the only option. Now it's not, and that's the big surprise. However lousy it is to sit in your basement and pretend to be an elf, I can tell you from personal experience it's worse to sit in your basement and try to figure out whether Ginger or Mary Ann is cuter.

And I'm willing to raise that to a general principle. It's better to do something than to do nothing. Even lolcats, even cute pictures of kittens made even cuter with the addition of cute captions, hold out an invitation to participation. When you see a lolcat, one of the things it says to the viewer is, "If you have some sans-serif fonts on your computer, you can play this game, too." And that message—I can do that, too—is a big change.

This is something that people in the media world don't understand. Media in the 20th century was run as a single race—consumption. How much can we produce? How much can you consume? Can we produce more and you'll consume more? And the answer to that question has generally been yes. But media is actually a triathlon, it's three different events. People like to consume, but they also like to produce, and they like to share.

And what's astonished people who were committed to the structure of the previous society, prior to trying to take this surplus and do something interesting, is that they're discovering that when you offer people the opportunity to produce and to share, they'll take you up on that offer. It doesn't mean that we'll never sit around mindlessly watching Scrubs on the couch. It just means we'll do it less.

And this is the other thing about the size of the cognitive surplus we're talking about. It's so large that even a small change could have huge ramifications. Let's say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That's about five times the size of the annual U.S. consumption. One percent of that is roughly 100 Wikipedia projects per year worth of participation.
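Checking those figures against the same 98-million-hour unit (order-of-magnitude arithmetic, as before):

$$
\frac{10^{12}\ \text{hours}}{2 \times 10^{11}\ \text{hours}} = 5, \qquad \frac{10^{12} \times 0.01}{9.8 \times 10^{7}} \;\approx\; 102 \;\approx\; 100\ \text{Wikipedias per year}
$$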

I think that's going to be a big deal. Don't you?

Well, the TV producer did not think this was going to be a big deal; she was not digging this line of thought. And her final question to me was essentially, "Isn't this all just a fad?" You know, sort of the flagpole-sitting of the early 21st century? It's fun to go out and produce and share a little bit, but then people are going to eventually realize, "This isn't as good as doing what I was doing before," and settle down. And I made a spirited argument that no, this wasn't the case, that this was in fact a big one-time shift, more analogous to the industrial revolution than to flagpole-sitting.

I was arguing that this isn't the sort of thing society grows out of. It's the sort of thing that society grows into. But I'm not sure she believed me, in part because she didn't want to believe me, but also in part because I didn't have the right story yet. And now I do.

I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, "Looking for the mouse."

Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for. Those are things that make me believe that this is a one-way change. Because four-year-olds, the people who are soaking most deeply in the current environment, who won't have to go through the trauma that I have to go through of trying to unlearn a childhood spent watching Gilligan's Island, just assume that media includes consuming, producing, and sharing.

It's also become my motto, when people ask me what we're doing—and when I say "we" I mean the larger society trying to figure out how to deploy this cognitive surplus, but I also mean we, especially, the people in this room, the people who are working hammer and tongs at figuring out the next good idea. From now on, that's what I'm going to tell them: We're looking for the mouse.

We're going to look at every place that a reader or a listener or a viewer or a user has been locked out, has been served up a passive or fixed or canned experience, and ask ourselves, "If we carve out a little bit of the cognitive surplus and deploy it here, could we make a good thing happen?" And I'm betting the answer is yes.


ON "GIN, TELEVISION, AND COGNITIVE SURPLUS" By Clay Shirky

Nicholas Carr, Chris Anderson, James O'Donnell

John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2008 by Edge Foundation, Inc. All Rights Reserved.
