Edge 251
July 21, 2008
(7,170 words)


By Nathan Myhrvold



Imagine telling Ansel to wait for new algorithms so his pictures could improve. It's a very different world today.





The previous messages about my Iceland/Greenland trip were about conventional pictures. (See below). This feature contains some panoramic shots that are created by stitching together multiple frames into one picture. These were mostly taken during my recent trip to Iceland and Greenland. Click on the images to enlarge.


A very frequent question I got after the previous emails is "what equipment do you use?" or "you must have a great camera." Whenever I hear that I smile and think: yes, I do have great equipment, but there is a bit more to it than that. A small amount of the "more to it" is me; the rest is a complex mixture of software and hardware. If all you care about is looking at the pictures, you can skip the rest of the text! But if you are interested in the technology behind the picture making, read on.

For most pictures I use a Canon EOS-1Ds Mark III camera. It has 21 megapixels, so it is pretty serious, but it's not the biggest out there. Digital backs for medium format cameras have up to 45 megapixels. They are also about 4X more expensive for 2X the number of pixels, and even that understates the total difference. They are much more cumbersome—both heavier (not that my camera is light!) and bulkier. There are fewer lenses available. The battery life isn't great, and neither is the ISO range. These cameras are really made to be used in a studio, where you can give them plenty of light, tether them to a computer, use an AC power supply and so forth. Under ideal circumstances the quality is definitely higher than my camera's, but those ideal cases are rare. Net-net the tradeoffs argue for the Canon, at least for me.

In addition to having lots of pixels, each pixel is big in terms of bits. On my camera each pixel has 42 bits (14 bits each for Red, Green, and Blue), whereas most low-end digital cameras have 24 bits per pixel (8 bits per color). The extra bits give much smoother gradations of color and reduce the digital "noise" that can occur in shadow areas, especially at high ISO (when there is not much light).

Noise in digital photography is a bit like grain in film—but with some differences. Digital noise comes from errors in the camera's reading of the image sensor. Each pixel on the sensor is a tiny bucket—in the case of my camera they are about 5 microns across (5 millionths of a meter). Each bucket catches photons (particles of light) and turns them into electrons, which are then counted to tell the camera how much light hit that pixel. Noise happens because there are errors in both the collecting and the counting process. Bright parts of the picture have high numerical values—in a standard 24 bit/pixel image, each component R, G, B has 8 bits, so a pure white pixel is 255, 255, 255 and pure black is 0, 0, 0. Noise is like a small number added to or subtracted from each pixel. At low ISO the variation is small, say +-2. At high ISO it is more like +-10 or more.

Noise tends to be a constant amount rather than a percentage, so you can see how it will affect the dark regions of the photo more. +-10 is a big deal in a dark area where the numbers are 10, 10, 10; percentage-wise it matters much less when added to 200, 200, 200 than to 10, 10, 10. This is also why it is better to have 14 bits per component than 8—in the 14-bit case white is 16383, 16383, 16383, and the same +-10 of noise is far less noticeable. These differences become even more pronounced when you adjust the tones in Photoshop.
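The arithmetic behind this is easy to sketch. The noise level and pixel values below are illustrative, matching the +-10 figure above rather than any measured data:

```python
# Why constant noise hurts shadows more, and why more bits per channel
# help. The +-10 noise level and the pixel values are illustrative.

def relative_noise(signal, noise=10):
    """Noise as a fraction of the signal level at this pixel."""
    return noise / signal

# 8 bits per channel: white is 255, a deep shadow might sit around 10.
print(round(relative_noise(255), 3))  # about 0.039 -- a 4% error, subtle
print(round(relative_noise(10), 3))   # 1.0 -- a 100% error, very visible

# 14 bits per channel: white is 16383; the same shadow maps to about 640.
print(round(relative_noise(16383), 5))
print(round(relative_noise(640), 3))
```

The same +-10 that swamps an 8-bit shadow is a rounding error at 14 bits.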

The camera is matched with a set of Canon lenses. Most of the pictures in Iceland and Greenland were taken with a 24-70mm f/2.8 zoom lens or a 300mm f/2.8 telephoto, although I had other lenses down to 16mm and up to 600mm. The lenses are the most expensive part of the equipment—many times more in aggregate than the camera bodies. In the digital era, the lenses are also the permanent part. Three years from now I will almost certainly have a new model of digital camera with lots of new features (just as I will likely be typing on a new computer by then); however, I will almost certainly be using the same lenses.

These days software is just as important as hardware—indeed more so, if you ask me. The raw images from the camera must be converted from 42 bits/pixel to 48 bits/pixel using "raw conversion" software. I use either Adobe Camera Raw (bundled with Photoshop) or, when the image is really important or difficult, DxO, a very sophisticated raw converter from France. Raw conversion is to modern digital photography what developing was in the analog era. The reason it exists is that mathematical algorithms for signal processing need to be run over the raw sensor data. The camera does this itself, but the amount of processing power and memory in the camera is limited. A PC has a processor that is vastly more powerful than the chips in the camera, and that extra power can be spent running more sophisticated algorithms to improve the picture quality. Good raw conversion can take five minutes per picture on a high-end multiprocessor PC—far more time than you'd tolerate inside the camera.
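To give a flavor of what raw conversion involves, here is a toy sketch of one of its steps, demosaicing. Each sensor pixel actually records only one color through a filter pattern (commonly RGGB Bayer), and software reconstructs full RGB. Real converters use far cleverer interpolation than this nearest-neighbor version, which is purely illustrative:

```python
# Toy demosaic of an RGGB Bayer mosaic: each sensor site holds one color;
# we borrow the nearest site of each missing color. Real raw converters
# use far more sophisticated interpolation.

def demosaic_nearest(raw, width, height):
    """raw: flat list of sensor values laid out in an RGGB Bayer pattern.
    Returns a list of (r, g, b) tuples, one per pixel, row by row."""
    def at(x, y):
        # Clamp to the sensor edge.
        x = min(max(x, 0), width - 1)
        y = min(max(y, 0), height - 1)
        return raw[y * width + x]

    out = []
    for y in range(height):
        for x in range(width):
            rx, ry = x - (x % 2), y - (y % 2)            # nearest red site
            bx = x if x % 2 else min(x + 1, width - 1)   # nearest blue column
            by = y if y % 2 else min(y + 1, height - 1)  # nearest blue row
            r, b = at(rx, ry), at(bx, by)
            if (x + y) % 2 == 1:                         # this is a green site
                g = at(x, y)
            else:                                        # borrow a green neighbor
                g = at(x + 1, y) if x + 1 < width else at(x - 1, y)
            out.append((r, g, b))
    return out

# A 2x2 sensor patch: R=100, G=50 / G=60, B=20 becomes four full-color pixels.
print(demosaic_nearest([100, 50, 60, 20], 2, 2))
```

Run over millions of pixels, with noise filtering and color science layered on top, and you can see where those five minutes per picture go.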

After raw conversion comes adjustment in Photoshop. Lots of people have asked me "do you use Photoshop?" and when I say that I do, there is often a comment like "oh, that explains why the pictures look good"—as if it were easy to use Photoshop to make any picture look good. Just try it!

Every serious digital photographer uses Photoshop routinely, but typically only for small tweaks. The first Photoshop task is dust spot removal. Remember those electrons sitting in tiny buckets on the sensor? Well, they give the sensor a charge—static electricity. So for the same reason that static electricity can attract lint and dust to your clothes, it can attract dust to the sensor, where it makes black spots on the picture. This particularly occurs when you change lenses. Indeed, in dusty places like Africa I try never to change lenses while out in the field.

Dust is such a problem that my current camera has an automatic sensor cleaning feature—the entire sensor is shaken at ultrasonic frequencies to throw the dust off. This happens every time you change lenses. It isn't perfect, but it does help. I usually shoot a few frames of plain sky every day or so, so I can look for dust spots. If the automatic method fails, the best way to remove them is an amazing brush from Canada called the Arctic Butterfly. Just prior to use, you press a button and a motor spins the brush through the air at high speed. This gives it a static electric charge which then out-competes the sensor's cling. But you don't want to do this too often—if you accidentally scratch the sensor, you've just destroyed the most expensive part of a $9,000 camera. It's also a part you can't easily replace in a small fishing village in Greenland. So I try to be very careful about sensor cleaning.


When all else fails and dust spots are on your picture, a Photoshop feature called the "healing brush" gets rid of them. You center a circular cursor over the dust spot, click, and it just disappears. After dust spotting comes color adjustment. Human eyes have an amazing system for automatically adjusting to both the brightness and the color of a scene. That is why ordinary incandescent light bulbs work—the light is actually quite orange if you measure it, yet our eyes are happy to adapt and see it as white. Not so with camera sensors, which tend to see the world quite objectively. Every digital picture must be adjusted for "white balance"—the definition of what is supposed to look white. Digital cameras have some automatic ways to do this, but they are not perfect, so it almost always needs a little tweak.

Dynamic range is the term for the difference between the lightest part of the picture and the darkest. A typical outdoor scene can have a ratio of light to dark of 4000:1—the bright parts are 4000X brighter than the dark parts—but scenes can easily go higher than that, up to 1,000,000:1 if you have a dark shadow in one corner and the sun itself in another. Human eyes have an automatic system that compensates for this, allowing us to see both the bright highlights and the dark shadows simultaneously. Not so with cameras. Analog color film typically has a dynamic range of 32:1, or 5 f-stops (one f-stop difference is a factor of 2 in illumination). Digital cameras are a little better, at about 6 to 7 f-stops, or 64:1 to 128:1, maybe a little more. A lot of the problem of picture taking is how to stuff a 4000:1 dynamic range into the much smaller range of the sensor. This is why photos taken in bright sun can have either pure white, washed-out bright areas (if you expose for the shadows) or good highlights but inky black shadows (if you expose in the other direction). There is no way to get both the bright and dark parts to look like they do to your eyes.
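The f-stop figures are just base-2 logarithms of the contrast ratios, which is easy to check:

```python
# Converting contrast ratios to f-stops: each stop is a factor of 2 in light.
import math

def stops(contrast_ratio):
    """Dynamic range in f-stops for a given light-to-dark ratio."""
    return math.log2(contrast_ratio)

print(stops(32))                # color film: 5.0 stops
print(stops(128))               # a good digital sensor: 7.0 stops
print(round(stops(4000), 1))    # a typical outdoor scene: ~12 stops
print(round(stops(1_000_000)))  # deep shadow to direct sun: ~20 stops
```

So a sunny scene can exceed the sensor's range by five stops or more, which is the gap everything below is trying to bridge.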

Just to make life difficult, natural scenes can also have very flat illumination, with ratios of light to dark much lower than the film or sensor can record. In that case the task is to breathe life into flat, dull lighting. So sometimes you are trying to stuff 12 f-stops of range into a 6-stop sensor, while for other pictures you're boosting a dull 3-stop scene into a larger range. The fundamental reason for all of this is that our eyes do such a good job of automatic adjustment, and cameras don't. One day they will, but I'm taking pictures now.

After taking the photo, we then make a print, or view the image on a computer screen. Those output media also have a dynamic range—and it is even more limited. A typical matte print has a dynamic range of 50:1. A glossy print can have up to 256:1, but only at some angles; otherwise reflections overwhelm the dark parts. The low dynamic range of a photo print is pretty obvious—otherwise you'd get sunburned by a photo of the sun! The amazing thing about photography is that despite these limitations, you can create a psychological impression of a bright desert sun even though the camera and the output media have a vastly smaller optical dynamic range than the original. Photographers create compelling images by matching our intuitive notion of what scenes ought to look like—bridging the gap between what our visual system remembers and what the far more objective sensor and output media record.

There are many photographer's tricks for coping with this. One is to choose your lighting to avoid the problem, but this only goes so far. Another is to use flash or reflectors to add light to the dark parts or subtract it from the bright parts. That is what a studio photographer does. The primary tool for dealing with dynamic range, though, is careful adjustment of the contrast and brightness of the image. Ansel Adams was a pioneer of using careful development of analog film and good print making to do this—he called it the "zone system". I used the zone system as a kid—it was very complicated, but it let you decide, at the time you took the picture, which parts you'd make light and dark. These days one achieves similar results with a Photoshop feature called Curves, which directly controls the mapping between what the sensor sees and the final picture. Curves operates globally on the picture, which is nice and simple, and usually works very well. About 90% of my pictures are done at this stage—dust spotting, color adjustment and Curves are all they need.
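What a Curves adjustment does can be sketched as a piecewise-linear remapping of tone values. The control points below are invented for illustration—a gentle S-curve that deepens shadows and lifts highlights to add contrast:

```python
# A toy Curves adjustment: map every tone value through a piecewise-linear
# curve defined by (input, output) control points.

def apply_curve(value, points):
    """Map a 0-255 value through control points sorted by input."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)      # position within the segment
            return round(y0 + t * (y1 - y0))  # linear interpolation
    return value

# An illustrative S-curve: shadows pushed down, highlights lifted.
s_curve = [(0, 0), (64, 48), (192, 208), (255, 255)]
print(apply_curve(64, s_curve))    # 64 -> 48: darker shadows
print(apply_curve(192, s_curve))   # 192 -> 208: brighter highlights
print(apply_curve(128, s_curve))   # 128 -> 128: midtones untouched
```

Photoshop applies a smooth spline rather than straight segments, but the principle—one global input-to-output mapping for every pixel—is the same.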

Ansel would be thrilled with this—he was a photo geek who was totally into the technology of his day—but he would almost certainly do more. The next stage after Curves is to make local adjustments. In the old analog darkroom this was called burning and dodging. You would stand in the dark under the enlarger and give extra light to some parts of the picture (making them darker), and block light from other areas. It took some patience to learn, but master printmakers could do amazing things. Today Photoshop has a dozen ways to do the equivalent of burning and dodging, with the great advantage that you don't do it in the dark—and, even better, you always have an Undo command handy. I rarely dodge and burn my photos, but sometimes it works very well.

These are the basics of imaging software, but there are lots of details. Hundreds of small software companies make packages that assist with various tasks. Downloader Pro, from the UK, helps me download my pictures from memory cards. Focus Magic, from New Zealand, helps with sharpening detail, as does FocalBlade from Germany. Neat Image, from Russia, helps filter noise. Fred Miranda, a male model in Brazil (I am not making this up!), writes Photoshop actions that he sells over the Internet for automating common tasks. One of the many benefits of the Internet is that one can discover and purchase software from all over the world. Some of this software is stand-alone; some is plug-ins for Photoshop. Only rarely is it open source or freeware; the majority is for-profit software from small companies or individuals. Sometimes it is a hobby business or sideline, but a lot of these people make a living writing and selling imaging software over the Internet. One can argue whether it takes a village to raise a child, as Hillary famously said. It definitely takes software from a whole world full of geeks to make a state-of-the-art photo.

All of the above is standard for normal images, but there is a set of much more exotic things that can be done using multiple pictures. The basic idea here is very simple—overcome the limitations of a single photo by taking multiple pictures and somehow combining the best of them. This works well, but it gets complicated in a hurry.

One way to solve the dynamic range problem is to take multiple pictures at different exposures. So, if you have 12 f-stops of dynamic range, put your camera on a tripod and take several pictures (as few as 3, but I usually do 7) at different exposure settings. Some of these will have the shadow detail, but the highlights will be washed-out white. Other shots will have the highlights properly exposed, but the shadows will be pure black. You then use software to combine the multiple images into a High Dynamic Range (HDR) image. HDR images have 32 bits per component, or 96 bits per pixel, so they are very large. Each component is a floating point number, so it can store an enormous dynamic range—more than exists on Earth. I have been experimenting with HDR for years, but more recently it came into the mainstream because Photoshop started to support it—prior to that you had to write your own software. These days it is far more common.
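The core of the merge step can be sketched in a few lines. This sketch assumes linear sensor values and uses a simple hat-shaped weight so that well-exposed mid-tones count most, in the spirit of the classic Debevec-Malik approach; real HDR software also has to calibrate the camera's response curve, which is omitted here:

```python
# Toy HDR merge for one scene point across a bracket of exposures.
# Assumes linear sensor response; weights mid-tones most and ignores
# clipped values, then averages the per-shot radiance estimates.

def merge_hdr(pixels_and_times):
    """pixels_and_times: list of (pixel value 0-255, exposure seconds)
    for the same scene point. Returns a floating-point radiance estimate,
    which is why HDR images need floating-point components."""
    num = den = 0.0
    for value, t in pixels_and_times:
        weight = min(value, 255 - value)  # hat weight: trust mid-tones
        if weight <= 0:
            continue                      # clipped to black or white
        num += weight * (value / t)       # radiance ~ value per unit time
        den += weight
    return num / den if den else 0.0

# Three brackets, two stops apart; the 1/30 s shot is blown out to 255
# and gets ignored automatically.
print(merge_hdr([(255, 1/30), (128, 1/125), (32, 1/500)]))  # ~16000
```

Both usable shots agree on the radiance here; with noisy real data the weighted average is what smooths out their disagreement.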

There are two challenges with HDR. The first is taking the pictures: since you are taking multiple shots of each scene, nothing can change, so this is not a technique for moving subjects. The second is how to go from the HDR representation to a print—a process called tone mapping. The human eye has a very sophisticated algorithm for tone mapping—unfortunately we don't have anything quite that good yet, so it is very easy to use tone mapping to create a picture that looks weird and unnatural. I don't use HDR very much, and when I do, I try to be subtle about it—but there are a couple of images included here that use it, one of them not quite as subtle as I would like. I'll let you guess which photos enclosed here are HDR. Eventually better algorithms will let me re-make these HDR photos as better versions than the ones that exist here.
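For a concrete sense of what a (very) simple tone mapper does, here is the basic global Reinhard operator, x/(1+x), which squeezes any radiance, however large, into displayable range. This operator is my illustrative choice, not necessarily what any particular product uses, and real tone mappers add local adaptation on top:

```python
# Global Reinhard tone mapping: compress unbounded HDR radiance into a
# 0-255 display value. Values near "white" pass through nearly linearly;
# values far above it are compressed instead of clipping.

def reinhard(radiance, white=255.0):
    """Map [0, infinity) radiance to a 0-255 display value."""
    x = radiance / white
    return round(255 * x / (1.0 + x))

print(reinhard(0))        # black stays black
print(reinhard(100))      # mid radiance stays in the midtones: 72
print(reinhard(10000))    # 40x over "white" still fits on screen: 249
```

Note what it cannot do: everything bright gets squeezed toward the top, so a naive global operator flattens highlight contrast—one reason aggressive tone mapping looks "weird and unnatural."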

Imagine telling Ansel to wait for new algorithms so his pictures could improve. It's a very different world today.

The next multiple-shot technique is noise reduction and resolution enhancement. A program called PhotoAcute can take multiple pictures of the same scene (again, a tripod is needed) and put them together to reduce noise and increase resolution. Elementary statistics tells you that random noise ought to drop as the square root of the number of images, so 4 shots put together give you half the noise. That is just simple averaging, but PhotoAcute uses much more complicated algorithms to extract more resolution—up to about four times as many equivalent pixels with enough shots (typically 6 to 9). Since I am already starting with 21 megapixels, I don't need this often. I usually make prints up to 28 x 42 inches without any noise or grain from a 21 megapixel image. PhotoAcute is mainly interesting for going bigger than that.
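The square-root claim is easy to verify with a small simulation, using synthetic Gaussian noise rather than real sensor data:

```python
# Multi-shot noise reduction by simple averaging: random noise falls as
# the square root of the number of frames, so 4 frames halve it.
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0   # the "real" brightness of one pixel
NOISE = 10.0         # standard deviation of per-shot sensor noise

def noisy_frame():
    return TRUE_VALUE + random.gauss(0, NOISE)

def averaged(n_frames):
    return statistics.mean(noisy_frame() for _ in range(n_frames))

# Compare the spread of single shots against 4-frame averages.
singles = [noisy_frame() for _ in range(2000)]
fours = [averaged(4) for _ in range(2000)]
print(statistics.stdev(singles))  # ~10
print(statistics.stdev(fours))    # ~5: half the noise from 4 shots
```

Averaging only buys noise reduction; the resolution gains PhotoAcute claims come from the sub-pixel shifts between handheld-style frames, which is a much harder problem.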

Yet another multiple-shot approach involves focus. If one part of the image is in focus, other parts may not be. The range of distance which is in focus is called the depth of field. It is often distressing to find that your depth of field is smaller than you would like. One approach is to "stop down" the lens, going to f/16 or f/22, but this hurts the resolution of the photo (due to diffraction, but that is another story...). Shallow depth of field is a particular problem for telephoto shots, or for extreme close-up or macro shots, where even f/22 is not enough. Once again there is a multiple-shot technique: you take multiple photos at different focal distances, and software called Helicon Focus (written by a guy in Ukraine) puts them together semi-automatically into one shot that is in focus everywhere.
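The core idea of focus stacking can be shown in miniature: for each pixel, keep the value from whichever frame is locally sharpest there. This toy works on one-dimensional "images" and uses a crude contrast measure; Helicon Focus is far more sophisticated, but the selection principle is the same:

```python
# Toy focus stack: per-pixel, pick the frame with the highest local
# contrast (absolute second difference), i.e. the frame in focus there.

def sharpness(row, i):
    """Local contrast at position i of a 1-D image."""
    left = row[max(i - 1, 0)]
    right = row[min(i + 1, len(row) - 1)]
    return abs(left - 2 * row[i] + right)

def focus_stack(frames):
    """frames: 1-D images shot at different focal distances.
    Returns one merged row, sharpest source winning at each pixel."""
    width = len(frames[0])
    return [max(frames, key=lambda f: sharpness(f, i))[i]
            for i in range(width)]

# Frame a is sharp on the left (hard edges) and blurred flat on the right;
# frame b is the reverse. The stack recovers both sharp regions.
a = [0, 100, 0, 50, 50, 50]
b = [30, 30, 30, 0, 100, 0]
print(focus_stack([a, b]))  # [0, 100, 0, 0, 100, 0]
```

The "semi-automatic" part of the real tool is handling the cases this toy ignores: frames that shift and change magnification as the lens refocuses, and regions where no frame is decisively sharp.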

I use this technique a lot for microscope and macro shots, but sometimes do it for landscapes. None of the pictures from Iceland or Greenland use focus stacking, so to illustrate the technique I enclosed a macro shot of some blueberries, and a microscope shot of veins in a leaf. These pictures can often combine more than a hundred individual shots. Without focus stacking only a tiny amount of each picture would be in focus. I cobbled together a software controlled robotic stand to do this, and similarly automated my microscope, writing software to control them—this is beyond the fringe of what is commercially available.

The final multiple-shot technique is panoramic photography. A conventional camera (analog or digital) has a ratio of width to height (called the aspect ratio) of 3:2, which is equivalent to 1.5:1. Old-style TV screens and computer screens are 4:3 (or 1.33:1), the newer HDTV screens are 16:9 (a little less than 2:1), and the widest screens in a movie theater are typically a bit wider, at 2.35:1. Panoramic photography begins when you get to an aspect ratio of 3:1—the picture is three times wider than it is tall—and goes up from there. Psychologically, pictures just seem to look different when the aspect ratio is 3:1 or wider, because it is harder to take in the picture in one glance. That might sound like it is unfriendly to the viewer, but in fact this is exactly what reality does—our field of view is very wide, and our eyes dart around it to take it in. One of the reasons wide panoramas work is that we have two eyes set side by side—if our eyes were over/under instead, our field of view would be quite different.

One way to get a panoramic picture is to take a regular picture and then cut off (crop) the top and bottom. You can do that, but you wind up losing resolution—you can't make a very big picture that way. In the old analog days people made cameras with special lenses that exposed very wide pieces of film. I have lots of these. The smallest is a Hasselblad XPan that is basically a stretch-limo version of a 35mm rangefinder (like a Leica). It shot super wide pictures on 35mm film (21 wide shots on what would normally be a 36-exposure roll). My largest is a special Korona view camera (the old-fashioned kind with leather bellows) that takes special sheet film that is 8 inches x 20 inches. These days I tend to take panoramas digitally by stitching—the digital technology is just so superior to film.

To do this, you generally start with a tripod with a special head which lets you index the photos accurately. You then take a pile of pictures, each with the camera pointed in a slightly different direction. The pictures are conventional 35mm aspect ratio, but are set vertically (i.e. "portrait" rather than "landscape") because that puts the most pixels in the vertical direction. Once again software comes into play—I use a high-end stitching program called RealViz Stitcher Pro (another French software product), but there are many others. Stitching software takes the multiple pictures and puts them together into one super large image. I frequently combine 10 or 12 pictures to create a single image with 200 to 400 megapixels. The process takes a long time—hours for a very large shot—because it is very computationally intensive.
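A back-of-the-envelope estimate shows roughly how frame count turns into output size. The 20% overlap between neighboring frames is my assumption for illustration, and real panoramas (multi-row stitches, cropped edges) vary a lot:

```python
# Rough size of a single-row stitched panorama: each frame past the
# first contributes only its non-overlapping portion. The 21.1 MP frame
# size matches the camera above; the 20% overlap is an assumption.

def panorama_megapixels(n_frames, frame_mp=21.1, overlap=0.20):
    return frame_mp * (1 + (n_frames - 1) * (1 - overlap))

print(round(panorama_megapixels(10)))  # ~170 megapixels from 10 frames
print(round(panorama_megapixels(12)))  # ~210 megapixels from 12 frames
```

The hours of computation come from what this arithmetic hides: the stitcher must find matching features in every overlap, solve for all the camera directions at once, warp each frame, and blend the seams.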

Like the other techniques, stitched panoramas have an issue with movement—if the subject moves while you are taking the multiple photos, you have a problem. In the Falkland Islands last year an enterprising penguin moved fast enough between shots that he appears twice in one picture. You also need to be careful about wide-angle distortion. Most of my panoramas are shot with a lens that is slightly telephoto—usually 70mm focal length—but the total effect of putting them together creates the same sort of wide-angle effect you would get from a super wide lens. In other shots I use a wide-angle lens down to 24mm, creating a super-wide panorama that is the equivalent of a fisheye lens or better, taking in more than 180 degrees of the scene. The wider the field of view, the more distortion—this is just a reality of geometry. Mathematically this is similar to the problem of map projection—just as a Mercator projection distorts some areas more than others when you go from a round earth to a flat map, very wide panoramic shots get some curvilinear distortion. And, as with maps, there are different ways to deal with it. Sometimes this looks cool and is intrinsic to what the picture is about; sometimes it's a distraction.

I was amused to see a recent article in the NY Times about a robotic panorama head that is due on the market in a year or so which helps automate the tedious process of taking all of those pictures. That is one approach. At the other extreme, I have gotten pretty good at shooting panoramas freehand. You have to be very careful to keep the camera very level, and to have enough overlap between the shots that the software can figure out how to put the images together. Several of the panoramas in this email were shot this way, including one shot freehand while standing on a moving boat.

Finally, just to make this extra complicated, you can do all of these things simultaneously. A number of the shots here are HDR panoramas—in one case I used 12 different directions, taking 7 shots in each direction. So, 84 separate images were combined to make one final picture. HDR is pretty important to panoramas because across a very wide field of view it is quite likely that you'll get a high dynamic range. If I wanted to, I could use Helicon Focus and shoot 10 different focal distances for each of these and make it even more extreme.

As you can imagine, this gets very time consuming. That's why I am sending these shots out weeks after I got back from the trip. I also have to confess to having some help—Ryan Smith spent a lot of time wrangling the software on my behalf because my day job keeps getting in the way of my photography.

Once you have a panoramic picture, the challenge becomes how to print it. I've been planning for months to make a set of panoramic prints, but there are some real issues. I bought a great new HP photo printer that uses 44-inch-wide paper. The paper comes in long rolls, and one of the main reasons I bought the printer was to make panoramic prints that are 44" high by up to 20 feet long. Logistically it is hard to make, dry and mount prints that size. It took a while to find a framer that could custom make frames that big—20 feet just isn't a standard size. Then the walls needed to be reinforced to mount the frames. It is very easy to damage a large print in mounting it, so we decided to make the prints "just in time"—we would get the frame made and mounted, and then, when everything was ready, make the print at the last minute, dry it overnight, and have an expert art conservator hinge it into the frame. We got all ready, and Ryan started to make the first print—16 feet long.

It didn't work. We discovered that the HP device driver for the printer has a serious bug for prints more than 108 inches long. It's my karmic burden—live by software, die by software. That was three months ago. While we struggle with workarounds, and talk fruitlessly with HP tech support, gigantic empty frames grace the hallways of my company—it's all rather comical. Passersby must think that they are some sort of minimalist art.

Anyway, this is the last photographic installment for a while — my day job and other responsibilities beckon and it will be a while before I have a chance to take more photos.


Photos copyright © 2008 by Nathan Myhrvold. All rights reserved.

This Is Your Brain On The Internet
By Nicholas Carr

Reality Club: W. Daniel Hillis, Kevin Kelly, Larry Sanger, George Dyson, Jaron Lanier, Douglas Rushkoff, NEW W. Daniel Hillis

We evolved in a world where our survival depended on an intimate knowledge of our surroundings. This is still true, but our surroundings have grown. We are now trying to comprehend the global village with minds that were designed to handle a patch of savanna and a close circle of friends. Our problem is not so much that we are stupider, but rather that the world is demanding that we become smarter. Forced to be broad, we sacrifice depth. We skim, we summarize, we skip the fine print and, all too often, we miss the fine point. We know we are drowning, but we do what we can to stay afloat. —W. Daniel Hillis

[The July/August issue of Atlantic Monthly features a cover story by Nicholas Carr: "Is Google Making Us Stupid? What the Internet Is Doing to Our Brains". Carr is the author of the recently published The Big Switch: Rewiring the World, from Edison to Google, and a blogger at Rough Type. He is also an Edge contributor.

Danny Hillis disagrees with his argument. Here is Hillis's comment, part of an interesting Edge Reality Club discussion, cross-referenced with a discussion on the Encyclopedia Britannica website. —JB]

NEW W. DANIEL HILLIS: For those of you who have not been following the action on the Encyclopedia Britannica blog, here is the latest: Clay Shirky dissed Tolstoy, and Nicholas Carr zinged back with a smackdown about Clay's "highbrow form of philistinism". Ouch.

Clay Shirky is not just questioning Tolstoy; he is questioning the culture of literature. He asks: What's so great about War and Peace? Maybe it does have themes of power, fate, and personal responsibility, but is it really any more enriching than, say, a season of The Wire? And Shirky is not alone in his blasphemy. Back on Edge, George Dyson speculates, "Perhaps books will end up back where they started, locked away in monasteries (or the depths of Google) and read by a select few". For a readership of bibliophiles, this is treason. ...

Britannica Forum:

This Is Your Brain; This Is Your Brain on the Internet

Clay Shirky, Nicholas Carr, Larry Sanger, Matthew Battles, Clay Shirky

Why Abundance is Good: A Reply to Nick Carr
Clay Shirky

But the anxiety at the heart of "Is Google Making Us Stupid?" doesn't actually seem to be about thinking, or even reading, but culture. ...

... As Carr notes, "we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice." Well, yes. But because the return of reading has not brought about the return of the cultural icons we'd been emptily praising all these years, the enormity of the historical shift away from literary culture is now becoming clear.

And this, I think, is the real anxiety behind the essay: having lost its actual centrality some time ago, the literary world is now losing its normative hold on culture as well. The threat isn't that people will stop reading War and Peace. That day is long since past. The threat is that people will stop genuflecting to the idea of reading War and Peace. [...MORE]

Why Skepticism is Good: My Reply to Clay Shirky
Nicholas Carr

It's telling that Shirky uses gauzily religious terms to describe the Internet—"our garden of ethereal delights"—as what he's expressing here is not reason but faith. I hope he's right, but I think that skepticism is always the proper response to techno-utopianism. [...MORE]

A Defense of Tolstoy & the Individual Thinker: A Reply to Clay Shirky
Larry Sanger

In Clay's view, it seems, the new speed and deeply social nature of intellectual discourse means that, soon, the only relevant discourse will occur in blog- or Twitter-sized chunks. Is this the hip "upstart literature," proudly "diverse, contemporary, and vulgar," that is now "the new high culture"?

If so, God help us. [...MORE]

Yes, the Internet Will Change Us (But We Can Handle It)
Matthew Battles

Nick Carr's Atlantic essay has also prompted a discussion over at publisher John Brockman's blog The Edge. Brockman's authors include computer science visionaries, evolutionary biologists, and cognitive scientists, and Carr's concerns about the cognitive effects of the Internet are very much their cup of tea. [...MORE]

Why Abundance Should Breed Optimism: A Second Reply to Nick Carr
Clay Shirky

...Carr calls me an optimist, which is true. Here's why: Every past technology I know of that has increased the number of producers and consumers of written material, from the alphabet and papyrus to the telegraph and the paperback, has been good for humanity.

Carr argues that our period of abundance is different. The worries are numerous: the increased volume and availability of writing is leading not to wisdom but to triviality and distractions. The young are abandoning the classical in favor of the vulgar. Venerable institutions are under possibly crushing new pressures. These complaints are not just familiar, they are accurate. However, they also have an inevitable feel about them, having been made at the beginning of every such expansion, from the printing press to the comic book to the act of writing itself. ... [...MORE]


W. DANIEL HILLIS: For those of you who have not been following the action on the Encyclopedia Britannica blog, here is the latest: Clay Shirky dissed Tolstoy, and Nicholas Carr zinged back with a smackdown about Clay's "highbrow form of philistinism". Ouch.

Clay Shirky is not just questioning Tolstoy; he is questioning the culture of literature. He asks: What's so great about War and Peace? Maybe it does have themes of power, fate, and personal responsibility, but is it really any more enriching than, say, a season of The Wire? And Shirky is not alone in his blasphemy. Back on Edge, George Dyson speculates, "Perhaps books will end up back where they started, locked away in monasteries (or the depths of Google) and read by a select few". For a readership of bibliophiles, this is treason.

I will confess that I too am a booklover, and I write these words surrounded by the comfort of well-filled shelves. I love the smell of musty bindings, the texture of soft paper, and the crispness of a well-turned page. I like what books have to say and how they say it. My thoughts follow naturally in the patterns of literature, my mental stride falling in easily with its pace. Unlike Clay Shirky, I actually liked War and Peace.

Yet, as much as I love books, I understand that my bibliophilia is not a virtue, but an indulgence. I associate books with insight and knowledge, but my respect is for ideas, not format. Shirky is right to call out the cargo-cult of literature. Because for many years books were the primary means by which important ideas were conveyed to us, we came to associate them with thoughtful insight. This association is out of date. As much as I liked War and Peace, I probably got more out of The Wire. And why should that be surprising? More human effort can be put into a television series than a novel and more time is spent consuming it. If both are executed to their highest standards, with equal care, skill and insight, we might well expect less from the book.

Even if literature is losing its primacy in storytelling, we might still hold out hope that the book is the best way to convey a complex idea. What if the format of a book is specially matched to the way we think? I doubt it. It may be sometimes true that the length and pace of a book are perfectly fit to certain arguments, but when that happens, it is a happy coincidence. There is nothing about the amount of content that fits into a hand-held paper presentation that has any special importance to the human mind. Nor is it easy to argue that printed squiggles have some privileged channel to thought. Reading is an unnatural act, a trick that we have learned to move ideas across space and time.

A better argument might be that we have easier control over the pace and order of a book than of a video. That is true, but it points to the advantages of other media, such as linked interactive text, over books. Straight lines of thought and presentation can be a useful tool, but they are a constraint, not a unique advantage. Carr complains that his mind "now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles." This sounds pretty good to me. Seekers of knowledge will naturally gravitate toward the richest and most useful sources. They are gravitating away from books.

I put my love for books in the same category as my fondness for mechanical clocks. There is an art to them, and I am enriched for knowing what they hold inside. I am awed at what can be accomplished within such constraints. Yet, I would not argue that they have any special entitlement to greatness. Books, like mechanical clocks, were created to serve a purpose. Whether it is to inform, to entertain or to enlighten, the same purpose can often be served by better means.

Romance novels may have a future, but we are witnessing the sunset of the tome. I believe in George Dyson's vision of a tomorrow where books of knowledge are oddities, relegated to the obscure depths of monasteries and search engines. It makes me a little sad and nostalgic. But my sadness is tempered by the sure understanding that this is neither the first nor the last change in format for our accumulation of wisdom. The book is a fine and admirable device, but I do not doubt that clay tablets and scrolls of papyrus had charms of their own.


July 19, 2008


Joe Nocera

"In and of itself," said Elon Musk, "a $100,000 sports car is not going to change the world."

Mr. Musk is a 37-year-old technology entrepreneur who became extremely wealthy when eBay bought PayPal, which he had co-founded. A lanky South African, he is using that wealth to finance two quixotic efforts. The first is SpaceX, a company he hopes will one day make it possible to colonize Mars. (I kid you not.)

The second is Tesla Motors, which was started in 2003 — Mr. Musk became its chief backer and board chairman in early 2004. After raising $150 million and going through four years of technological and internal struggles, the company has begun manufacturing the first-ever all-electric sports car, the Tesla Roadster. Its base price is $109,000. And if Mr. Musk is willing to concede that the Roadster, by itself, isn't a world-changer, he fervently believes that the technology Tesla has created — technology that gives the car a range of 227 miles per battery charge, and enough acceleration to go from zero to 60 in under 4 seconds — will indeed change the world. The age of the electric car, he is convinced, has dawned. ...



July 15, 2008


Olivia Judson

Charles Darwin was a giant. He did not merely write "On the Origin of Species" — one of the most important books ever written by anyone — in which he describes how evolution by natural selection works, and what some of its consequences and implications are. He also wrote — and this list is not exhaustive — a treatise on the formation of coral reefs that is still thought to be correct; a definitive monograph on barnacles, both extinct and extant; a book about how earthworms make soil; a now-classic text on carnivorous plants (the ones, like Venus fly-traps, that ensnare and digest insects); a book about the strange ways that orchids get themselves fertilized; and an account of the five years he spent aboard the ship HMS Beagle, which has become a classic of travel writing.

As if that wasn't enough, he proposed sexual selection — the idea that decorations and ornaments, like peacocks' tails, evolve because females in many species prefer to mate with the most beautiful males. Sexual selection has since become a major field of research in its own right.

In short, Darwin did more in one lifetime than most of us could hope to accomplish in two. But his giantism has had an odd and problematic consequence. It's a tendency for everyone to refer back to him. "Why Darwin was wrong about X"; "Was Darwin wrong about Y?"; "What Darwin didn't know about Z" — these are common headlines in newspapers and magazines, in both the biological and the general literature. Then there are the words: Darwinism (sometimes used with the prefix "neo"), Darwinist (ditto), Darwinian.

Why is this a problem? Because it's all grossly misleading. It suggests that Darwin was the beginning and the end, the alpha and omega, of evolutionary biology, and that the subject hasn't changed much in the 149 years since the publication of the "Origin."

He wasn't, and it has. Although several of his ideas — natural and sexual selection among them — remain cornerstones of modern evolutionary biology, the field as a whole has been transformed. If we were to go back in a time machine and fetch him to the present day, he'd find much of evolutionary biology unintelligible — at least until he'd had time to study genetics, statistics and computer science. ...



July 18, 2008

Search and Destroy

The ability to search through massive amounts of data, Google-style, is having far-reaching effects. And, according to Wired Magazine's Chris Anderson, one of the most significant casualties may be the venerable scientific method. He explains why in the age of the petabyte, scientific testing is forever changed and why the numbers now speak for themselves.

[Transcripts will be available 7.21.]


Further Reading: The End Of Theory: Will the Data Deluge Make the Scientific Method Obsolete? By Chris Anderson [6.30.08]


July 17, 2008

By Roger Highfield, Science Editor

The idea that people have an innate mathematical ability has been questioned by a study of an Amazonian tribe that has no sense of number.


The ability of tribal adults of the Pirahã to conceptualise numbers is no better than that of infants or even some animals, and their language, with only 300 speakers, has no word even to express the concept of "one" or any other specific number.

The team, led by Massachusetts Institute of Technology professor of brain and cognitive sciences Edward Gibson, found that members of the Pirahã tribe in remote northwestern Brazil use language to express relative quantities such as "some" and "more," but not precise numbers.

It is often assumed that counting is an innate part of human cognition, said Prof Gibson, "but here is a group that does not count. They could learn, but it's not useful in their culture, so they've never picked it up."

The study, which appeared in the journal Cognition, offers evidence that number words are a concept invented by human cultures as they are needed, and not an inherent part of language, said Prof Gibson, who did the study with Michael Frank, Dr Evelina Fedorenko, and Prof Daniel Everett, of Illinois State University.

The work builds on a study published in 2005 by Prof Everett, who lived with the tribe for much of his life between 1977 and 2007, which found that the Pirahã had words to express the quantities "one," "two," and "many." ...


See: Recursion And Human Thought Why the Pirahã Don't Have Numbers, A Talk With Daniel L. Everett [6.14.07]


Nr 1, 2008

Roger Scruton

Faced with the spectacle of the cruelties perpetrated in the name of faith, Voltaire famously cried 'Écrasez l'infâme!'. Scores of enlightened thinkers followed him, declaring organised religion to be the enemy of mankind, the force that divides the believer from the infidel and which thereby both excites and authorises murder. Richard Dawkins is the most influential living example of this tradition, and his message, echoed by Dan Dennett, Sam Harris and Christopher Hitchens, sounds as loud and strident in the media today as the message of Luther in the reformed churches of Germany. The violence of the diatribes uttered by these evangelical atheists is indeed remarkable. After all, the Enlightenment happened three centuries ago; the arguments of Hume, Kant and Voltaire have been absorbed by every educated person. What more is to be said? And if you must say it, why say it so stridently? Surely, those who oppose religion in the name of gentleness have a duty to be gentle, even with – especially with – their foes?...


Paperback - US $10.17, 336 pp Harper Perennial   Hardcover - UK £9.09 352 pp Free Press, UK

What Are You Optimistic About?: Today's Leading Thinkers on Why Things Are Good and Getting Better Edited by John Brockman Introduction by Daniel C. Dennett

"The optimistic visions seem not just wonderful but plausible." Wall Street Journal "Persuasively upbeat." O, The Oprah Magazine "Our greatest minds provide nutshell insights on how science will help forge a better world ahead." Seed "Uplifting...an enthralling book." The Mail on Sunday

Paperback - US $11.16, 336 pp Harper Perennial   Paperback - UK £6.99, 352 pp Free Press, UK

What Is Your Dangerous Idea?: Today's Leading Thinkers on the Unthinkable Edited by John Brockman Introduction by Steven Pinker Afterword by Richard Dawkins

"Danger – brilliant minds at work...A brilliant book: exhilarating, hilarious, and chilling." The Evening Standard (London) "A selection of the most explosive ideas of our age." Sunday Herald "Provocative" The Independent "Challenging notions put forward by some of the world's sharpest minds" Sunday Times "A titillating compilation" The Guardian "Reads like an intriguing dinner party conversation among great minds in science" Discover

Paperback - US $11.16, 272 pp Harper Perennial   Paperback - UK £5.39 288 pp Pocket Books

What We Believe but Cannot Prove: Today's Leading Thinkers on Science in the Age of Certainty Edited by John Brockman Introduction by Ian McEwan

"An unprecedented roster of brilliant minds, the sum of which is nothing short of an oracle — a book to be dog-eared and debated." Seed "Scientific pipedreams at their very best." The Guardian "Makes for some astounding reading." Boston Globe "Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4 "Intellectual and creative magnificence" The Skeptical Inquirer

Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.

John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2008 By Edge Foundation, Inc
All Rights Reserved.