
Edge 292—July 1, 2009
(9,550 words)

A Conversation with Gavin Schmidt



We have decided, as a scientific endeavor, to extrapolate as much as we can from our knowledge of the individual processes that we can measure: evaporation from the ocean, the formation of a cloud, rainfall coming from a cloud, changes in wind patterns as a function of the pressure field, changes in the jet stream. What we have tried to do is encapsulate those small-scale processes, put them together, and see if we can predict the emerging properties of that fundamental complex system.

A Conversation with Gavin Schmidt



There is a simple way to produce a perfect model of our climate that will predict the weather with 100% accuracy. First, start with a universe that is exactly like ours; then wait 13 billion years.

But if you want something useful right now, if you want to construct a means of taking the knowledge that we have and use it to predict future climate, you build computer simulations. Your models are messy, complicated, in constant need of fine tuning, exacting and inexact at the same time. You're using the past to predict the future, extrapolating the very complicated from the very simple, and relying on an ever-changing data stream to inform the outcome.

Climatologist Gavin Schmidt explains:

Societies make decisions based on certain expectations for their climates. How far away from the shore should we build? How do we design our agriculture? What kind of air conditioning system do we put in a building? All of these decisions depend on the expectations we have for the summer temperature, or the intensity of the ocean surge during a Northeasterly storm. Our expectations in such matters have been built over hundreds of years of experience. But things have changed.

We have decided, as a scientific endeavor, to extrapolate as much as we can from our knowledge of the individual processes that we can measure: evaporation from the ocean, the formation of a cloud, rainfall coming from a cloud, changes in wind patterns as a function of the pressure field, changes in the jet stream. What we have tried to do is encapsulate those small-scale processes, put them together, and see if we can predict the emerging properties of that fundamental complex system.

— Russell Weinberger

GAVIN SCHMIDT is a climatologist with NASA's Goddard Institute for Space Studies in New York, where he models past, present, and future climate. His essay "Why Hasn't Specialization Led To The Balkanization Of Science?" is included in What's Next? Dispatches on the Future of Science, edited by Max Brockman.

Gavin Schmidt's Edge Bio Page



[GAVIN SCHMIDT:] The key environmental question facing us now (and for at least the next century) is this: to what extent are the changes we are making to the atmosphere, to the oceans, and to the composition of the air going to impact sea levels, temperature, rainfall, and hydrological resources?

Societies make decisions based on certain expectations for their climates. How far away from the shore should we build? How do we design our agriculture? What kind of air conditioning system do we put in a building? All of these decisions depend on the expectations we have for the summer temperature, or the intensity of the ocean surge during a Northeasterly storm. Our expectations in such matters have been built over hundreds of years of experience. But things have changed.

How do we come up with new expectations when our old models are no longer valid? We need new information on which to base decisions being made now, decisions that will affect how we deal with the climate in 10, 20, 30, 50 years' time, because we are now building infrastructure with these kinds of timeframes in mind.

We have to ask questions about what expectations we may have for the future based on the physics that we presently know, on processes we can measure, and on our ability to understand the current climate. Among the questions we need to ask are: Why are there seasonal cycles? Why are there storms? What controls the frequency of such events over a winter, or over a longer period? What controls the frequency of, say, El Niño events in the tropical Pacific, which affect rainfall in California, in Peru, or in Indonesia?

We have decided, as a scientific endeavor, to extrapolate as much as we can from our knowledge of the individual processes that we can measure: evaporation from the ocean, the formation of a cloud, rainfall coming from a cloud, changes in wind patterns as a function of the pressure field, changes in the jet stream. What we have tried to do is encapsulate those small-scale processes, put them together, and see if we can predict the emerging properties of that fundamental complex system.

This is very ambitious because there is a lot of complexity, a lot of structure in the climate that is not a priori predictable from any small-scale process. The wet and dry seasons in the tropics come about because of the combination of the seasonal cycle of the Earth's orbit around the Sun, changes in evaporation, changes in moist convection (the process that creates the big cumulus towers and thunderstorms), and water vapor transports driven by that moist convection and by the Hadley Cell that gets set up as a function of all those things. It is a very complex environment.

We have been quite successful at building these models on the basis of small-scale processes to produce large-scale simulation of the emerging properties of the climate system. We understand why we have a seasonal cycle; we understand why we have storms in the mid-latitudes; we understand what controls the ebb and flow of the seasonal sea ice distribution in the Arctic. We have good estimates in this regard.

But we don't have perfect estimates. Instead, there are about twenty groups around the world that have made their own judgments about such processes: which ones are important, which ones are not. Each group has produced a separate digital world, a digital climate, and these worlds differ both as a whole and in terms of their sensitivity. If I make the same change in each of the models, each one reacts in a slightly different manner.

In some respects they all act in very similar ways — for instance, when you put in more carbon dioxide, which is a greenhouse gas, it increases the opacity of the atmosphere and it warms up the surface. That is a universal feature of these models and it is universal because it is based on very fundamental physics that you don't need a climate model to work out. But when it comes to aspects that are slightly more relevant — I mean, nobody lives in the global mean atmosphere, nobody makes the global mean temperature an important part of his expectations — things change. When it comes to something like rainfall in the American Southwest or rainfall in the Sahel or the monsoon system in India, it turns out that the assumptions we make in building the models (the slightly different decisions about what is important and what isn't important) have an important effect on the sensitivity of very complex elements of the climate.

Some models strongly suggest that the American Southwest will dry in a warming world; some models suggest that the Sahel will dry in a warming world. But other models suggest the opposite. Let's imagine that the models have an equal pedigree in terms of the scientists who have worked on them and in terms of the papers that have been published — it's not quite the case but it's a good working assumption. With these two models, you have two estimates — one says the area will get wetter and one says it will get drier. What do you do? Is there anything you can say at all? It is a really difficult question.

There are a couple of other issues that come up. It turns out that the average of these twenty models is a better model than any one of the twenty models. It better predicts the seasonal cycle of rainfall; it better predicts surface air temperatures; it better predicts cloudiness. This is odd because these aren't random models. You can't rely on the central limit theorem to demonstrate that their average must be the best predictor, because these are not twenty random samples of all possible climate models.

Rather, they have been tuned and calibrated and worked on for many years in trying to get the right answer. In the same way that averaging a set of wrong answers is not guaranteed to be more accurate than doing the arithmetic correctly, it is not obvious that the average of the climate models should be better than any individual climate model. For example, if I wanted to know what 2+2 was and picked a set of random numbers, averaging those random numbers would be unlikely to give me four.

Yet in the case of climate models, this is kind of what you get. You take all the climate models, which give you numbers between three and five, and you get a result that is very close to four. Obviously, it's not pure mathematics. It's physics, it's approximations, it involves empirical estimates. But it's very odd that the average of all the models is better than any one individual model.
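The puzzle can be illustrated with a toy calculation (everything here is invented for illustration, not a real climate model): give twenty fake "models" the same underlying seasonal cycle but different systematic biases, phase errors, and noise. Because the errors differ from model to model, they partially cancel in the average, and the multi-model mean outperforms a typical individual model:

```python
import math
import random

random.seed(0)

# Hypothetical "truth": a seasonal rainfall cycle, 12 monthly values.
truth = [50 + 30 * math.sin(2 * math.pi * m / 12) for m in range(12)]

def make_model(bias, phase_err, noise):
    # Each fake model reproduces the cycle with its own systematic bias,
    # phase error, and noise -- errors that differ from model to model.
    return [50 + bias + 30 * math.sin(2 * math.pi * (m + phase_err) / 12)
            + random.gauss(0, noise) for m in range(12)]

models = [make_model(random.uniform(-8, 8), random.uniform(-1, 1), 3)
          for _ in range(20)]

def rmse(pred):
    # Root-mean-square error against the "true" cycle.
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

# Multi-model mean: average the 20 models month by month.
ensemble_mean = [sum(vals) / len(vals) for vals in zip(*models)]

individual = [rmse(m) for m in models]
print(f"typical single-model RMSE: {sum(individual) / len(individual):.1f}")
print(f"multi-model-mean RMSE:     {rmse(ensemble_mean):.1f}")
```

This only works here because the invented biases are independent and centered near zero; as the interview goes on to note, nothing guarantees that the errors of real models cancel in this convenient way.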

Does that mean that the average of all the models' predictions is better than the prediction of any individual model? This doesn't follow, because it may be that all of the models contain errors, which for today's climate average out when you bring them together. Who is to say what controls their sensitivity since we know that in each model the sensitivity is being controlled by slightly different elements?

You need to have some kind of evaluation. I don't like to use the word validation because it implies a binary true-false setup. But you need tests of the model's sensitivity, compared against something in the real world, that can confirm that the model has the right sensitivity. This is very difficult.

For instance, let's imagine that the models I want to pay attention to are the ones that get the best seasonal cycle of rainfall. I rank the models, give them a score, and take the top ten models for that metric. Then somebody else says, no, I think it's more important that they get the annual mean right, or the inter-annual variability (the variability from one year to another). Well, I can do that same ranking. It turns out that if I do the ranking for three different metrics — and there is nothing that says one metric is better than another — I end up with three completely different rankings.

Not only are the rankings uncorrelated to one another, but depending on the metric, the projections, the estimates that you get going into the future turn out to be uncorrelated to the score, as well. I get the same spread if I take the top ten models over here as I had for the whole set. There will still be some positive ones and there will still be some negative ones when, for instance, I look at projected rainfall in the American Southwest.
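A minimal sketch of the ranking problem (all scores invented; real evaluations use dozens of models and many observational metrics): score ten hypothetical models on two different skill metrics, rank them, and measure how similar the two orderings are with a Spearman rank correlation.

```python
# Invented skill scores (higher is better) for ten hypothetical models
# under two different metrics.
seasonal_score = {f"model{i}": s for i, s in enumerate(
    [0.71, 0.93, 0.55, 0.88, 0.62, 0.79, 0.91, 0.48, 0.83, 0.66])}
interannual_score = {f"model{i}": s for i, s in enumerate(
    [0.52, 0.61, 0.89, 0.47, 0.94, 0.58, 0.63, 0.90, 0.50, 0.85])}

def ranking(scores):
    # Order model names from best score to worst.
    return sorted(scores, key=scores.get, reverse=True)

def spearman(rank_a, rank_b):
    # Spearman rank correlation between two orderings of the same items:
    # +1 means identical rankings, 0 means unrelated, -1 means reversed.
    n = len(rank_a)
    pos_b = {m: i for i, m in enumerate(rank_b)}
    d2 = sum((i - pos_b[m]) ** 2 for i, m in enumerate(rank_a))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

r1, r2 = ranking(seasonal_score), ranking(interannual_score)
print("top 3 by seasonal cycle:  ", r1[:3])
print("top 3 by interannual var.:", r2[:3])
print("rank correlation:", round(spearman(r1, r2), 2))
```

With these invented numbers the two "top ten" lists barely resemble each other, which is the situation the interview describes: picking the "best" models depends entirely on which metric you privilege.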

This is a real problem. How do you deal with these models in an intelligent way? Which information from the observational record, either over the 20th century or longer, can you use to test whether the models have any skill in their predictions? This is what I spent all of my time on: trying to find ways to constrain the models to improve the Bayesian subjective probability that they are telling you anything of use. It's not that we have been working in a complete vacuum for the last 30 years — these models are relatively mature and people have been thinking about these issues ever since the beginning.

There are lots of examples in the current climate with which you can demonstrate that the models have skill. Take the response to the eruption of Mount Pinatubo in the Philippines in 1991. The volcano released a huge amount of sulfur dioxide and sulfate aerosols into the atmosphere, which spread around the stratosphere and stayed for about two to three years. These aerosols are reflective; they are white, so sunlight gets reflected back to space. They acted as a kind of sunshade over the planet that caused it to cool. Our group (though this was before my time) did the calculations with their model before the cooling happened, and predicted that the cooling would reach a maximum of about half a degree in about two years' time. Lo and behold, that is what happened.

If you go back, we have lots of information about what happened over that period: what happened to radiation at the top of the atmosphere, what happened to the winds as a function of the changing temperature gradients in the lower stratosphere, what happened to water vapor. We can see whether the models got the right answer for the right reasons, and for the most part they did. So that was a genuine prediction, made in real time, that could be tested within a short period.

The problem with climate predictions and projections going out to 2030 and 2050 is that we don't anticipate that they can be tested the way you can test a weather forecast. It takes about 20 years to evaluate them because there is so much unforced variability in the system — the chaotic component of the climate system — that is not predictable beyond two weeks, even theoretically. That part we can't get a handle on. We can only assess the climate problem once enough time has passed for the chaotic noise to wash out, so that we can see a forced signal that is significantly larger than the inter-annual or inter-decadal variability. This is a real problem, because society wants answers from us and won't wait 20 years.
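Why 20 years and not five? A toy statistical sketch makes the point (the trend and noise magnitudes are assumed for illustration, not taken from the interview): bury a steady 0.02 °C/yr trend in year-to-year noise of 0.15 °C and fit a straight line to records of different lengths. Short records frequently show the wrong sign of the trend.

```python
import random

random.seed(1)

TREND = 0.02   # assumed forced warming, degrees C per year
NOISE = 0.15   # assumed interannual variability, degrees C

def slope(years, temps):
    # Ordinary least-squares trend estimate.
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    return (sum((y - my) * (t - mt) for y, t in zip(years, temps))
            / sum((y - my) ** 2 for y in years))

def trend_estimates(record_len, trials=2000):
    # Fit a trend to many synthetic records of the given length.
    ests = []
    for _ in range(trials):
        ys = list(range(record_len))
        ts = [TREND * y + random.gauss(0, NOISE) for y in ys]
        ests.append(slope(ys, ts))
    return ests

for n in (5, 10, 20, 40):
    ests = trend_estimates(n)
    wrong_sign = sum(e < 0 for e in ests) / len(ests)
    print(f"{n:2d}-year record: {wrong_sign:5.1%} of trials show a cooling trend")
```

With these assumed numbers, a five-year record gives the wrong sign roughly a third of the time, while a multi-decade record almost never does; that is the "noise washing out" the interview describes.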

We did this 20 years ago and the predictions that we made then have been more or less validated, given both the imperfections we had at the time and the uncertainty in how we thought things would change in the future. So there is a track record that shows that these models are realistic.

But the questions that were being asked 20 years ago were relatively simple compared to the questions that are being asked now. The issue of climate change has become tied to many other questions, such as biosphere degradation, habitat loss, over-development, inappropriate development, energy security, etc. All of these questions are much more immediate and acute than climate change as a whole.

Yet climate change has a very strong impact on how you might deal with a lot of these issues. Society is not willing to wait for the scientists to say "come back in 20 years and we will tell you whether our predictions are any good or not." It's tricky. People want answers and we need to validate those answers but we have to do it in a way that is not the standard 'make a prediction, test it; make a prediction, test it'. The time scales are too long.

With climate and any observational science, as opposed to a laboratory science, there is history. Essentially there are 4.5 billion years of earth history, of which we know increasingly less the further back we go. But we do know a fair bit about how climate has changed in the more recent past.

We know about the ice ages 20 thousand years ago. We know about oscillations in the ocean circulation that happened around 8,000 years ago. We know that 6,000 years ago the Sahara was much wetter than it is now. We have theories for why all of these things happened, based on our knowledge of planetary dynamics — how the orbit changed in that time period, how the deglaciation (the melting of the big ice sheets from about 20,000 years ago to about 8,000 years ago) proceeded. We have clues in ice core records, in the continual uplift of where the ice sheets used to be, in drainage pathways of the paleo great lakes that existed at that time. We can see where the beaches were.

There are a lot of clues in the landscape, in the geology, in the soils, in the sea, in the mud, in the ice, in tree rings, in corals, that give us clues as to how things changed in the past. But all of these clues are very indirect. They're not real thermometers. They're not rain gauges. They're not satellites. They are telling us things that are connected to climate, but are not really the same as climate. Interpreting them has always been problematic because they are often a function of not one particular thing that is changing the climate, but maybe four or five different things, all of which are changing in different ways at different times.

Over the last five years or so, we have made an enormous effort to make the climate models we use much more complete. We used to have the atmospheric circulation and the water cycle — the key elements of the climate system. But there is a lot more going on. There is mineral dust. There are aerosols, and these aerosols interact with the clouds, with radiation, with atmospheric chemistry, and with air pollution and other kinds of emissions to produce ozone (also a greenhouse gas, but one that is generated within the atmosphere rather than being directly emitted). These aerosols and the other elements of atmospheric composition make the whole problem much more complicated and add huge numbers of extra pathways that allow temperature changes, hydrological cycle changes, or wind changes to interact with greenhouse gases and temperatures and the like.

The neat thing is that these same chemicals are also very closely related to what we measure in ice cores and in mud at the bottom of the ocean. We can measure dust records in the ice cores, which tell us how much dust got to Greenland pretty much every year for the last 100,000 years. This tells us something about where the dust was coming from, and something about the atmospheric circulation, but dust is one variable that depends on many different inputs. Because we have now included it in the climate models, we can ask questions like, "given this hypothesis for why the climate changed at that point, does the simulated dust record that we would have gotten in our numerical virtual Greenland match up to what we see in the real Greenland?" Then we can go back and look at the climate changes and the ideas we have for why the climate changed in the past and evaluate how well the models do with really large changes in climate.

This is potentially much more useful than testing the models against the seasonal cycles today because you are testing against a real climate change as opposed to a proxy for climate change. Things that happen over the seasons are very different from things that happen due to an increase in carbon dioxide over time. They are very different physically. The time scales are different. There are different kinds of feedbacks.

But if you go back into the past, you can see those same long-term feedback effects that control what will happen in the future operating over a similar time period. The reason the Sahara was green 6,000 years ago is that we were a little bit closer to the sun during Northern Hemisphere summers because of the way the orbit of the earth works.

We're on an ellipse: there is a point where we are close to the sun and a point where we are far away from it. Right now, we are closest to the sun in January; 6,000 years ago we were closest to the sun in August. August is Northern Hemisphere summer, so you got warmer summers. As you have warmer summers, the thermal equator moves to the north, and the rain bands tend to follow that thermal equator, going much further into the Sahara than they do today. The models show that same sensitivity, which gives me hope that these models are actually telling us something realistic.
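The size of the perihelion effect can be estimated from first principles (a deliberately simplified sketch: incoming solar flux scales as 1/r², with the Earth-Sun distance approximated by a cosine in the day of year; the solar constant and eccentricity values are standard round numbers, and axial tilt and latitude are ignored entirely):

```python
import math

S0 = 1361.0   # approximate solar constant at 1 AU, W/m^2
ECC = 0.0167  # approximate eccentricity of Earth's orbit

def flux(days_since_perihelion):
    # Crude Earth-Sun distance in AU over the year, then inverse-square flux.
    theta = 2 * math.pi * days_since_perihelion / 365.25
    r = 1 - ECC * math.cos(theta)
    return S0 / r ** 2

peri, aph = flux(0), flux(365.25 / 2)
print(f"flux at perihelion: {peri:.0f} W/m^2")
print(f"flux at aphelion:   {aph:.0f} W/m^2")
print(f"difference: {peri - aph:.0f} W/m^2 (~{100 * (peri / aph - 1):.1f}%)")
```

The perihelion-to-aphelion swing is several percent of the total flux, so shifting the date of perihelion from January to August moves a non-trivial amount of energy into Northern Hemisphere summer, which is consistent with the mechanism described above for the green Sahara.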

The problem is that most of the modeling groups don't do these kinds of experiments. Right now, we are in the midst of building a huge new database of model simulations that will be used for the next IPCC report. The IPCC (the Intergovernmental Panel on Climate Change) is an assessment body that looks at everything in the scientific literature and comes up with an assessment of what it all means.

The community of climate modelers knows that these reports are coming up, and a few years beforehand they put together a database of simulations to look at, so that by the time the IPCC comes along and asks, "what is going on in the world of climate modeling?", there will be lots of information on hand about these different climate models.

We are working to make sure that within these sets of simulations, people are running their models for paleo-climate simulations, so that we can do exactly what I was alluding to — ask ourselves whether we can rate the models based on how well they do in the paleo-climate, and whether this indicates that models with low sensitivity or high sensitivity are better for the future. We are going to have a metric that is much closer to what we think we need to make up some kind of assessment of how credible we think the projections will be.

There are problems that attract a different kind of thinker: really complex problems such as the human body, or an individual cell, or the climate system, or solar physics. These are subjects that don't fit the same aesthetic that special relativity fits into. They demand that you deal with multiple conflicting and intersecting elements. They are horribly non-linear right from the word 'go'; they are horribly complex. There is never going to be a theory of climate that somebody comes up with just by thinking about how the climate should work. People have tried, but they all fall pretty much at the very first hurdle. It is 'irreducibly complex'.

And you can't get away from that. You can't think that the climate is going to yield by just thinking about it. It needs to be thought about and measured and analyzed and thought about again and measured and analyzed and all of these disparate elements have to be brought in together. The reason climate models have grown up to be as complicated and as complex as they are, is not due to a lack of imagination. It's because that is how the real world is and that is the way the field has made progress. It hasn't made progress through people sitting in a room coming up with theories for how climate should work.

The field has made progress because people have made complex assumptions; they have built these things into models of varying complexity, all the way up to the GCMs (the big climate models that I was talking about earlier); and they have been tested against very complex data from satellites, from intensive observation campaigns, and from in-situ observations.

At the same time, all these models are plagued with uncertainty and have the problem of changing measurements. The data collecting is always improving and our understanding of different processes is always growing. But that doesn't make things simpler; it makes things more complex. It means you need to add another element to your model. It means you have to measure everything again and run the model again.

In order to make progress in this field, you need to be the kind of scientist who embraces complexity. This is not a science for certain kinds of physicists who are only interested in aesthetically pleasing problems. It is too complex.

There are some very big questions that we face as climate scientists and some very specific problems that we need to approach. Here is an example of an analysis that I think will be very interesting.

Every individual storm in the mid-latitudes is different — each has a different shape, a different amount of rainfall, the clouds are different, etc. The ability of the climate model to reproduce exactly the same weather pattern that we have seen over time is just about zero. That is, trying to reproduce exactly what has happened in the right time sequence, season-by-season, day-by-day, is something we will not be able to do.

We are interested in what happens to the generic storm. There is enough similarity between all low-pressure systems that if you put them together, you would come up with a generic storm. It would have a lot of information that was common to all of those storms, but not the information that was unique to any one storm that happened to be in one particular configuration.

There is a constellation of satellites called the A-Train that is run by NASA: five polar-orbiting satellites that fly in formation, with about a 20-minute separation between the first and the last. They all observe pretty much the same point on the surface of the Earth as they pass over it.

They are measuring many different things: the temperature of the atmosphere, how many aerosols there are, the amount of sea ice, the amount of chlorophyll in the ocean below, the winds at the surface, etc. Each time they pass over, they will see a little bit of a storm, and the next day they might pass over it again, as it has shifted a little bit to the west. Collectively, over the seven or so years that these satellites have been orbiting, they have seen many different storms, which have very similar characteristics.

Wouldn't it be interesting to take that satellite data and make a composite storm based on the weather models that tell us where the storms were? You would take a time-space map of weather models that tracked the storms in a given area and see when the satellites were passing over storms. Collect all of that data and you would be able to come up with a statistically average storm for that area over the whole seven-year period. This would be a tremendously valuable tool that we could use to develop our models — to compare the ACTUAL average storm with the model's prediction.
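The compositing idea can be sketched in a few lines (all numbers invented; a real analysis would involve petabytes of swath data and model-derived storm tracks, which is exactly the problem discussed next). Each overpass contributes observations at known offsets from the storm center; averaging in storm-relative coordinates yields the generic storm:

```python
from collections import defaultdict

# Invented data: each overpass maps (dx, dy) offsets from the storm
# center to an observed rain rate (mm/hr).
overpasses = [
    {(0, 0): 9.0,  (1, 0): 5.5, (0, 1): 6.0,  (-1, 0): 5.0},
    {(0, 0): 11.0, (1, 0): 6.5, (0, -1): 4.5, (-1, 0): 6.0},
    {(0, 0): 10.0, (0, 1): 5.0, (1, 1): 3.0,  (-1, 0): 5.5},
]

# Accumulate sums and sample counts per storm-relative grid cell;
# cells seen in only some overpasses are averaged over what is available.
sums, counts = defaultdict(float), defaultdict(int)
for scene in overpasses:
    for offset, rain in scene.items():
        sums[offset] += rain
        counts[offset] += 1

composite = {off: sums[off] / counts[off] for off in sums}
print("composite rain at storm center:", round(composite[(0, 0)], 2))
print("cells in composite:", len(composite))
```

The information unique to any one storm averages away, while the structure common to all of them (here, heavy rain at the center tapering outward) survives in the composite.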

You would think someone would have done this study, but nobody has. The satellite data is all there (it's only a few petabytes), all of that model information is there. Why hasn't the study been done? The reason is each individual data stream for each of those different individual instruments on the satellites is in a different place. The way that the time/space information has been collated for each of those instruments is different. There is no single portal that would allow you to filter that data without having to put the entire data set on your hard drive and sort it yourself. Of course you can't do that because it's petabytes of data and there is no hard drive that holds petabytes of data.

The same is true for the models: all of the models come in the same kind of box, but if you want the temperatures, you have to download the entire grid of temperatures for every month for the entire period. There is no intelligent filtering in situ, so, again, you have to download gigabytes and gigabytes of data in order to derive one small set of numbers.

That is the sort of problem we face. Using the satellite data to see how well the models predict an average storm, or the processes within it, is a completely sensible thing to do but it's completely impossible. I imagine it would take something like 100 man years to do it with the current data set as it is currently configured.

But this doesn't require a huge leap in technology to solve. It is really just a processing problem. It's the kind of thing that falls between the cracks: it is not an interesting enough research problem for a computer scientist to want to work on, but it's too large a task for a climate scientist to devote time to. So you have a big grey area between the cool research on the networks/computer science/machine learning side and what is needed to approach very interesting scientific questions, a gap far too large for any one individual or group of scientists to bridge themselves. It just is not getting any attention, and that is going to be a defining quality of the information-rich 21st century: the gap between what is needed by certain groups of people and what the people doing much cooler things are ready to supply. The question I am asking is, how do you fill that in? How do you get funding agencies to understand that this might be a little pedestrian, but it is absolutely fundamental?

At the time, Google had an initiative called Google Research Data Sets. I spent a lot of time talking to their people, presenting them with this problem and challenging them to do something about it, but, in the end, they canceled the project.

So where do we go from here? How should scientists get involved in policy? When it comes to discussing what to do about climate change, it appears to be a fact of life that people will use the worst and least intelligent arguments to make political points. If they can do that by sounding pseudoscientific — by quoting a paper here or misrepresenting another scientist's work over there — then they will. This surprised me before I really looked into it. It no longer surprises me. I don't advocate for political solutions. If I do advocate for something, my advocacy is focused on having more intelligent discussions.

Five years ago I was less of a public persona in climate science than I am now. But at the time, the voices being heard discussing climate change were at odds with the science. The Wall Street Journal was featuring attacks on scientists; Congress was filled with know-nothings; and, in the mainstream media, every time there was a story, one of the five obligatory contrarians would pop up and say, "oh no, everything will be fine."

There was no public voice for the science community. There were a few scientists who would step out occasionally — Steve Schneider is one. But there was no community pushing to correct the record or to inform people about the actual scientific results. I began to dabble in public outreach, sending letters to the editor, writing the occasional Op-Ed, talking to journalists. It was all to little effect.

How can you improve the level of context? Can you provide people with resources that allow them to assess the argument — not whether or not a given policy is the right one, but whether there is an argument to justify such a policy? Some people on the policy side have decided as an a priori assumption that it is impossible to argue against somebody's position without arguing against their conclusion. I reject that fundamentally.

If people make a bad argument in order to support any policy, whether I agree with that policy or not, it is still a bad argument and they shouldn't use it. You can point out that it's a bad argument without its reflecting on the actual policy outcome. There are good arguments and bad arguments for most good policies. If we can just have the good arguments for the different policies battling it out and not have to worry about the bad arguments, then we might make progress. Okay, so that is obviously naive because when we are talking about politics the idea that we can have more elevated conversations in this information-rich world is something that may be little more than a pipe dream. But it is something I believe is worth striving for.

Over the past five years, I have spent a lot of time building up resources. We spend a lot of time building background for journalists, staffers, and for science advisors of various kinds. We're building up resources that people can use so that they can tell what is a good argument and what is a bad argument. And there has been a shift. There has been a shift in the media; there has been a shift in the majority of people who advise policymakers; there has been a shift in policymakers. This kind of effort — and not just by me, but also by other equally concerned people — has had the effect of elevating the conversation.

This leads to maybe the final question that I think about, which is, "how do you increase the signal-to-noise ratio in communication about complex issues?" We battle with this on a small scale in our blog's comment threads. In un-moderated forums about climate change, discussion devolves immediately into name-calling. It becomes very difficult to discuss science, to talk about what aerosols do to the hydrological cycle.

The problem is that the noise serves various people's purposes. It's not that the noise is accidental. When it comes to climate, a lot of the noise is deliberate, because if there's an increase of noise you don't hear the signal, and if you don't hear the signal you can't do anything about it. Increasing the level of noise is a deliberate political tactic. It's been used by all segments of the political spectrum for different problems. With the climate issue in the US, it is used by a particular segment of the political community in ways that are personally distressing. How do you deal with that? That is a question I am still asking myself.

What's Next?
Dispatches on the Future of Science

Max Brockman, ed. | What's Next? Dispatches on the Future of Science (Vintage)
Written by Sarah Boslaugh
Wednesday, 01 July 2009

Each essay is self-contained, making it possible to choose those most relevant to your own interests


If your favorite day of the week is Tuesday, because that's when the Science section of The New York Times is published, and your favorite NPR show is Ira Flatow's Science Friday, then you'll love What's Next? Dispatches on the Future of Science, a collection of essays written by young scientists about what they do and how they see the future of their fields. Even if you're not quite that much of a science geek, if you have an interest in the world around you and the process by which scientific research can both explain and mold that world, you'll enjoy this collection edited by Max Brockman. No expertise in any field is required to understand these essays; if you can follow Malcolm Gladwell, you'll have no trouble with What's Next?

Brockman's essayists represent a variety of fields, from physics to paleoanthropology, with a heavy leaning toward the human sciences. This is a good choice from the marketing point of view, since non-scientists tend to be more interested in topics relating to human psychology than, say, the role played by dark energy in accelerating the expansion of the universe, but fans of hard science may feel slighted. That objection aside, this is the perfect collection for people who like to stay up on recent scientific research but haven't the time or expertise to go to the original sources (which, in the case of modern science, usually means articles published in professional journals, which are not generally available to those without access to an academic library).

Each essay is self-contained, making it possible to choose those most relevant to your own interests. And it's a great airplane or beach book because you can read the essays in any order; each is brief enough to be read between the interruptions of gate announcements or children demanding attention. My personal favorite is "What Makes Big Ideas Sticky?" by UCLA psychologist Matthew Lieberman, which argues that ideas which mirror the structure and function of the human brain may seem so obviously true to us that they resist being discarded, even in the face of overwhelming amounts of scientific research demonstrating their lack of merit.

The collection closes with an essay by NASA climatologist Gavin Schmidt entitled "Why hasn't specialization led to the Balkanization of science?" He argues that, contrary to the stereotype of the scientist as someone who knows more and more about less and less, interdisciplinary research is central to modern science, and he describes both the factors that lead to greater isolation among fields of research and those that encourage cooperation and the sharing of ideas. Communication of major ideas in nontechnical language is one of the factors that encourages cooperation, and What's Next? represents an important contribution to that effort. | Sarah Boslaugh

256 pages. $14.95 (paperback)

Beyond Edge

"What To Read Now. And Why —Newsweek's Fifty Books For Our Times": 4. The Big Switch By Nicholas Carr. 17. The Trouble With Physics By Lee Smolin. 39. Why Evolution Is True By Jerry A. Coyne [...]

Malcolm Gladwell takes on the digital determinists re Chris Anderson's Free [...]

Lera Boroditsky: Language pervades the deepest domains of thought, shaping us from the nuts and bolts of perception to our loftiest abstract notions and major life decisions... more» [...]

"Dear Malcolm: Why so threatened?" Chris Anderson Responds to Gladwell [...]

Geoffrey Miller's "most expensive" and "happiness" lists [...]

"Lee Smolin argues against the timeless multiverse" [...]

For more outside the box, try www.edge.org and search "unthinkable." [...]

Jerry Coyne asks "Which theology should we respect?" [...]

PZ Myers: "I may not be perfectly rational, but my magic invisible monkeys are!" [...]

Sputnik is back. This time it's Jonathan Harris, not the Russians. Don't miss it: The Sputnik Observatory. Edgy. [...]

"Dawkins sets up kids' camp to groom atheists." [...]

Sean Carroll: Many older scientists do all sorts of crazy things [...]

"An engrossing essay collection which offers a youthful spin on some of the most pressing scientific issues of today—and tomorrow...Kinda scary? Yes! Super smart and interesting? Definitely." — The Observer's Very Short List

"A captivating collection of essays ... a medley of big ideas." — Amanda Gefter, New Scientist

"The perfect collection for people who like to stay up on recent scientific research but haven't the time or expertise to go to the original sources." — Playback.stl.com

Dispatches on the Future of Science
Edited By Max Brockman

"If these authors are the future of science, then the science of the future will be one exciting ride! Find out what the best minds of the new generation are thinking before the Nobel Committee does. A fascinating chronicle of the big, new ideas that are keeping young scientists up at night." — Daniel Gilbert, author of Stumbling on Happiness

"A preview of the ideas you're going to be reading about in ten years." — Steven Pinker, author of The Stuff of Thought

"Brockman has a nose for talent." — Nassim Nicholas Taleb, author of The Black Swan

"Capaciously accessible, these writings project a curiosity to which followers of science news will gravitate." — Booklist

"For those seeking substance over sheen, the occasional videos released at Edge.org hit the mark. The Edge Foundation community is a circle, mainly scientists but also other academics, entrepreneurs, and cultural figures. ... Edge's long-form interview videos are a deep-dive into the daily lives and passions of its subjects, and their passions are presented without primers or apologies. The decidedly noncommercial nature of Edge's offerings, and the egghead imprimatur of the Edge community, lend its videos a refreshing air, making one wonder if broadcast television will ever offer half the off-kilter sparkle of their salon chatter." — Boston Globe

Mahzarin Banaji, Samuel Barondes, Yochai Benkler, Paul Bloom, Rodney Brooks, Hubert Burda, George Church, Nicholas Christakis, Brian Cox, Iain Couzin, Helena Cronin, Paul Davies, Daniel C. Dennett, David Deutsch, Denis Dutton, Jared Diamond, Freeman Dyson, Drew Endy, Peter Galison, Murray Gell-Mann, David Gelernter, Neil Gershenfeld, Anthony Giddens, Gerd Gigerenzer, Daniel Gilbert, Rebecca Goldstein, John Gottman, Brian Greene, Anthony Greenwald, Alan Guth, David Haig, Marc D. Hauser, Walter Isaacson, Steve Jones, Daniel Kahneman, Stuart Kauffman, Ken Kesey, Stephen Kosslyn, Lawrence Krauss, Ray Kurzweil, Jaron Lanier, Armand Leroi, Seth Lloyd, Gary Marcus, John Markoff, Ernst Mayr, Marvin Minsky, Sendhil Mullainathan, Dennis Overbye, Dean Ornish, Elaine Pagels, Steven Pinker, Jordan Pollack, Lisa Randall, Martin Rees, Matt Ridley, Lee Smolin, Elisabeth Spelke, Scott Sampson, Robert Sapolsky, Dimitar Sasselov, Stephen Schneider, Martin Seligman, Robert Shapiro, Clay Shirky, Dan Sperber, Paul Steinhardt, Steven Strogatz, Seirian Sumner, Leonard Susskind, Nassim Nicholas Taleb, Timothy Taylor, Richard Thaler, Robert Trivers, Neil Turok, J. Craig Venter, Edward O. Wilson, Lewis Wolpert, Richard Wrangham, Philip Zimbardo

[Continue to Edge Video]

Edited by John Brockman
With An Introduction By BRIAN ENO

"The world's finest minds have responded with some of the most insightful, humbling, fascinating confessions and anecdotes, an intellectual treasure trove. ... Best three or four hours of intense, enlightening reading you can do for the new year. Read it now."
San Francisco Chronicle

"A great event in the Anglo-Saxon culture."
El Mundo

Praise for the online publication of
What Have You Changed Your Mind About?

"The splendidly enlightened Edge website (www.edge.org) has rounded off each year of inter-disciplinary debate by asking its heavy-hitting contributors to answer one question. I strongly recommend a visit." The Independent

"A great event in the Anglo-Saxon culture." El Mundo

"As fascinating and weighty as one would imagine." The Independent

"They are the intellectual elite, the brains the rest of us rely on to make sense of the universe and answer the big questions. But in a refreshing show of new year humility, the world's best thinkers have admitted that from time to time even they are forced to change their minds." The Guardian

"Even the world's best brains have to admit to being wrong sometimes: here, leading scientists respond to a new year challenge." The Times

"Provocative ideas put forward today by leading figures." The Telegraph

"As in the past, these world-class thinkers have responded to impossibly open-ended questions with erudition, imagination and clarity." The News & Observer

"A jolt of fresh thinking...The answers address a fabulous array of issues. This is the intellectual equivalent of a New Year's dip in the lake—bracing, possibly shriek-inducing, and bound to wake you up." The Globe and Mail

"Answers ring like scientific odes to uncertainty, humility and doubt; passionate pleas for critical thought in a world threatened by blind convictions." The Toronto Star

"For an exceptionally high quotient of interesting ideas to words, this is hard to beat. ...What a feast of egg-head opinionating!" National Review Online

Today's Leading Thinkers on Why Things Are Good and Getting Better
Edited by John Brockman
Introduction by DANIEL C. DENNETT


"The optimistic visions seem not just wonderful but plausible." Wall Street Journal

"Persuasively upbeat." O, The Oprah Magazine

"Our greatest minds provide nutshell insights on how science will help forge a better world ahead." Seed

"Uplifting...an enthralling book." The Mail on Sunday

Today's Leading Thinkers on the Unthinkable
Edited by John Brockman
Introduction by STEVEN PINKER


"Danger – brilliant minds at work...A brilliant book: exhilarating, hilarious, and chilling." The Evening Standard (London)

"A selection of the most explosive ideas of our age." Sunday Herald

"Provocative" The Independent

"Challenging notions put forward by some of the world's sharpest minds" Sunday Times

"A titillating compilation" The Guardian

"Reads like an intriguing dinner party conversation among great minds in science" Discover

Today's Leading Thinkers on Science in the Age of Certainty
Edited by John Brockman
Introduction by IAN MCEWAN


"Whether or not we believe proof or prove belief, understanding belief itself becomes essential in a time when so many people in the world are ardent believers." LA Times

"Belief appears to motivate even the most rigorously scientific minds. It stimulates and challenges, it tricks us into holding things to be true against our better judgment, and, like scepticism, its opposite, it serves a function in science that is playful as well as thought-provoking. Whether or not we believe proof or prove belief, understanding belief itself becomes essential in a time when so many people in the world are ardent believers." The Times

"John Brockman is the PT Barnum of popular science. He has always been a great huckster of ideas." The Observer

"An unprecedented roster of brilliant minds, the sum of which is nothing short of an oracle—a book to be dog-eared and debated." Seed

"Scientific pipedreams at their very best." The Guardian

"Makes for some astounding reading." Boston Globe

"Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question." BBC Radio 4

"Intellectual and creative magnificence" The Skeptical Inquirer















Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.

John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2009 by Edge Foundation, Inc.
All Rights Reserved.