Edge 341 — March 28, 2011
The Third Culture
TOOLS FOR THINKING
Science offers some help in the everyday as we navigate the currents of this world.
A few months ago, Steven Pinker of Harvard asked a smart question: What scientific concept would improve everybody's cognitive toolkit?
The good folks at Edge.org organized a symposium, and 164 thinkers contributed suggestions. John McWhorter, a linguist at Columbia University, wrote that people should be more aware of path dependence. This refers to the notion that often "something that seems normal or inevitable today began with a choice that made sense at a particular time in the past, but survived despite the eclipse of the justification for that choice. ...
... Daniel Kahneman of Princeton University writes about the Focusing Illusion, which holds that "nothing in life is as important as you think it is while you are thinking about it." He continues: "Education is an important determinant of income — one of the most important — but it is less important than most people think. If everyone had the same education, the inequality of income would be reduced by less than 10 percent. When you focus on education you neglect the myriad of other factors that determine income. The differences of income among people who have the same education are huge." ...
... Public life would be vastly improved if people relied more on the concept of emergence. Many contributors to the Edge symposium hit on this point.
We often try to understand problems by taking them apart and studying their constituent parts. But emergent problems can't be understood this way. Emergent systems are ones in which many different elements interact. The pattern of interaction then produces a new element that is greater than the sum of the parts, which then exercises a top-down influence on the constituent elements.
Culture is an emergent system. A group of people establishes a pattern of interaction. And once that culture exists, it influences how the individuals in it behave. An economy is an emergent system. So are political polarization, rising health care costs and a bad marriage.
Emergent systems are bottom-up and top-down simultaneously. They have to be studied differently, as wholes and as nested networks of relationships. We still try to address problems like poverty and Islamic extremism by trying to tease out individual causes. We might make more headway if we thought emergently.
We'd certainly be better off if everyone sampled the fabulous Edge symposium, which, like the best in science, is modest and daring all at once.
Further Reading: The Edge Question 2011 (January 20, 2011): What Scientific Concept Would Improve Everybody's Cognitive Toolkit?
Essays & Opinions: The Japanese earthquake reveals once again our inability to predict events. Or our ability to avoid thinking about predictable events. John Brockman of Edge investigates...
CULTURE, RISK AND THE ATOM
There are many worthwhile discussions underway exploring whether it's possible to avoid a culture of complacency, and foster one of transparency, around nuclear power generation.
They echo discussions that followed the losses of two space shuttles and the Gulf of Mexico oil gusher, and they apply in many other realms where low-probability, worst-case outcomes, or unknown unknowns, can upend the greatest engineering achievements at great cost in lives or wealth.
Below I point to a couple of starting points. Please provide links to more.
Also, I encourage you to read a new story in The Times exploring how risks that appear incalculably small, in theory, can produce unwelcome surprises in complex, consequential systems (in this case nuclear plants). Tom Zeller, one of the reporters for that story, also filed a Green Blog post looking at earthquakes and nuclear plants, from Diablo Canyon to Indian Point.
Another important article, “Japanese Rules for Nuclear Plants Relied on Old Science,” shows how efforts to safeguard nuclear plants in Japan lagged way behind science pointing to the need to account for tsunami damage and flooding.
- Kennette Benedict, the publisher of the Bulletin of the Atomic Scientists, has written the latest essay in an ongoing string on the Fukushima crisis — “The road not taken: Can Fukushima put us on a path toward nuclear transparency?”
- At Edge.org, the intellectual impresario John Brockman has solicited a heap of feedback on unpredictability, which includes a particularly interesting piece by Eiko Ikegami, a sociologist at the New School who is a student of Japanese culture and its ramifications. Here are a couple of excerpts, contrasting how certain Japanese traits have facilitated the response to the chaos created along coastlines by the tsunami, but probably exacerbated the conditions leading to the ongoing nuclear emergency at Fukushima (I’ve adjusted a couple of spellings to conform with Times style):
WHEN WE CANNOT PREDICT
About a year ago, on Wednesday, April 14th, I was on the way to London from JFK when the pilot announced a slight delay into Heathrow in order to avoid the ash cloud from the Icelandic volcano eruption. This was the first time I had paid any attention to the subject. But once I was in London, it was the only subject anybody talked about for a week.
"Something is going on here that requires serious thinking," I wrote on these pages. "We've had earthquakes before, and we've had plane stoppages, but nothing like the continuing effects of the ash cloud." The result was an Edge Special event on "The Ash Cloud". I asked the following question:
It's already clear that the earthquake and tsunami that hit northern Japan are the latest tragic example of our inability to predict when it matters most.
What can the Edge community bring to the table?
"Risks are always interesting," writes George Dyson, "especially in this case where you have such a mix of probabilities — the earthquake/tsunami that most agree was unpredictable, if inevitable, and the nuclear power plant that some people think was entirely safe, and some people believe was entirely unsafe. So you need to frame this in terms of risk, without getting bogged down in the debate about nuclear power, which may go on forever, certainly long enough to drive people away from Edge."
"The question of preference for different kinds of fate — death by drowning vs death by radiation; death by enemy fire vs friendly fire, etc; tolerance for automobile fatalities because they are "accidents" — is at the heart of this, and you have a lot of people at hand with something to say about that."
To start things off, Edge asked Bruce Parker, former Chief Scientist of NOAA's National Ocean Service and author of The Power of the Sea, to write the lede essay on risk in light of the northern Japan earthquake and tsunami. We expect to continue this project over the next few weeks.
WHEN WE CANNOT PREDICT
Prediction is the very essence of science. We judge the correctness of a scientific theory by its ability to predict specific events. And from a more practical, real-world point of view, the primary purpose of science itself is to achieve a prediction capability that will give us some control over our lives and some protection from the environment around us. To avoid the dangers of the world we must be able to predict where, and especially when, they will happen.
While the scientific method may lead us to a reasonably thorough understanding of some phenomenon, unfortunately that does not always translate into an accurate practical prediction capability that, for example, might help us avoid being killed by a natural disaster. When that is the case, we then find ourselves talking about risk, the likelihood that some dangerous event will take place, even though we do not know when. Risk assessment is necessitated by an inability to predict. That inability to predict may come from some deficiency in our knowledge, or it may be the result of a great complexity inherent in the phenomenon (for example, we may not have high-enough-resolution data to represent it, or the process may have a chaotic component that keeps us from determining exactly when it will occur). We are then left only with probabilities.
Along the way to understanding a natural phenomenon we, of course, develop and employ various types of technology. Such technology is typically used to measure the phenomenon and thus provide the data that will stimulate the analytical human mind to develop appropriate scientific theories. More data are then used to test those theories. Ultimately technology will also (hopefully) take the form of a warning system, a computer model (representing an accepted scientific theory) that uses real-time data. In the meantime, other technology will improve the methods of protection against such disasters.
The tsunami that struck northern Japan (where the death toll will likely surpass 25,000) is the latest tragic example of our inability to predict when it matters most. The tsunami’s arrival at coasts more distant than Japan was accurately predicted by hydrodynamic computer models, once the location of the submarine earthquake was determined and the generation of a tsunami was confirmed by real-time data from DART buoys and tide gauges. (Such a confirmation is required because most submarine earthquakes do not produce tsunamis and the numerous false alarms that would result from warnings based only on the occurrence of a submarine earthquake would make the warnings useless.) But when the epicenter is so close to the coast that the tsunami arrives only 30 minutes after the earthquake, the only possible warning is a receding ocean prior to the tsunami or the earthquake itself (when a coast shakes for a long time, one is wise to play it safe and act as if a tsunami will be coming very soon). The Japanese are the most tsunami aware people on Earth, and they did immediately run to roof tops and inland. But 30 minutes is not a very long time. (It was even worse in northwest Sumatra in 2004, when an even larger tsunami struck only 15 minutes after the initial earthquake.)
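Parker's 30-minute figure is easy to sanity-check with the standard shallow-water approximation for tsunami speed, v = sqrt(g * d). Here is a minimal sketch; the depths and distances are my own illustrative assumptions for the Tohoku event, not figures taken from the essay:

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Shallow-water approximation: wave speed v = sqrt(g * depth)."""
    return math.sqrt(g * depth_m)

# Assumed, illustrative figures: the epicenter lay very roughly 130 km
# offshore, most of that distance over deep water.
deep = tsunami_speed(4000)   # open ocean, ~4000 m deep
shelf = tsunami_speed(100)   # continental shelf, ~100 m deep

print(f"open-ocean speed: {deep:.0f} m/s (~{deep * 3.6:.0f} km/h)")
print(f"shelf speed:      {shelf:.0f} m/s (~{shelf * 3.6:.0f} km/h)")

# A crude two-segment travel-time estimate: ~100 km at deep-water speed,
# then ~30 km across the slower shelf.
minutes = (100_000 / deep + 30_000 / shelf) / 60
print(f"rough arrival estimate: {minutes:.0f} minutes")
```

On these assumptions the estimate lands in the same few-tens-of-minutes range Parker describes; the point is not precision but that no plausible depth profile buys a near-field coast meaningfully more warning time.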
The only way that a more advanced tsunami warning could have been given is if the earthquake itself could have been predicted. But we cannot predict when an earthquake will strike, not the day, or the month, or the year, or even the decade. All we can do is assign a risk to particular regions. Japan, with its numerous tectonic plates butting up against each other, is known to be a high risk area; many earthquakes and tsunamis have occurred there before. As a result, some sea walls had been built and some buildings had been made stronger. Technology contributed to those defenses. But they were not enough, and in fact, could never be enough, without huge sums of money being spent to build 40-foot sea walls along almost the entire Japanese coastline and to make all buildings capable of surviving the very rare 9.0 earthquake. More effective would be to pour a small fraction of that cost into additional earthquake prediction research (an earthquake prediction capability would have saved even more lives in Haiti). With the great complexity of the worldwide tectonic environment, understanding what makes two tectonic plates suddenly release each other, much less being able to predict when earthquakes will occur using a detailed geophysical model, is still very far off. But using technology to continuously measure the various signals that the solid Earth provides, until we find signals that only come in advance of an earthquake, may be possible a lot sooner. Based on past accomplishments we can be justified in being optimistic that human intellect will someday find a way to predict when an earthquake will happen. But we need to speed up that process, because in the future more lives will be at stake. Whatever additional funds are required to make that happen would certainly be money well spent.
Passing a Worst-Case Scenario Test
The 2011 Sendai earthquake and tsunami caused great damage to all energy and infrastructure systems.
One irrigation dam ruptured, its flood killing at least four people. Six other dams show signs of cracking.
Two oil refineries were set on fire by the quake, one in Ichihara and one in Sendai. Others were taken offline to check for damage.
Four nuclear power reactors were damaged. No deaths due to radiation have been reported so far.
The failure of a dam and its cost in lives have not caused second thoughts about the risks and appropriate use of hydroelectricity. The failure of oil refineries has not elicited second thoughts about power from oil. But the failure of nuclear reactors has caused great anguish and second thoughts about nuclear power, despite comparable harm.
Nuclear power has real and known risks. Coal has real and known risks. Solar and wind have real and less known risks. The risks of one option have to be weighed, not against nothing, but against the risks of other options. Nuclear is looking pretty good.
"Atomic energy has just been subjected to one of the harshest of possible tests, and the impact on people and the planet has been small. The crisis at Fukushima has converted me to the cause of nuclear power," says George Monbiot in the Guardian, the UK's liberal paper.
OK, those congratulations for passing the test may yet be a bit premature, but nuclear would have to kill a lot more people to catch up with coal. Unfortunately, even smart people are running from nuclear and plunging back into coal and oil as the only other immediately available economic alternative. Mark Lynas notes, "Having shut down its nukes, Germany is already importing much more coal from the US and other countries, as is Japan."
Richard Rhodes, one of the foremost experts on nuclear weapons, wrote a Pulitzer Prize-winning history of the atomic bomb and nuclear weapons, now in its fourth volume. He notes a curious effect of this re-evaluation of nuclear power:
All Energy Disasters Lead to Coal, Which Is an Energy Disaster
Simply looking at the loss of human life day to day, coal and oil are a disaster.
Consider this Swedish report on the health effects of power generation: when tallied as deaths per terawatt-hour (deaths/TWh), coal and oil dominate while nuclear's toll is minimal.
But what about black swan events? Say a 9.0 earthquake and tsunami on the coast demolishing some old nuclear reactors? Might nuclear radiation be so severe that it would wipe out life on the planet, or at least fry thousands, and eliminate nuclear's considerable lead in safety? It is possible, but not likely.
For a better understanding of the relative strength of radiation and its health consequences, this "powers of ten" radiation chart by xkcd is very illuminating. Everyone might want to stop getting x-rays. But we won't because we reckon the costs of not getting x-rays.
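The chart's point is that doses span many powers of ten. A small sketch makes the spread concrete; the dose values are illustrative assumptions of the kind such charts collect, not figures quoted from the xkcd chart itself:

```python
import math

# Illustrative whole-body dose figures in sieverts (assumed, rounded
# order-of-magnitude values; not quoted from the xkcd chart).
doses_sievert = {
    "eating a banana":                 1e-7,
    "chest x-ray":                     2e-5,
    "NYC-LA flight":                   4e-5,
    "yearly natural background":       3e-3,
    "chest CT scan":                   7e-3,
    "onset of radiation sickness":     1.0,
}

# A log10 scale shows the "powers of ten" spread at a glance.
for label, dose in sorted(doses_sievert.items(), key=lambda kv: kv[1]):
    print(f"{label:30s} {dose:8.1e} Sv  (10^{math.log10(dose):+.0f})")
```

Roughly seven orders of magnitude separate a banana from an acutely dangerous dose, which is why raw "radiation detected" headlines carry almost no information without the exponent.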
What technology wants is a diversity of energy sources. Having passed a harsh worst-case scenario test, nuclear will be part of that mix.
There is a reason to be particularly interested in events that we know with some certainty will occur, but for which we cannot set a date. Other kinds of events, beyond natural disasters, fit this definition. Yes, "the big one" will hit California, although we do not know when. But this is true of human upheavals as well. It was clear in the early 70s that the Pahlavi regime in Iran would eventually be overthrown, but there was no way to predict when that would happen, so no one was ready.
This is not a new kind of problem — on the contrary, it is a problem that has characterized the human condition since we began to recognize sequence and to make predictions: the knowledge that each individual has that he or she will die, and the inability to know when. Except for a few eras, such as when Egyptian pharaohs prepared their tombs long before their deaths, or during the Middle Ages when memento mori was constantly reiterated, we have been skilled at avoidance and denial. Our ability to avoid thinking about predictable events that have no predictable date is based on millennia of practice.
In order to improve prediction we need to see such events as recurrent or cyclical, rather than as unique (just as each person sees his or her own death as in some sense unique). How many humanitarian disasters occur per year on average (and is global warming accelerating their incidence)? What rate of nuclear incidents is tolerable? Has anyone noticed how often we go to war? It is no longer an unusual event. How often do revolutions spread from country to country, as they did in the nineteenth century and recently in the Middle East? These questions allow relief agencies to stockpile supplies, but asking them depends on moving disasters, even the greatest ones, out of the category of the unique — which in turn may make them more tolerable. We are close to becoming inured to disaster, which may be the cost of prediction.
It's too easy to discount intuition. Pattern recognition. The almost literary sensibility through which we make sense of our world.
The narrative implicit in the nuclear plant disaster in Japan is just too striking for most humans to ignore: the nation that suffered an atomic bombing is now enduring a nuclear crisis. A particular kind of scientific orthodoxy refuses to even entertain such parallels except as evidence of psychological or cultural biases clouding what should be our reliance on the data.
But, as black swan events like this prove, our reliance on the data continually fails us. We just can't get enough data about our decidedly non-linear world to make accurate predictions. There are just too many remote high leverage points in the chaotic systems constructing our reality for us to take all of them into account. Things that seemed not to matter — or that we didn't even notice — iterate enough until they end up mattering a lot. We're better off looking at a fractal and intuiting its relevant patterns than relying on its various pieces to tell us its unfolding story. Science too often divides to understand, incapable of even acknowledging there might be a science in divining to the same ends.
The coincidence of the nuclear crisis in Japan with our inability to predict the events that precipitated it forces another kind of predictive apparatus into play. No, it's not one we like to engage — particularly in rational circles — but one we repress at our own peril. Science is free to promote humanity's liberation from superstition or even God, but not from humanity itself. We still have something in common with all those animals who somehow, seemingly magically, know when an earthquake or tsunami is coming and to move to higher ground.
And our access to that long lost sense lies in something closer to story than metrics. A winter bookended by BP's underwater gusher and Japan's radioactive groundwater may be trying to speak to us in ways we are still human enough to hear.
How to Teach Prediction
Prediction is one of the main cognitive processes. Children regularly make predictions.
The social, the physical, and the mental: these three worlds are at the center of what adults continue to learn to make predictions about. We predict the speed of an oncoming car and decide whether we can cross the street safely. We predict events that will make us happy or sad, such as taking a nice vacation, playing a game, eating a good meal, or establishing a relationship with another person.
To learn to predict well, one needs to be educated about how to predict, and one needs to make predictions and to examine what went wrong when those predictions fail. Curiously, schools teach none of this.
How do children learn to predict? They learn as events happen randomly in their lives. If they are lucky enough to have someone helpful to talk with about their experiences, they may in fact become good at analyzing how the world works and making their predictions conscious. Getting better at prediction is the cornerstone of living one's life in a satisfying way. One can, of course, get better at prediction simply by thinking about it; that is how most people do it today. But not everyone is capable of doing that, and, clearly, most adults are not all that good at making important predictions in their own lives. (This is one reason that there are bad marriages, financial counselors, clinical psychologists, and prisons.)
The idea that kids can make predictions is not a really radical point. My point is that prediction has to be the curriculum, not ancillary to the curriculum. If we want adults to predict well, we need to help children do it well. As it stands now they are on their own. As adults who have not been taught to predict well, they will make poor life decisions, wrongly predicting how the people and things in their lives (bosses, spouses, children, co-workers, nuclear reactors, etc.) will respond to the actions they take.
There are three aspects of prediction: learning a script; functioning without a script because it isn't known; and predicting when there is no script.
How do we teach prediction when there is no script and there are no seemingly relevant prior cases? In some sense you can't. You can teach people how to go about trying to make predictions. This is actually what science is about.
Scientists create theories which make predictions, which they then try to verify with evidence. This process of verifying hypotheses with evidence can be taught, in the sense that it is a way of thinking that can be practiced in various venues, and it should be practiced starting in first grade. It is reasonable to start teaching children to think this way about the world around them. We need to teach children to do scientific reasoning, not to memorize facts about science.
Earthquake Prediction and Tsunami Protection
There is increasing evidence that earthquakes can in fact be predicted. When rocks grind together under pressure, they give rise to a range of electromagnetic phenomena, including so-called "earthquake lights" (EQLs), that can be regarded as precursor phenomena of a forthcoming quake. Empirical data exist showing that such luminous displays often occur before earthquakes; indeed they have been reported since ancient times; in the case of the Saguenay quake in Canada (25 November 1988), such lights were reported 25 days before the event. (The meaning of these lights is controversial among seismologists.)
Another sign of an impending quake is a disturbance to the ultralow frequency radio band, which has been observed weeks before an earthquake. A third precursor signal is a magnetic field change in the vicinity of a forthcoming upheaval. All such phenomena — EQLs, radio disturbances, and magnetic field changes — can be and have been observed by satellite, and for this reason the French government has an earthquake-detection satellite in orbit, DEMETER (Detection of Electro-Magnetic Emissions Transmitted from Earthquake Regions), which has observed such precursor phenomena from space.
Some of these phenomena, furthermore, have been reproduced in the laboratory. For example, researchers have performed experiments in which rocks subjected to high pressures produce bursts of electrical discharges consistent with those observed before and during quakes.
Earthquakes are natural, not supernatural, phenomena. They result from known physical forces, stresses, and fractures within the Earth's crust. Further research on various types of precursor phenomena, together with the emplacement of a range of Earth-based and satellite sensors, might one day enable us to predict earthquakes with enough precision to make an early warning system possible and realistic.
In his essay, Bruce Parker says that technology "could never be enough, without huge sums of money being spent to build 40-foot sea walls along almost the entire Japanese coastline…" Well, societies spend huge sums of money routinely, often for projects that have little if any economic or scientific return or tangible benefit, for instance the building of the Great Pyramids at Giza.
But another large and ancient project, had it been deployed along the Japanese coastline, arguably could have prevented some of the destruction wrought by the Sendai tsunami: the Great Wall of China. The Great Wall extends for a distance of 8,851 km (5,500 mi.), and was built starting in the 5th century BCE. The coastline of the Japanese island of Honshu is some 5,450 km (3,386 mi.) long, with the Pacific-facing side comprising about half of that distance. This means the Great Wall is more than one and a half times the length of Honshu's entire coastline. If a structure built during ancient times could have provided some protection against a tsunami, it is likely that modern technology is fully up to the task of building a seawall that would have been effective against the tsunami at Sendai. Whether or not to build it rests on factors, such as cost, political vision, and tolerance for risk, that are outside and independent of science and technology proper.
Periodic Natural Disasters Bad; Infectious Diseases Far, Far, Far Worse
The intellectual problem with our planet is that it was not intelligently designed. That leads to the physical problem — because Earth is an entirely natural accident it is a badly constructed death trap.
On the chronic level there is the horrific wastage of youth, with — until the last couple of centuries — about half of children dying as kids, to the tune of about 50 billion premature deaths, largely from the diseases the wild west that is dumb-ass evolution cooked up. On the periodic level there are your plate tectonics. In one sense moving plates are a good thing for people. If not for them constantly building up mountains, the continents would never have formed, much less still be around. But the system is fatally flawed. Had the plates been designed with any common sense, their edges would glide past one another as smooth as silk. No one aside from geologists would notice them. They would not generate earthquakes. But the plate boundaries we have to put up with tend to lock up, build up strain, and snap with bad results. When quakes occur on land it is bad architecture that causes most casualties. But underwater quakes can spawn tsunamis that, when big enough, overwhelm even well-prepared coastal peoples. Underwater landslides and big meteorites hitting oceans make big waves too.
The problem with calculating the big periodic risks is that they are not really calculable. Nuclear plants are generally built to resist the maximum quakes and waves at significant risk of happening over their decades-long operating spans, according to known geological and historical evidence. But some quakes don't even bother to occur at plate boundaries. The New Madrid superquake hit the Mississippi valley. That may have been a once-in-a-millennium result of shifting continental buoyancy after the melting of the last Pleistocene ice sheet. Who knows what faults are out there? Or underwater slopes ready to slide. Build a lot of nuclear plants to resist normal local events, and it is inevitable that some of them will be hit by the locally rare and unpredicted local superquake or superwave. If that happens to coal-fired or solar plants it is just a regional power loss. But fission plants are hyper-intense concentrations of radiation and heat just waiting to burst out with lots of nasty items that can profoundly disrupt entire regions. To better ensure safety would require that all nukes be built to survive very rare, unpredictable events, driving up their already high costs.
But as bad as quakes and tsunamis have been, they and other short-term disasters are not major people killers. It is estimated that temblors have dispatched around 10 million people in the last thousand years, a drop in the general premature-mortality bucket. Expanding this a few fold will cover all the losses over human existence due to quakes, waves, volcanoes, fires, storms, floods, slides and the like. It is disease that has taken tens of billions, with famines making a major contribution.
Three Realities: Cultural Capital, Scientific Measurements, Political Denial
The magnitude of the earthquake that hit North-East Japan was 9.0, the fourth largest on record in modern times. The earthquake and tsunami risk was on the level of once in a thousand years for this region: the last comparable earthquake there occurred in 869, according to historical and archeological evidence. In a shocking and totally unpredictable way, this disaster exhibited the hidden everyday culture of Japanese society. The dignified attitudes of survivors and the efficiency of their mutual help networks indicate the high level of civility instilled in the North-East of Japan. This tangible cultural capital will be an enormously positive asset for the social and economic recovery of the region.
In stark contrast, the very serious problems with the nuclear power plant in Fukushima have unearthed another, more problematic aspect of Japanese culture: an astonishing capacity for political denial in the face of scientific facts. I am gravely concerned that this disaster reveals the long-term practice of a hidden organizational culture related to the use of nuclear technology with a completely dysfunctional regulatory system. It is truly alarming to see the government projecting to the public an imaginary third political reality, in which the first reality of the scientific radiation measurements is covered up in order to highlight the second reality of the social resilience of the Japanese public.
Risk-taking is an integral part of our everyday life. Constructing and operating huge public projects involves the issue of appropriate levels of risk and budget considerations. But when combined with an organizational culture of structural secrecy and organizational conformity, it has grave consequences, in spite of all the courage of the individual engineers, workers and firefighters now toiling around the clock in life-threatening circumstances at Fukushima Daiichi.
I am writing a longer essay with Junichiro Makino, professor at Tokyo Institute of Technology, who has been expressing his concerns in his Japanese blog on this issue since March 12th. His main point is that the situation is much closer to the Chernobyl disaster than the understated official governmental interpretation would have us believe. After two weeks, the ongoing problems at the Fukushima reactors are in line with Prof. Makino's dire predictions. Here are some salient points.
Already on March 17th, six days after the earthquake, following explosions at Fukushima's Daiichi nuclear power plant, the radiation levels in the surroundings were comparable to those of Chernobyl. In one small village, Namie-machi, 30 km from the plant, the radiation level was measured at 170 microsieverts/hour. This value corresponds to an amount of I-131 of 1.3x10^14 Bq/km^2. This probably corresponds to an amount of Cs-137 of 3x10^12 Bq/km^2, roughly the same value as what was observed 30 km from Chernobyl. This measurement was taken by a team reportedly led by the vice minister Kan Suzuki of the Ministry of Education (MEXT).
Thus, already six days after the accident, it should have been clear that the amount of radioactivity released is comparable to that of Chernobyl. Even so, as of March 25, representatives of Tokyo Electric and NISA (Nuclear and Industrial Safety Agency) still stick to their estimate that the accident is at INES level 5, corresponding to 10,000 times less release of radioactivity than what was actually measured. This is inexcusable.
On March 22nd, the DoE released data recorded by its Aerial Monitoring System, which show that the heavily polluted area, with more than 125 microsieverts/hour, extended more than 30 km to the north-west of Fukushima Daiichi. On March 23, MEXT released the results of its analysis of soil samples taken 40 km north-west of the reactor, which, not surprisingly, were extremely high in both I-131 and Cs-137. According to the MEXT measurements, the amount of Cs-137 is 10-20% of that of I-131. So the level of pollution by Cs-137 might be as high as 2x10^13 Bq/km^2 at 30 km distance.
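The order-of-magnitude arithmetic in the last two paragraphs can be checked directly from the quoted figures; a minimal sketch:

```python
# Figures quoted in the essay above (all in Bq per square kilometre):
i131_surface = 1.3e14   # I-131 inferred from the 170 uSv/h reading ~30 km out
cs_frac_low, cs_frac_high = 0.10, 0.20   # MEXT soil-sample Cs-137/I-131 ratio

# Apply the measured Cs-137 fraction to the inferred I-131 surface activity.
cs137_low = cs_frac_low * i131_surface
cs137_high = cs_frac_high * i131_surface
print(f"implied Cs-137 at ~30 km: {cs137_low:.1e} to {cs137_high:.1e} Bq/km^2")
# This range brackets the "as high as 2x10^13 Bq/km^2" figure cited above.
```

The 10-20% soil ratio applied to the I-131 surface activity lands squarely around the 2x10^13 Bq/km^2 value the essay cites, which is the internal consistency the Chernobyl comparison relies on.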
In conclusion, the Japanese public, and the world at large, have been confronted with two different realities: MEXT has been reporting a very high level of radioactivity since March 17th, comparable to or even higher than that of Chernobyl, and yet NISA and Tokyo Electric apparently refuse to accept what is going on. The one reality is based on scientific measurements. What can we say about the other "reality"?
It might look very strange to outside observers that there can be this large a discrepancy between what is so obvious and the official view. Unfortunately, in Japanese bureaucratic systems, such a situation is quite common. For big projects like nuclear plants, there is of course an evaluation committee consisting of both specialists and scientists from wider fields. However, scientists critical of a project are quickly replaced by others who are willing to accept a cozier relationship with the powers that be, with the result that such committees can easily wind up with a vast majority of dangerously uncritical members. In this unfortunate symbiosis of supportive scientists and bureaucrats, companies tend to lose an objective view of reality. In the case of the Fukushima disaster, it is hard for us to know whether they have really lost their grip on reality, or whether they only pretend to have done so. In the end, though, that does not matter much: the practical result, in terms of clinging to an imaginary political reality, is the same.
The only surprising thing is that even at this stage they are still behaving as though they have completely lost touch with reality. They keep up the illusion of a relatively small accident that can be fixed in "just a few days," totally ignoring the reality of an ongoing Chernobyl-type situation. Yes, it is true that Chernobyl started with one huge explosion that caused the main contamination. But now that the Fukushima plant is producing a similar amount of contamination over a period of two weeks, with no end in sight, it is altogether possible that over the course of March and April the total contamination will significantly exceed that of Chernobyl.
The avian flu outbreaks in 2007, the spike in oil prices in the summer of 2008, the financial system meltdown later that year, the volcanic eruptions of 2009, the oil spill in 2010, and now the earthquake/tsunami/nuclear emergency of 2011, not to mention the political upheavals in the Middle East, all serve to remind us that we are very poor at making predictions. And we are apparently very poor at assessing risks. But perhaps we can make some observations.
All of these events, one way or another, have shown us how fragile our global supply chain and transportation network really are. There are already reports of how the recent earthquake, never mind the nuclear issues, has slowed down repairs to New York City subway stations, halted production of some GM automobiles in the US (an industry still recovering from the financial meltdown), and even rippled into my own hobby project of building a credible 19th-century digital computer.
Our quest to squeeze every efficiency out of our systems of production and supply has led us to fragility rather than robustness. We have gained short-term margins at the cost of long-term stability. Our quest for every last basis point in our financial results has led us to build a system with countless single points of failure. We are vulnerable to natural disasters, unforeseen economic disasters, and clever exploiters of our systems (such as governments cornering rare earth metals supply chains, or just plain opportunistic hedge traders in rather conventional metals, which is why all nickel-based batteries rocketed in price three years ago).
The drumbeat of continued unexpected failures of nature, technology, or economics will not go away. Perhaps, however, we can take lessons from the disruptions they cause, and find a way to monetize stability over maximum possible short-term efficiency, so that our constructed civilization will be more resilient to these events.
J. DOYNE FARMER
Viewing the Nuclear Accident in Japan Through the Lens of Systemic Risk
Predicting risk might sound like an oxymoron, but it isn't: we do it every day. Everyone knows, for example, that the risk of a dangerous fall on a steep mountain trail is higher than it is on level ground. Predicting risks is more difficult, however, when they are systemic. Systemic risks occur when individual components of a system interact and collectively generate new modes of behavior that would never occur for a single component in isolation, amplifying existing risks or generating new ones.
The recent financial crisis provides a good example. Banks normally manage risk under the assumption that the financial system will behave in the future more or less as it has in the past. Such estimates are based on historical losses. This is fine under normal circumstances. But in the recent financial crisis a small drop in housing prices triggered a chain reaction that suddenly made the financial system behave completely differently, and extrapolations of risk based on historical losses became irrelevant.
Systemic risks are hard to predict. They are inherently complex phenomena, typically involving nonlinear feedback that couples together the behavior of many individual components. Systemic risks frequently occur in systems where there are neither good models nor good measurements, where theory or simulation is impossible. They often involve modes of interaction that have not been seen before, making past experience of little value. The amplitude of the resulting problem is often far larger than previously imagined possible.
How can we anticipate and minimize systemic risk? The key general principle is stability. Systemic risks occur when bad behaviors feed back on one another, so that small problems are amplified into big problems. When things go topsy-turvy, do the problem behaviors damp out, or are they amplified? In the recent financial crisis, for example, the key problem was leverage, which amplifies both gains and losses. Leverage is good during good times, but during bad times it makes the financial system unstable.
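As a toy illustration of that amplification (an illustrative sketch, not a model of any actual bank): at leverage L, the return on equity is L times the return on the underlying assets, so a modest asset decline can wipe out a thinly capitalized fund.

```python
# Toy sketch of leverage amplification. Illustrative assumption:
# a fund with equity E borrows (L-1)*E and holds assets worth L*E.
def return_on_equity(asset_return: float, leverage: float) -> float:
    """Return on equity implied by a given asset return at a given leverage."""
    return leverage * asset_return

# Unleveraged, a 3% asset decline costs 3% of equity:
print(return_on_equity(-0.03, 1))
# At 30x leverage, the same decline wipes out 90% of equity:
print(return_on_equity(-0.03, 30))
```

The same multiplier works in reverse during good times, which is why leverage looks attractive right up until it doesn't.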
The recent Japanese earthquake/tsunami provides another example of how a normal risk can turn into a systemic risk. For Japan, given the history of the region, an earthquake in tandem with a tsunami might be called a normal risk. But no one realized in advance that a tsunami could destroy both the main power and the backup power of a nuclear power plant, while an earthquake could also create cracks causing a loss of coolant. The resulting nuclear catastrophe came on top of all the other damage to the infrastructure, making the nuclear crisis even harder to solve than it would have been otherwise, and the radiation leakage has made it even harder to get the infrastructure functioning again. The risks of both have been amplified.
With hindsight the consequences of a large earthquake and tsunami seem obvious, so why didn't the engineers plan for them properly? This is the usual story with systemic risk: In hindsight the problems are obvious, but somehow no one thinks them through beforehand.
As already explained, from a complex systems engineering perspective, the key principle is stability. Nuclear power generation is intrinsically unstable. If you walk away from a wind generator or a solar cell when a crisis occurs, not much happens. If you walk away from a nuclear reactor under the wrong circumstances, it can melt down. To cope with the systemic risk one needs to think through all possible scenarios. The experts might be able to plan for all the known failure modes, but it is much harder to anticipate the unknown ones.
The prognosis for nuclear accidents based on simple historical extrapolation is disturbing. After roughly 14,000 cumulative years of nuclear plant operation, we have now had three major accidents. If we ramp up nuclear power by a factor of ten, which is necessary to make a significant contribution to mitigate global warming, we will increase from the 442 reactors that we currently have to about 5000. Historical extrapolation predicts that we should then expect an accident of the magnitude of the current Japan disaster about once a year.
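The extrapolation above is simple enough to reproduce directly:

```python
# Reproduces the historical extrapolation sketched in the text.
reactor_years = 14_000                   # cumulative reactor-years of operation
major_accidents = 3                      # major accidents observed to date
rate = major_accidents / reactor_years   # accidents per reactor-year

reactors_scaled = 442 * 10               # a tenfold ramp-up, roughly 5000 reactors
expected_per_year = rate * reactors_scaled
print(f"Expected major accidents per year: {expected_per_year:.2f}")
# roughly one Fukushima-scale accident per year, as stated above
```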
But I don't trust the historical method of estimating. Three events are unlikely to properly characterize the tails of the distribution. My personal choice for a really nasty nuclear scenario goes as follows: Assume the developed world decides to ramp up nuclear power. The developing world will then demand energy independence and follow suit. For independence you need both reactors and fuel concentrators. There will be a lot of debate, but in the end the countries with stable governments will get them. With a fuel concentrator the waste products of the reactor can be used to make weapons-grade fuel, and from there making a bomb is fairly easy. Thus, if we go down the path of nuclear expansion, we should probably assume that every country in the world will eventually have the bomb. The Chernobyl disaster killed on the order of ten thousand people: a nuclear explosion could easily kill a million. So all it will take is for a "stable government" to be taken over by the wrong dictator, and we could have a nuclear disaster.
I'm not an actuary, so you shouldn't trust my estimates. To bring the actuaries into the picture, anyone who seriously advocates nuclear power should lobby to repeal the Price-Anderson Act, which requires U.S. taxpayers to shoulder the costs of a really serious accident. The fact that the industry demanded such an act suggests that they do not have confidence in their own product. If the act were repealed, we would have an idea what nuclear power really costs. As it stands, all we know is that the quoted costs are much too low.
Danger is not the only property that makes nuclear power exceptional. Even neglecting the boost in cost that would be caused by repeal of the Price-Anderson Act, the cost curve for nuclear power is remarkable. My group at the Santa Fe Institute has collected data on the cost and production of more than 100 technologies as a function of time. In contrast to all other technologies, the cost of nuclear power has remained roughly constant for 50 years, despite heavy subsidies. This cannot be blamed entirely on the cost of safety and regulation, and after Japan, is anyone really willing to say we shouldn't pay for safety? In contrast, during the same period the cost of solar power has dropped by a factor of roughly a hundred, making it currently roughly equal to nuclear. Wind power is now significantly cheaper than nuclear. Solar will almost certainly be significantly cheaper than nuclear within a decade, roughly the time it takes to build a nuclear plant.
To properly assess systemic risks, the devil is in the details; we can't debate the risks of a technology without wading through them. But from a complex systems engineering point of view, one should beware of anything that amplifies risk. Systemic risks are difficult to predict, and the precautionary principle dictates that one should take care when faced with uncertainty.
THE EDGE QUESTION BOOK SERIES
"An intellectual treasure trove ... the best three or four hours of intense, enlightening reading you can do for the new year"
Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.