Those of us fortunate enough to live in the developed world fret too much about minor hazards of everyday life: improbable air crashes, carcinogens in food, and so forth. But we are less secure than we think. We should worry far more about scenarios that have thankfully not yet happened—but which, if they occurred, could cause such world-wide devastation that even once would be too often.
Much has been written about possible ecological shocks triggered by the collective impact of a growing and more demanding world population on the biosphere, and about the social and political tensions stemming from scarcity of resources or climate change. But even more worrying are the downsides of powerful new technologies: cyber-, bio-, and nano-. We're entering an era when a few individuals could, via error or terror, trigger a societal breakdown with such extreme suddenness that palliative government actions would be overwhelmed.
Some would dismiss these concerns as an exaggerated jeremiad: after all, human societies have survived for millennia, despite storms, earthquakes and pestilence. But these human-induced threats are different: they are newly emergent, so we have a limited timebase for exposure to them and can't be so sanguine that we would survive them for long – nor about the ability of governments to cope if disaster strikes. And of course we have zero grounds for confidence that we can survive the worst that even more powerful future technologies could do.
The 'anthropocene' era, when the main global threats come from humans and not from nature, began with the mass deployment of thermonuclear weapons. Throughout the Cold War, there were several occasions when the superpowers could have stumbled toward nuclear Armageddon through muddle or miscalculation. Those who lived anxiously through the Cuba crisis would have been not merely anxious but paralytically scared had they realized just how close the world then was to catastrophe. Only later did we learn that President Kennedy assessed the odds of nuclear war, at one stage, as "somewhere between one in three and even." And only when he was long retired did Robert McNamara state frankly that "[w]e came within a hairbreadth of nuclear war without realizing it. It's no credit to us that we escaped—Khrushchev and Kennedy were lucky as well as wise".
It is now conventionally asserted that nuclear deterrence worked—in a sense, it did. But that doesn't mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not, but the stakes would need to be astonishingly high—or the value you place on your life inordinately low—for this to seem a wise gamble.
But we were dragooned into just such a gamble throughout the Cold War era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they'd been asked to give informed consent. For my part, I would not have chosen to risk a one in three—or even one in six—chance of a disaster that would have killed hundreds of millions and shattered the physical fabric of all our cities, even if the alternative were a certainty of a Soviet invasion of Western Europe. And of course the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat.
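The arithmetic behind the roulette analogy can be made explicit. A minimal sketch (the function and the ten-crisis figure are illustrative assumptions, not from the essay): with one bullet in a six-chamber revolver a single spin is survived five times in six, but the odds of surviving the gamble fall geometrically as it is repeated.

```python
# Illustrative only: cumulative survival odds when a Russian-roulette-style
# gamble is repeated. With b bullets in a c-chamber revolver, one round is
# survived with probability (c - b) / c; n independent rounds are survived
# with probability ((c - b) / c) ** n.

def survival_probability(bullets: int, rounds: int, chambers: int = 6) -> float:
    """Probability of surviving `rounds` independent spins with `bullets` loaded."""
    per_round = (chambers - bullets) / chambers
    return per_round ** rounds

# One bullet, one spin: 5/6, i.e. "more likely to survive than not".
print(f"{survival_probability(1, 1):.2f}")   # 0.83

# Repeat the gamble through ten crises and the odds invert.
print(f"{survival_probability(1, 10):.2f}")  # 0.16
```

The point the sketch makes is the essay's: a risk that looks tolerable on any single occasion becomes near-certain ruin when the same gamble is run again and again, as it arguably was throughout the Cold War.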
The threat of global annihilation involving tens of thousands of H-bombs is thankfully in abeyance—even though there is now more reason to worry that smaller nuclear arsenals might be used in a regional context, or even by terrorists. But when we recall the geopolitical convulsions of the last century—two world wars, the rise and fall of the Soviet Union, and so forth—we can't rule out, later in the present century, a drastic global realignment leading to a standoff between new superpowers. So a new generation may face its own "Cuba"—and one that could be handled less well or less luckily than the Cuba crisis was.
We will always have to worry about thermonuclear weapons. But a new trigger for societal breakdown will be the environmental stresses consequent on climate change. Many still hope that our civilisation can segue towards a low-carbon future without trauma and disaster. My pessimistic guess, however, is that global annual CO2 emissions won't be turned around in the next 20 years. But by then we'll know—perhaps from advanced computer modelling, but also from how much global temperatures have actually risen by then—whether or not the feedback from water vapour and clouds strongly amplifies the effect of CO2 itself in creating a 'greenhouse effect'.
If these feedbacks are indeed important, and the world consequently seems on a rapidly warming trajectory because international efforts to reduce emissions haven't been successful, there may be pressure for 'panic measures'. These would have to involve a 'plan B'—being fatalistic about continuing dependence on fossil fuels, but combating its effects by some form of geoengineering.
That would be a political nightmare: not all nations would want to adjust the thermostat the same way, and the science would still not be reliable enough to predict what would actually happen. Even worse, techniques such as injecting dust into the stratosphere, or 'seeding' the oceans, may become cheap enough that plutocratic individuals could finance and implement them. This is a recipe for dangerous and possibly 'runaway' unintended consequences, especially if some want a warmer Arctic whereas others want to avoid further warming of the land at lower latitudes.
Nuclear weapons are the worst downside of 20th century science. But there are novel concerns stemming from the impact of fast-developing 21st century technologies. Our interconnected world depends on elaborate networks: electric power grids, air traffic control, international finance, just-in-time delivery and so forth. Unless these are highly resilient, their manifest benefits could be outweighed by catastrophic (albeit rare) breakdowns cascading through the system.
Moreover a contagion of social and economic breakdown would spread worldwide via computer networks and 'digital wildfire'—literally at the speed of light. The threat is terror as well as error. Concern about cyber-attack, by criminals or by hostile nations, is rising sharply. Synthetic biology, likewise, offers huge potential for medicine and agriculture—but it could facilitate bioterror.
It is hard to make a clandestine H-bomb, but millions will have the capability and resources to misuse these 'dual use' technologies. Freeman Dyson looks towards an era when children can design and create new organisms just as routinely as he, when young, played with a chemistry set. Were this to happen, our ecology (and even our species) would surely not survive unscathed for long. And should we worry about another SF scenario—that a network of computers could develop a mind of its own and threaten us all?
In a media landscape oversaturated with sensational science stories, "end of the world" Hollywood productions, and Mayan apocalypse warnings, it may be hard to persuade the wider public that there are indeed things to worry about that could arise as unexpectedly as the 2008 financial crisis, and have far greater impact. I'm worried that by 2050 desperate efforts to minimize or cope with a cluster of risks with low probability but catastrophic consequences may dominate the political agenda.