Curtains For Us All?

Martin Rees [5.31.17]
Here on Earth, I suspect that we are going to want to regulate the application of genetic modification and cyborg techniques on grounds of ethics and prudence. This links with another topic I want to come to later about the risks of new technology. If we imagine these people living as pioneers on Mars, they are out of range of any terrestrial regulation. Moreover, they've got a far higher incentive to modify themselves or their descendants to adapt to this very alien and hostile environment.                                 
 
They will use all the techniques of genetic modification, cyborg techniques, maybe even linking or downloading themselves into machines, which, fifty years from now, will be far more powerful than they are today. The posthuman era is probably not going to start here on Earth; it will be spearheaded by these communities on Mars. 
 
LORD MARTIN REES is a Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He is the UK's Astronomer Royal and a Past President of the Royal Society.

CURTAINS FOR US ALL?

My main professional job has been as an astronomer and a space scientist. I've been very lucky to have been working through several decades now, but each decade has brought exciting new developments. What I'm thinking about at the moment is the very beginning of the universe. Can we understand the Big Bang right at the start, when we have only indirect evidence and when the physics is uncertain? In particular, can we understand how much bigger physical reality is than the part we can observe?

We can observe many galaxies, out to 13 billion light-years from us; however, there's no reason to think that that's all of physical reality, any more than the horizon you see from the middle of the ocean is the edge of the ocean. We want to know how much further reality extends beyond the domain we can see. Almost everyone thinks it goes a great deal further. It may go so far that all combinatorial options are fulfilled, that there are avatars of us far away making the right decision where we might make the wrong one. That's a possibility.

Even more interesting is that our Big Bang may not be the only one. There may be other Big Bangs, and they may give rise to universes that are governed by different physical laws. Many of the theories of fundamental physics suggest that there are many different so-called vacuum states, and they can give rise to different laws. This leads to an idea that makes some physicists foam at the mouth but which I think has to be taken seriously: anthropic reasoning. This is the idea that perhaps what we think of as the laws of nature are just parochial bylaws, and that there are some deep underlying laws, but what we see in the part of the universe we can observe are just local manifestations. If that's the case, then they're not a typical manifestation because many of these universes will be sterile or stillborn; they will not allow complex phenomena to happen—no stars, no chemistry, no life. We're in one that does allow these complexities.

Putting this multiverse picture on a firm footing will be a big challenge for the next fifty years. I won't live to see that done. I like to quote a dialogue after a talk that I gave in a panel discussion with Andrei Linde. We were asked at the end how much we would bet on this multiverse story being correct. I said that on a scale of would you bet your goldfish, or your dog, or your life, I was nearly at the dog level. Andrei Linde, a real pioneer of the subject who's spent twenty-five years developing an idea called eternal inflation, said he'd almost bet his life on this. The great theorist Steven Weinberg said he'd happily bet Martin Rees's dog and Andrei Linde's life.

That was a few years ago, and the opinion has shifted in favor of taking this seriously. I can give you some indications of this because I hosted a conference on this subject in Cambridge in 2001. We held it in the barn of my old farmhouse at the edge of Cambridge. Five years later we had another conference, which was held in the Master's Lodge of Trinity College, where the speaker stood in front of a picture of Isaac Newton. Frank Wilczek attended both of these meetings. He gave a summary talk at the second one, saying that five years ago we were a beleaguered minority, whereas now, he and I and others had led many other people into the wilderness. Here we were in front of Newton's portrait, taking seriously the idea that our part of the universe is just a tiny and atypical fragment. This is a new Copernican revolution, which is very important.

The other thing that's exciting now is, in a sense, another Copernican revolution, but cosmically on a much smaller scale. This is the realization that our solar system is just one of zillions of planetary systems around other stars. In fact, we didn't know that these planets around other stars existed even twenty years ago. People speculated they did, but the first one was found in 1995. Now, literally thousands have been discovered. It's fairly clear that most stars are orbited by retinues of planets, just as the Sun is orbited by the Earth and the other familiar planets.

Many of these planets are very different from the Earth, but enough of them are like the Earth for us to suspect that there are literally billions of planets in our galaxy (and ours is just one of billions of galaxies) on which conditions were similar to those on the young Earth, and where life could have evolved. Of course, saying a planet is habitable isn't the same as saying it's inhabited, because we then get into biology, and biology is a much harder subject than anything in the physical world. Even if we understand a planet well enough to know that it has a geology like the young Earth, we don't know how life gets started.

In fact, we don't even understand how life got started on Earth. We understand Darwinian evolution from simple life to complex life. What we don't understand is the transition from complicated chemistry to the first replicating, metabolizing structures that we'd call alive. The good news is that this subject, which was long relegated to the "too-difficult box" and which serious scientists didn't work on except as a hobby, now attracts serious attention. I'm optimistic that within ten years or so, we will have an understanding of how life began on the Earth.

That will tell us two important things. It will tell us, first, how likely it was. Was it a rare fluke, or is it something that we would expect to have happened on these other planets that are rather like the Earth? The second thing that these developments would tell us is whether there's something special about the chemistry of DNA and RNA, which all terrestrial life is based on. It would also tell us if there could be other kinds of life, maybe even life that doesn't need water.

Maybe there are planets that we wouldn't think are habitable, but which are. In our solar system, for instance, there's the moon of Saturn called Titan, which is at -160 degrees C and has lakes of liquid methane. It looks as though it would be a rather nice place. If life could be based on methane and not on water, then places like that would be habitable. These are questions that we would be able to answer. But also, we might get some direct evidence for whether planets have life on them.

At the moment, we only have indirect evidence for these planets. We don't see them, but we infer them through the effect they have on their parent star. The best way to detect them is if a planet transits in front of a star. It blocks out a bit of the light from the star so the star looks slightly fainter. A signature for a planet is if a star shows regular dips each time the planet comes around. That is a method that has led to the discovery of thousands of planets. From that kind of evidence, you can infer how big the planet is from how much of the starlight it blocks out, and what the length of its year is from how frequently you see the recurrent dips.                                 
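
To make that inference concrete, here is a rough back-of-envelope sketch in Python. The numbers are illustrative assumptions, not data from any particular survey.

```python
import math

# The transit method, in outline: the fractional dip in the star's light
# equals the ratio of the planet's disc area to the star's disc area,
#   depth = (R_planet / R_star)**2,
# and the interval between successive dips is the planet's orbital period.

# Illustrative numbers only, roughly an Earth-like planet around a Sun-like star:
transit_depth = 8.4e-5        # the star appears 0.0084 percent fainter mid-transit
stellar_radius_km = 696_000   # radius of a Sun-like star

planet_radius_km = stellar_radius_km * math.sqrt(transit_depth)
print(f"inferred planet radius: {planet_radius_km:.0f} km")   # ~6,400 km, Earth-sized

# The planet's "year" is read straight off the light curve:
days_between_dips = 365.0
print(f"orbital period: {days_between_dips:.0f} days")
```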

We have that kind of evidence, but we'd like to observe these planets directly and not just see their shadows. We will be able to do this for the nearest ones in about ten years using the James Webb Space Telescope and the next generation of big telescopes on the ground. One telescope that Europeans are building, which is unimaginatively called the Extremely Large Telescope—the ELT—has a mirror thirty-nine meters across. It's not one big sheet of glass, but rather a mosaic of almost 800 sheets of glass.

The James Webb Telescope and the ELT should be able to identify the light from a planet even though the planet is millions of times fainter than the star it's orbiting around. It will also tell us something about the atmosphere. Is there something especially green? Does it have oxygen? Is the atmosphere out of equilibrium, as Lovelock says it will be if there's life there? Those are the kinds of questions we might be able to answer within ten years. This is a very exciting development.

That kind of observation is going to tell you if there's some kind of life. What most laypeople want to know is whether there is any advanced life, intelligent life. That's a completely different question and more speculative. Even though we understand how on Earth life evolved over 4 billion years from simple proto-organisms to the biosphere that we see around us and of which we are a part, we don't know the extent to which that was inevitable and the extent to which there were contingencies.

In fact, this is a big debate among biologists. Stephen Jay Gould thought there were lots of contingencies. He thought if you were to rerun evolution, and if the dinosaurs hadn't been wiped out, then you might end up with a different biosphere with no intelligent life. Ernst Mayr thought the same thing. Others somehow feel that the evolution of life is going to be rather like what happened on Earth, that something will emerge with intelligence. Even though we're completely uncertain, it's such a fascinating question that it's worth using every possible effort to see if we can find evidence for something artificial—something beeping, some apparent artifact or something that could not be natural. That's why I'm very keen to support Yuri Milner, a Russian investor who has put a substantial sum of money, $10 million a year for ten years, into the search for extraterrestrial intelligence. This will allow a much deeper search than has been done in the past.

One day of this new search will be equivalent, in terms of depth, to all that has been done up until now by earlier searches going back to Frank Drake, Carl Sagan, and other great pioneers. It's worth a try. We don't know what to look for, but it's worth looking for any evidence of something artificial—a narrowband signal, something that is beeping in a curious way, et cetera.

What do we expect to find? My personal view is that if we find something, it is not going to be the sort of civilization that people talk about.

If we think of what's happened on Earth, there's been 4 billion years of evolution. And for a few millennia, there's been some kind of civilization—organized human groups—leading eventually to technology and the world we live in today. If we extrapolate, then of course the extrapolation we get depends on whether we listen to someone like Ray Kurzweil or someone more conservative.

Even though the rate of progress is uncertain, the direction of travel is pretty well agreed. It's almost certainly going to be towards a posthuman world, where our intelligence will be surpassed by something genetically engineered from us or, more likely, by some sort of artificial electronic device that has robotic abilities and intelligence.

Some people say that will happen within a century, others say it will happen within a few hundred years. Even if it takes a few hundred years, that is a tiny instant compared to the past history of the Earth. More importantly, it's a tiny instant compared to a long-range future. There are billions of years ahead for our solar system, and maybe even more for the universe.

If you imagine a time chart for what's happened on the Earth, there's been 4 billion years where there's been no manifestation of any technology. Then, a few millennia of gradually expanding technology generated by human beings. After that, maybe there will be billions of years more when the dominant technology, the dominant non-natural things, will be entirely inorganic. That means the following: If we were to detect some other planet on which life had taken a course similar to what happened here on Earth, it's unlikely that its development there would be sufficiently synchronized with development here that we would catch it in those few millennia in which we've got technology that is controlled by organic beings like us. If it's lagging behind what's happened on Earth, then we'll see no evidence for anything artificial.
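
A rough calculation shows how narrow that window is. The figures below are deliberately crude assumptions, chosen only to fix the orders of magnitude.

```python
# How likely are we to catch another world during its brief "organic
# technology" phase? Crude, illustrative numbers:
organic_window_years = 1e4     # a few millennia of human-style technology
planet_history_years = 1e10    # billions of years of past and future history

fraction = organic_window_years / planet_history_years
print(f"fraction of a planet's history in the organic-technology phase: {fraction:.0e}")
# ~1e-06: unless two worlds are closely synchronized, any technology we
# detect is overwhelmingly likely to belong to the long inorganic era.
```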

On the other hand, if it's ahead, then what we will detect—if we detect any evidence that that civilization existed—will be something mechanical, machines. Those machines maybe will not be on the planet because they may not want gravity, they may not want water, et cetera. They may be in space. If the Yuri Milner program detects anything, then it's likely to be some artifact created by some long-dead civilization. It's unlikely that there would be any coded message intended for us, but it might be something we could clearly see was not something that emerged naturally. That in itself would be very exciting.

Expanding on what's going to happen here on Earth that might lead to this takeover by posthumans in some form brings me to another fascinating topic: the future of manned spaceflight. This is another of the things I've been fascinated by since I was young. I remember reading about Neil Armstrong and thinking at that time that it would be only ten more years before there were human footsteps on Mars.

The Apollo program was fueled by superpower rivalry, and when the Americans had beaten the Russians to the moon, they cut back expenditure. At its peak, the Apollo program consumed more than 4 percent of the US federal budget. NASA's budget is now down to 0.6 percent, so it would take a real step change for them ever to go back and do something that trumped what the Apollo program had done.

What we've seen in the last forty years has been humans just going around in low Earth orbit, but there have been huge developments in miniaturization and in robotic probes. Think of the pictures sent back from Pluto the year before last by NASA's New Horizons. Pluto is 10,000 times further away than the moon is, and we had these very clear pictures of it. What was remarkable about those pictures was that they were based on 1990s technology. The space probe had taken ten years to get to Pluto, and the design has to be frozen several years before launch.

If you think how smartphones have evolved in the last ten or fifteen years, you can appreciate how much better we can now do in sending miniaturized probes throughout the solar system. That's what we'll do for science. I still hope that people will go, but I don't think they will go in the style of NASA astronauts. They will go more in the mode being envisaged by Elon Musk's SpaceX and these other private pioneers. They will be adventurers prepared to accept high risks.

NASA's manned program is so expensive because it's so risk-averse. The shuttles failed twice in nearly 140 launches. Each of those failures was a big national trauma because the shuttle had been sold as something safe and routine. Test pilots are happy to accept far more than a 2 percent risk. There are many adventurers who will be prepared to accept such risks, too—people like Felix Baumgartner, who fell supersonic from a helium balloon, or the British adventurer Ranulph Fiennes, who, at the age of seventy, dragged a sledge across Antarctica in the winter. They're the kinds of people who will be the first colonizers on Mars.

I don't think Elon Musk is realistic when he imagines sending people a hundred at a time for normal life because Mars is going to be far less clement than living at the South Pole, and not many people want to do that. I don't think there will be many ordinary people who want to go, but there will be some crazy pioneers who will want to go, even if they have one-way tickets.

The reason that's important is the following: Here on Earth, I suspect that we are going to want to regulate the application of genetic modification and cyborg techniques on grounds of ethics and prudence. This links with another topic I want to come to later about the risks of new technology. If we imagine these people living as pioneers on Mars, they are out of range of any terrestrial regulation. Moreover, they've got a far higher incentive to modify themselves or their descendants to adapt to this very alien and hostile environment.

They will use all the techniques of genetic modification, cyborg techniques, maybe even linking or downloading themselves into machines, which, fifty years from now, will be far more powerful than they are today. The posthuman era is probably not going to start here on Earth; it will be spearheaded by these communities on Mars.  

These are the exciting developments in space. I don't think that people will go for any practical purpose. Robots are becoming more sophisticated, and in the future all the science can be done just as well by robots as by human beings. Humans will go only for adventure, or as a spectator sport.

The other thing that will happen is that by the second half of the century, there will be huge fabricators up in space that are able to assemble large telescope dishes, solar-energy collectors, things like that, maybe mining material from the moon or from asteroids. That is something that can run away, because we get bigger and bigger machines making larger and larger artifacts. That will happen. It may be controlled from the Earth, but it won't need humans up there because it can be done by machines. This will be an entirely new technology that could be important for us on Earth—as a way of getting clean energy more efficiently, for instance.

AI and generalized machine learning are topics where I'm a follower. I'm in no sense an expert, but it's clear that they are surging ahead very fast now. Some of the key ideas were developed in the 1980s and 1990s by Geoff Hinton and others. They've only become realizable because of the greater processing power of modern computers. The learning that is done by these methods requires analyzing a huge amount of data.

For instance, the Google translation algorithms, which are very effective now, work not by being fed detailed grammatical rules, but by letting the machine read billions of pages of documents: European Union documents, for example, which exist in many languages. The machines never get bored, so they read the same texts in different languages, and eventually they work out for themselves the syntax of each language. The machines learn by crunching huge amounts of data—many, many books and pages. They learn to recognize cats and dogs by looking at literally millions of pictures. They can learn for themselves without being programmed.
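
As a toy illustration of that point (in no way Google's actual system), here is a minimal sketch using scikit-learn: nothing about either language is programmed in, and a generic model infers the distinction from labeled examples alone.

```python
# Minimal "learning from data" sketch: classify the language of a phrase
# without hand-coding any rules about French or English.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set; a real system would read billions of pages.
texts = ["le chat dort", "le chien aboie", "the cat sleeps", "the dog barks"]
labels = ["French", "French", "English", "English"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)   # word statistics are all the model ever sees

print(model.predict(["the cat barks", "le chien dort"]))  # ['English' 'French']
```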

That's a big difference between the DeepMind computer, which played Go last year and beat the world champion, and the IBM Deep Blue computer, which twenty years earlier had beaten Kasparov, the world chess champion. The chess-playing computer was programmed in detail by experts, whereas the Go-playing computer was not. It taught itself by watching and analyzing many games, and by playing against itself. Eventually, it managed to make moves that puzzled even expert players but which proved to be excellent moves. Now, a computer is able to play poker very well. That, again, is something it has taught itself.
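
The self-play idea can be shown on a toy scale. The sketch below is in no way AlphaGo: a simple tabular learner plays the game of Nim against itself, receives only win/loss feedback, and tends to discover the optimal strategy without any expert knowledge being programmed in.

```python
import random

# Self-play on the game of Nim: take 1-3 sticks per turn; whoever takes the
# last stick wins. No strategy is coded; only win/loss feedback is used.
N, ACTIONS = 13, (1, 2, 3)
value = {}   # value[(sticks, action)]: learned estimate that the move wins

def best_action(sticks, explore=0.0):
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda a: value.get((sticks, a), 0.5))

def train(games=50_000, lr=0.01):
    for _ in range(games):
        sticks, moves = N, []
        while sticks > 0:
            a = best_action(sticks, explore=0.2)
            moves.append((sticks, a))
            sticks -= a
        for i, (s, a) in enumerate(reversed(moves)):
            won = (i % 2 == 0)   # whoever moved last took the last stick and won
            old = value.get((s, a), 0.5)
            value[(s, a)] = old + lr * ((1.0 if won else 0.0) - old)

train()
print("learned opening move from 13 sticks:", best_action(13))
# The optimal move is 1 (leaving the opponent a multiple of 4); with enough
# self-play games, the learned policy usually converges on exactly that.
```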

There is this important development in generalized machine learning, which enables the machines to learn without being programmed in detail. This is an important breakthrough. We should rightly acclaim it. We can expect very rapid progress.

People sometimes say, "If you look at the history of AI, there've been these false dawns. There was one in the 1960s, and then there was another one in the 1990s, and now there's another one." This time, it is different. The reason it is different is that the field has gotten above the threshold where there's commercial interest and lots of money being thrown at it. In the past, it was just done by a few academics.

In this country the field was completely killed off when a scientist called James Lighthill wrote a report saying it would never work, which stopped all the funding. Now, it's clear that there are lots of major commercial sponsors for AI, which means it's not going to die. It will develop fast. This time it is different.

I am fortunate to have gotten to know some of the people in this field in which I'm not an expert, particularly the people at Google DeepMind, which is a company based in London. They are very keen to interact with academia, and also blur the boundary between academia and commercial work in two respects. First, they try to publish as much as they can in the open literature. They've had papers in Nature, for instance, reporting on some of their breakthroughs like the Go-playing computer. They also encourage the young people who work there to publish. Obviously, there are some things that are commercially confidential, but they try to be as open as possible.

The other feature of some of these groups doing AI is that they realize things are developing so fast that it's not premature to think about the need for regulation and guided responsible innovation to make sure that things don't go badly wrong. In this respect, it's rather like biology. The people who work on gene editing have accepted for a long time that you need to have and enforce some kind of regulation.

The people in AI feel we have to think about the same thing to try to ensure that the programs don't get loose in the Internet of Things, and also to try to think about the order in which we would like things to happen. If you think about future risks, it depends on whether A happens before B, or A happens after B. You want to try and ensure that things are done in the best order. Of course, it's going to be hard to enforce this when there are many commercial pressures involved. So far, there has been a very interesting dialogue between those in academia and those in the commercial world.

That's not just on the ethics, but also on the science. The other interest, exemplified best by Dan Dennett, is the nature of artificial intelligence: the extent to which it is like human intelligence, and the extent to which consciousness is part of it. This is a classic philosophical problem, and this is one context where philosophers can provide a perspective. Most of the techies and geeks come to these problems afresh without knowing the history of these debates about what is meant by the brain and dualism.

The AI people need to engage not only with those concerned with safety and with ensuring that the regulation is appropriate, but also with the deep philosophical questions about what the limits of AI are, what will change if we have quantum computers, and to what extent these are going to be conscious beings. Clearly, they will have more and more human capabilities. And this raises the philosophical question of whether consciousness is an emergent property of any sufficiently complicated system, or whether it's something that is special to the wet hardware in our skulls and the fact that we are linked to a body.

This is a very old question, and it's still an important one. Of course, there are implications for how we should treat these robots when they're seemingly intelligent, and what responsibility we have towards them. Most of us feel we have a responsibility to ensure that other human beings, and even some animal species, can exploit their potential, and that they're given the opportunities that they need. Are we going to have to start worrying about whether robots are underemployed or bored, as we would if they were things with a consciousness? We just don't know if we're going to have to do that.

That's a very deep question, much like the old question of how I know you're not a zombie. I only know by analogy that you're not a zombie, and that's going to be true for the AI as well. It's good that there are some philosophers who take this seriously. Dan Dennett is one. Another is my colleague Huw Price, in Cambridge. He is a philosopher of science who has also taken an interest. As an outgrowth of our Centre for the Study of Existential Risk, we are starting a new Centre for the Future of Intelligence, which is going to try to address questions of artificial intelligence and brain science from a philosophical and an ethical perspective.

There are some people who carry a trade-union card as philosophers, rather than coming from a purely humanistic background, who are starting to take this seriously. We've got to ensure that the public knows what's going on, and that there is some regulation. There's also the issue of privacy and who has access to your personal data. That's coming up in this country with medical records and whether they can be anonymized adequately. Ross Anderson talks about this. You recently published an Edge feature on him. This is a very serious issue in this country and elsewhere. We do have to address these issues.

We've got this new center in Cambridge, and we're trying to do something that is not done sufficiently. There are huge numbers of people thinking about conventional risks—carcinogens in food and low radiation doses—whereas these high-consequence, low-probability risks that are coming upon us because of technological advances and the greater interconnectedness of our world are not being studied that much. That's why I and a few other people felt we should try to do a bit towards this. Even if we can only reduce the risk of one catastrophe by one part in 1000, the stakes are so high that we'll have more than earned our keep by doing that.

It goes back to the book I wrote about thirteen years ago now called Our Final Century—in the US it was called Our Final Hour—which did address some of these concerns. There were two types of concerns that I addressed. One was the collective effect that we are having as a species because of the heavier footprint we are making on the planet. There are more of us demanding resources, and we are causing changes to the atmosphere and the climate. There's a risk of an environmental tipping point. That's one thing.

That is fairly well appreciated now, although it's insufficiently acted upon, for obvious reasons. The issues are long-term and diffuse, and politicians focus on what is local and immediate. There's a second class of threats, which was the most distinctive thing I highlighted in that book and which has stood the test of time quite well: that we're getting more vulnerable because the world is more interconnected. Small groups or even individuals are more empowered by technology. As I put it: "The global village will have its village idiots, and they will have a global range."

We see this in cyberattacks, which can be done by just one geek and which have an international effect. This is getting more serious. We are all aware of that. There will be other threats as artificial intelligence gets more powerful and more diffuse. When we have the Internet of Things, we'll all be far more vulnerable. There will be an arms race between those who are trying to prevent these attacks and those who are ever more ingenious in doing them. That's one threat: AI and cyber.

There's also the bio issue, which I find very scary. I'm not a biologist, but I talk to them. It's scary because of the very rapid developments, and because the techniques that are needed are small-scale and dual-use, available in many university and industrial labs. It's not like making a nuclear weapon, which needs conspicuous special-purpose facilities. This does worry me.

Some people say that these techniques are more powerful and that we just need more regulation. They quote the example of the famous Asilomar Conference in the 1970s, when the leaders in molecular biology got together to discuss the then-new techniques of recombinant DNA. They discussed whether there were types of experiments on which they should impose a moratorium. They came to an agreement and were, if anything, overcautious, but they managed to enforce what guidelines they felt were appropriate.

There have been similar groups convened by academies—the Royal Society and the International Academy of Science—involving some of the same people, like David Baltimore, who were involved in the old Asilomar Conference. The purpose is to try to address the risks from new techniques such as CRISPR-Cas9. There's also a new technique called gain-of-function, where people have shown that you can make the influenza virus, for instance, more virulent or transmissible.

Such experiments were done in Wisconsin and in Holland in 2012. The US federal government stopped funding them in 2014 because they seemed potentially dangerous. These new techniques, gain-of-function and CRISPR-Cas9, are clearly powerful, and they have downsides, both ethical and prudential. Therefore, everyone agrees that they need regulation.

The difference between the present state and the old Asilomar Conference in the 1970s is that now the community doing these experiments is far more global. Also, there are far more commercial pressures. What makes me very depressed and very anxious is that, even if we have these guidelines globally, I don't believe they can be effectively enforced globally, any more than the drug laws or the tax laws can be enforced globally. I worry that whatever can be done will be done somewhere by someone, and we can't stop it. We can obviously do what we can to minimize the risk, but we can't stop it. We need to think about what precautions we can take.

Of course, in order to reassure people like me, people say that biological weapons haven't been used by governments because their effects are uncontrollable. If you are a government with a well-defined aim, or even a terrorist group with a well-defined aim, then you wouldn't want to release some biological pathogen because you don't know who it's going to kill. That may be true, but for just that reason my worst nightmare is an ecology fanatic with the mindset of some of the extreme animal rights people we have in this country, someone who thinks that the world—Gaia—is being polluted or destroyed by too many human beings.

There are many people who think that, but if there's one person who thought that and had this kind of mindset, then they might think it a good idea to try to kill off as many human beings as they can. They wouldn't care who it was. Obviously, this is unlikely. You'd need to have someone with this extreme psychology, but the point is that one such person is too many because the downside could be so colossal. That is number one on my list of not entirely unrealistic scares.

If you look further ahead, then we have to ask how far these techniques will go in allowing you to design new species. That's why groups like the one we have in Cambridge are well set up to use our convening power to get some of the world's best biologists together to brainstorm, to figure out where the boundary is between what is pure science fiction and what might actually happen. They won't always be right, but they're more likely to be right working at the frontier than a random person would be.

That's why academic groups like ours and a few similar ones around the world can be helpful in trying to decide what concerns we should focus on and what we can do to minimize those risks. We should do something, but I'm pessimistic. These techniques in biology are widely disseminated, and biohacking is almost a student sport. It's going to be very possible to do these sorts of things.

Freeman Dyson, in one of his articles in the New York Review of Books, speculates that the next generation of kids may be able to design new species, just as he had a chemistry set and made new chemicals. I hope that is science fiction, but if it's not, it may be curtains for us all. If you mess up the ecology in that way, it could be dangerous.

Even in the quite short term there are issues being discussed related to the gene-modification technique called gene drive, where it's possible to affect the fertility of a particular species and wipe it out. This has been used in a seemingly benign way to try to kill off the mosquito species that carries the Zika virus. If that works, that's fine, but then people are saying, "This should be used to kill the gray squirrels in England, which are dominating the red squirrels. Everyone likes the red squirrels much better, so let's kill off all the gray squirrels."
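
A deliberately idealized model shows why a gene drive spreads so much faster than an ordinary gene. The homing rate below is a made-up parameter, and real drives face fitness costs and resistance alleles that this sketch ignores.

```python
# Idealized gene-drive spread: heterozygotes are "converted", so the driving
# allele is passed to nearly all offspring instead of the Mendelian half.
def next_freq(p, homing=0.9):
    q = 1.0 - p
    # DD parents always transmit the drive; Dd parents do so (1 + homing) / 2
    # of the time; random mating and no fitness costs are assumed.
    return p * p + 2 * p * q * (1 + homing) / 2

p = 0.01   # release drive-carrying individuals into 1 percent of the gene pool
for generation in range(12):
    print(f"generation {generation:2d}: drive-allele frequency {p:.3f}")
    p = next_freq(p)
# The frequency rockets toward 1 within about a dozen generations, which is
# why a payload attached to such a drive could crash a whole population.
```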

That may be feasible, but when you mess with ecology in that way, there's a nonzero risk of things getting out of control or, certainly, unintended consequences. Given the power of these new techniques and the fact that they are going to be usable by literally millions of people with modest equipment, we are going to have a bumpy ride, and a growing tension between privacy, security, and liberty if you want to try to minimize these risks.

These are serious worries, and despite the exciting developments, I do worry about how we are going to cope. Also, I worry that society is fragile as well as interconnected. Let me give you another example of this. Quite apart from biological weapons, natural pandemics can emerge, and we should do what we can to preempt them. If you want to stop a pandemic spreading, you've got to make sure that, say, a Vietnamese farmer notices any strange disease in his pigs or his hens, and that it gets reported and contained quickly.

If a pandemic spreads, then it will have catastrophic consequences, especially because of the social consequences. Earlier pandemics like the Black Death, or even the influenza pandemic of 1918-19, which killed many millions, didn't cause a breakdown of society. Whereas now, if we had any kind of pandemic in the UK or the US, then once the number of cases exceeded the capacity of hospitals to cope, there would be a real risk of social breakdown, because people would know the treatment was available but that they weren't getting it.

That sort of pressure and potential breakdown could happen even if only one person in 1000 were infected, simply given what the capacity of hospitals is. In that sense, our society is more vulnerable to a pandemic.
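
A back-of-envelope check on that one-in-1000 figure. The population and bed numbers below are rough assumptions for illustration, not official statistics.

```python
# Rough numbers, purely illustrative:
population = 66_000_000            # roughly the UK
attack_rate = 1 / 1000             # "only one person in 1000" sick at once
hospital_beds = 100_000            # order of magnitude for acute-care beds
spare_fraction = 0.1               # beds actually free at any given moment

patients = population * attack_rate
spare_beds = hospital_beds * spare_fraction
print(f"simultaneous patients: {patients:,.0f}; free beds: {spare_beds:,.0f}")
# ~66,000 patients against ~10,000 free beds: even a 0.1 percent attack rate
# exceeds the capacity of hospitals to cope.
```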

There's an interesting sociological question, which is that if there is a pandemic in, say, Mumbai or Lagos, one of these megacities, then it would be terrible, but does the fact that most people have mobile phones make it better or even worse? This is an important sociological question. You could say it makes it better because advice can be disseminated, but on the other hand, it allows panic and rumor to spread more rapidly.

It's not at all clear what the balance is between those two conflicting effects. That's just an example of how, in order to minimize the downside of these risks, which we can't reduce to zero, we need to think about the social science impact. Of course, there are many things we can do to minimize the risks even if we can't eliminate them. We can at least make people conscious.

The other thing that is very much at the forefront of my mind, being based in a university, is to try to promote long-term thinking. When we are interacting with people who are twenty, they may be alive at the end of the century, so they are going to be more concerned about what may happen towards the end of the century. The technology is not very predictable, but environmental change and climate change, which are not going to matter much on the timescale of a decade, are going to be more important then, so young people are more concerned with these things.

It's important to sensitize the younger generation, and indeed the wider public, to these issues. Although many politicians, certainly in this country, are very aware of these long-term threats—the effects of climate change, the risk to biodiversity—they aren't going to prioritize dealing with them unless there's public demand. Even if a science advisor to a minister makes the point—we've had good science advisors in this country, and the US had John Holdren—the advice won't be acted upon unless there's continued public pressure.

If a scientist is lucky, he or she can become a scientific advisor. But those advisors have a limited impact on political leaders. What politicians care about far more is what's in their inbox and what's in the press. In a way, the scientists who will have the biggest impact are probably those who go public and become media figures, like Carl Sagan. They influence a wide public, and that public then bangs on about the issues and feeds into the press and the politicians.

It's important that, on all these long-term issues, we should try to ensure that the public is engaged. Of course, those of us in academia can start with our students, but we want to go wider than that. Incidentally, there's one example where even the Catholic Church has had a positive effect. The Pope produced an encyclical in the summer of 2015, in which he raised concerns about the risks to biodiversity and to climate from the heavier footprint of the rising human population. He said for the first time to his flock that humans have a duty to the rest of creation. He didn't take the old attitude that man has dominion over nature. He said something that only the Franciscans had said clearly in the past, which was that people do have an obligation to nature.

Whatever one thinks of the Catholic Church, one can't contest that it has global range, a long-term vision, and concern for the world's poor. That statement, which spread around the world in the summer of 2015, and for which the Pope got a five-minute standing ovation at the UN, was an important input to the Paris conference on climate change in December 2015. Of all the climate conferences, that was the one where at least there was a consensus. It's limited in its long-term impact, but the encyclical's public impact in Latin America, East Asia, and Africa did have a big effect on making a consensus easier to forge.

It's going to be very hard to get people to cut down on CO2 emissions if that involves any sort of denial of what they want, because the effect is so diffuse and long-term. That's why I've been very much behind the campaign to increase the level of research and development into all kinds of carbon-free energy. Another outcome of the Paris conference, initiated by a British group but led by America and India together, was to persuade more than twenty countries to double their level of R&D into all kinds of clean energy generation, and into the ancillary technologies of batteries and DC grids.

The motivation for that is exciting. It's hard to think of a more inspiring goal for young engineers than providing clean and cheap energy for the developing world. This is now going to be funded at a higher level by governments and, incidentally, by Bill Gates and a group of private philanthropists who have joined in and said they will spend more on this. If it succeeds, it will speed up the availability of clean, non-carbon energy generation at an economical price. As things develop, the costs come down. The sooner the costs come down, the easier it will be for Indians, who clearly need more electric power so that they don't depend on stoves burning wood and dung, to leapfrog directly to clean energy rather than building coal-fired power stations.

The most feasible way to deal with climate change and CO2 in the long run is to accelerate as much as possible the development of carbon-free energy so that it comes down in cost and everyone will prefer it to fossil fuels.
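
One standard way to make "the costs come down" concrete is a learning curve, sometimes called Wright's law: every doubling of cumulative production cuts the unit cost by a roughly constant fraction. The numbers below are illustrative assumptions, not data for any actual technology.

```python
import math

# Wright's-law sketch: cost falls by a fixed "learning rate" per doubling
# of cumulative production, i.e. cost ~ cumulative**(-b).
def unit_cost(cumulative, initial_cost=1.0, learning_rate=0.20):
    b = -math.log2(1 - learning_rate)
    return initial_cost * cumulative ** (-b)

for doublings in range(6):
    produced = 2 ** doublings
    print(f"{produced:3d}x cumulative production -> unit cost {unit_cost(produced):.2f}")
# At 20 percent per doubling, five doublings (32x) cut the cost to about a
# third, which is why accelerating deployment now brings forward the date at
# which clean energy undercuts fossil fuels everywhere.
```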

We can't predict which future technologies will come up and surprise us just as much as the iPhone did in the past. Nor can we predict which will be taken up. There are cases where technologies allow things to happen, but there's no demand. Take, for example, supersonic airliners. Fifty years ago, some people might have thought that we'd all be flying supersonically, whereas we're flying more or less the same way as we were fifty years ago, for reasons we can understand. There was no economic pressure for supersonic flight.

Manned spaceflight is another example. Governments funded it in the past, but they don't now. It's not the case that all technologies are developed, but some are, and some are developed and run away at a huge rate. Those who control those runaway technologies are in a very powerful position. How we're going to cope with that is something that is a big challenge for all of us.