The Science of Intelligence

Max Tegmark [2.13.18]

We need to shift away from just the engineering of intelligence, where we hack something so it kind of works most of the time, to the science of intelligence, where we can prove rigorous things about this. We put a lot of effort into raising our human children so that they should adopt our values and retain them, and we need to put a commensurate effort also into raising our machines. 

MAX TEGMARK, a Swedish-American cosmologist, is a professor of physics at MIT. He is the scientific director of the Foundational Questions Institute and the president of the Future of Life Institute. He is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.


THE SCIENCE OF INTELLIGENCE

I’m very excited about artificial intelligence because everything I love about civilization is the product of intelligence. I believe that if we can amplify our intelligence with artificial intelligence, we have the opportunity to solve our thorniest problems and create a really inspiring future. For us to be able to do this, however, we have to be able to trust these AI systems.

We have to transform today’s buggy and hackable computers into robust AI systems that we trust. What is trust anyway? Trust comes from understanding, in my opinion. And we don’t particularly understand how today’s deep learning systems do what they do. What we have today is engineering of intelligence. What we need is the science of intelligence, where we, from quite basic principles, have a deep understanding of what happens so that we can rigorously prove, for example, that outcome X is never going to happen and outcome Y is guaranteed to happen.

Ironically, the recent successes of AI have taken us farther from this trust. If you go back a couple of decades to when IBM’s Deep Blue beat Garry Kasparov in chess, it was able to do this because humans had programmed in an algorithm, which they understood very well, for how to play chess. It won by being able to think faster and remember more than Garry Kasparov’s human brain. Today, most of the successes are driven instead by a different paradigm, where we have a simulated artificial neural network that learns a little bit more like a child does. You just train it on vast amounts of data, and after a while it’s able to do all these great things, but its creators have little idea how.

If you raise a human child, you have no idea how exactly it learns to speak English. Sometimes, we find that some people aren’t that trustworthy because we don’t understand how they think.

Since we don’t know how our human brains work, there is not a single person on earth that I would happily put in charge as a supreme dictator of the planet. There’s nobody I trust that much. Power corrupts; absolute power corrupts absolutely. That’s why we created democracy with a balance of power. Today, when your laptop crashes, it’s annoying, but if what crashes is the AI system that’s controlling your self-flying airplane, or your nuclear power plant, or the US nuclear arsenal, annoying isn’t going to be the word that comes to your mind.

We need this whole other level of trust before we can put computers in charge of our physical infrastructure, of our world. I’m optimistic that we can make a lot of progress there if we tackle AI not just with this engineering mentality of hacking it until it kind of works, but with a scientific attitude of understanding the physics of intelligence. What does it mean for a blob of quarks and electrons to remember, to compute, to learn, to experience?

Being a physicist, I feel that the essence of physics is to have the audacity to look for hidden simplicity where most of our colleagues tell us we’re wasting our time.

If you give a physicist a coffee cup with cream and start stirring it, someone else might say that you can never understand all these waves and vortices without understanding everything that the coffee is made of: molecules, atoms, quarks, electrons, and maybe even string theory. That turned out to be rubbish. It turned out that you could understand the waves with the wave equation and the Navier-Stokes equation before we even knew what the waves were waves in.
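
For reference, the two equations mentioned here can be written down compactly in their standard textbook forms (standard notation, not equations taken from the talk itself). Neither refers to molecules or quarks, only to macroscopic quantities: the wave amplitude u, the wave speed c, the velocity field v, the pressure p, the density ρ, and the kinematic viscosity ν:

    \frac{\partial^2 u}{\partial t^2} = c^2\,\nabla^2 u
    \qquad \text{(wave equation)}

    \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\,\mathbf{v}
      = -\frac{1}{\rho}\nabla p + \nu\,\nabla^2 \mathbf{v},
    \qquad \nabla\cdot\mathbf{v} = 0
    \qquad \text{(incompressible Navier-Stokes)}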

I am optimistic that intelligence is a lot like that, too: that there are a lot of very fundamental things about computation and learning that can be understood in a substrate-independent way, where it doesn’t matter so much whether the computation is done by carbon-based neurons or by other types of architectures. I have neuroscience colleagues, whom I greatly respect, who tell me we’re never going to understand how human intelligence works until we’ve understood why there are dozens of different kinds of neurons. And even though I respect the weight of what they do, I think they are wrong.

We have managed to build fantastically powerful AI systems that do certain tasks at the human level or better, using only one kind of neuron, an incredibly oversimplified neuron compared to what we have in our brains. That suggests to me that what really matters for intelligence isn’t a lot of detailed complexity in the neurons themselves, but the pattern in which they’re all put together. My hunch is that the reason our brain is such a mess, compared to the clean and simple mathematical neural networks that we use in our research, is simply that Darwinian evolution optimized for something very different from what a human engineer optimizes for.
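
To make "incredibly oversimplified neuron" concrete, here is a minimal sketch of the kind of artificial neuron used in such networks: just a weighted sum of inputs passed through a simple nonlinearity (plain illustrative Python; the logistic sigmoid is one common choice of activation, and real networks wire millions of these together and learn the weights from data):

    import math

    def artificial_neuron(inputs, weights, bias):
        # Weighted sum of the inputs plus a bias term.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        # Nonlinear activation: squash the result into (0, 1).
        return 1.0 / (1.0 + math.exp(-z))

    # Example: one neuron with three inputs and hand-picked weights.
    print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))

A biological neuron, with its dendritic trees, ion channels, and dozens of cell types, is vastly more complicated than these few lines; the point above is that the extra complexity may not be what the intelligence depends on.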

A human engineer cares about simplicity. Evolution doesn’t care, as long as it works. But evolution cares a great deal about the thing being able to self-assemble and self-repair, which is rather irrelevant to a human engineer. Just as the first flying machines ended up not being mechanical birds, but something much simpler, I have a feeling that the first superhuman AI we see is not going to come from understanding exactly how our messy biological brains work, but rather from understanding the basic scientific principles and doing it in a simpler way. To me, this not only suggests that human-level AI will arrive before a full understanding of our own brains; it also makes me optimistic that we can actually understand such AI better.

We understand how airplanes fly a lot better than we understand how butterflies fly because of that simplicity: there are foundational principles there. There’s real hope that we can understand a lot about the fundamental nature of computational systems, which in turn can let us build systems that we trust. Not trust because the marketing department of the company that shipped it tells us that we should trust it, but trust because we have mathematical proof.

I was recently invited to a Nobel symposium where many highly intelligent experts on artificial intelligence were asked to define intelligence. And they all totally disagreed with each other, which proves there is no well-accepted definition of that word. I’m going to tell you my definition. I define intelligence simply as the ability of a system to accomplish complex goals. What does that mean? Let me unpack it a little bit. It first of all means that it’s ridiculous to think that each system has a single number that quantifies its intelligence, like an IQ. We can see how silly this would be by making a comparison with physical abilities, sports.

Suppose I say that there’s a single number, the AQ, the athletic quotient, which measures how good somebody is at sports, and that whichever athlete has the highest AQ in the Olympics is going to win the gold medal in all the sports, from running to fencing. That would obviously be absurd, and it’s very much the same with intelligence. If you look at machines today, what you find is that they tend to have very narrow intelligence. A pocket calculator is much better than I am at the goal of multiplying numbers fast. There are systems that are much better than I am at driving a car right now.

On the other hand, humans, even though we’re inferior to machines in a lot of narrow domains, have very broad intelligence: a human child can learn almost anything reasonably well, given enough time. The holy grail of AI research, going all the way back to the beginning, has been to build a machine with artificial general intelligence, one that becomes better than humans at accomplishing not just one narrow goal, but all the intellectual goals you give it.

Even though this is something that most AI researchers view as feasible, a lot of other colleagues of mine, who are very smart people, tend to dismiss the idea of truly superhuman AI as science fiction. The problem is that we traditionally associate intelligence with something mysterious that could only exist in biological organisms, especially people. Whereas, from my perspective as a physicist, intelligence is simply a certain kind of information processing done by elementary particles moving around according to the laws of physics. There’s absolutely no law of physics that says you can’t put together particles to be more intelligent than me in every way. I resent this carbon chauvinism that says intelligent systems have to be made of cells and carbon atoms to be able to do these things. We should become more humble about this, and open to the fact that, yes, it’s perfectly plausible to do this in machines as well.

This obviously offers great opportunities to overcome our own limitations. If we can pull it off, then it’s going to be the most powerful technology ever invented. This is an old idea. It was mentioned by Wiener, by Turing, and by Irving J. Good in the ’60s: if we let go of our human hubris and accept that we can have a machine that can do all the intellectual tasks we can, better than we can, then it’s also going to be better than us at designing AI systems and inventing other technology.

And as long as you can build these things cheaply so that it costs much less to operate them than to pay my salary as a scientist, then from then on most inventions will be made by machines and not by people. A lot of things we thought might take centuries, or millennia, or millions of years for us to figure out in terms of scientific answers or technology development might actually be much closer to us in time.

It’s an exciting opportunity as long as we get it right. I get a lot of people telling me not to ever talk about possible risks because that’s just luddite scaremongering. To me though, talking about risks with AI is not luddite scaremongering, it’s just safety engineering. We simply need to think through things that could go wrong, just to make sure we make it all go right. And that’s exactly what we do in engineering when we build any other kind of product. The more power the product has, the more valuable it is to do this.

My slogan for the work we’re focusing on in my research lab at MIT is intelligible intelligence: figuring out how you can take an AI system that does something clever and then, through some automatic method, transform it into a system that does more or less equally well in a way that you can actually understand. I feel this is something we need in order to build trust in these systems. It’s completely unacceptable to put software with today’s pathetic level of cybersecurity, and the frequency of bugs that keep crashing our laptops, in charge of ever more of our infrastructure in the world.
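
One way to picture what such an automatic method could look like (a hedged sketch only, not the lab’s actual technique, and assuming numpy and scikit-learn are available) is distillation into an interpretable surrogate: train an opaque model, then fit a small, human-readable model, such as a shallow decision tree, to imitate its decisions, so that the rules driving it can be read and checked:

    # Illustrative sketch: distill an opaque model into a readable surrogate.
    # Hypothetical example; assumes numpy and scikit-learn.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

    # 1. The "clever but opaque" system: a small neural network.
    black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                              random_state=0).fit(X, y)

    # 2. Fit a shallow, human-readable tree to reproduce its decisions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # 3. Measure how faithfully the readable model mimics the opaque one,
    #    and print its rules so a human can inspect them.
    fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
    print(f"fidelity to the black box: {fidelity:.2%}")
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

If the surrogate’s fidelity is high, a human can read its handful of if-then rules instead of staring at thousands of opaque network weights; if fidelity is low, that itself is a warning that the opaque system is doing something no simple explanation captures.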

Now, in addition to the short-term challenges of bugs, it’s important to remember that whenever things get hacked, that tends to happen because of some aspect of the system we didn’t understand. It wasn’t behaving the way we thought it was, some hackers figured this out, and that’s how they took it over. If we can’t solve this problem, ultimately, all the wonderful technology we build with AI can be turned against us.

Another research question I find fascinating in terms of making AI systems you can trust is the question of goal alignment. How can you build a machine that can understand our goals, adopt our goals, and retain them as it gets ever smarter? If you tell your future self-driving taxi to take you to the airport as fast as possible, and you get there covered in vomit and chased by police helicopters, and you’re like "No, no, no, that’s not what I asked for!" and it replies, "Yes, that is exactly what you asked for," then you appreciate how hard it is for a machine to understand your goals. Being a machine, it lacks all that additional understanding that a human taxi driver would have.
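
The taxi story can be restated as a tiny optimization problem (a toy sketch with made-up numbers, not anything from the talk): an objective that counts only minutes picks the reckless plan, while one that also counts the passenger’s unstated costs picks the sensible one. The hard part of alignment is that those unstated costs never get written down.

    # Toy illustration of literal goal-following (hypothetical numbers).
    # Each plan: (description, minutes to airport, "unstated cost" the
    # passenger cares about but never specified, in equivalent minutes).
    plans = [
        ("drive normally, obey traffic laws",     30,    0),
        ("floor it, run lights, ignore comfort",  12, 1000),
    ]

    literal  = min(plans, key=lambda p: p[1])          # time only
    intended = min(plans, key=lambda p: p[1] + p[2])   # time plus unstated costs

    print("literal objective picks:  ", literal[0])    # the vomit-and-helicopters ride
    print("intended objective picks: ", intended[0])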

Moreover, even if machines understand what we want, that doesn’t mean we’ve solved the problem of making them adopt our goals. For example, parents know how hard it is to get children to adopt their goals, even when the children know perfectly well what their parents would like them to do. At first, the human child is not intelligent enough to understand what we want and what our goals are. Eventually, they are so smart that maybe they don’t want to adopt our goals.

Fortunately, we have this magical window when they are smart enough to understand it, but they're still kind of at our level so we can persuade them to do what we want. If we have a machine-learning system, we have to try to catch it in that window as well, though it can be much shorter for a machine if it’s improving itself much faster than a human child. I’m optimistic that it can be done. There are a lot of promising ideas for it.

This is a technical question that needs to be tackled by nerdy scientific types who are willing to get into details. And it’s a hard problem, too. Even if it's going to take thirty years to figure it out, we should start it now so we have the answers by the time we need them, not the night before some nerds turn on the superintelligence.

Finally, even if you get a machine to understand our goals and adopt them, which will be very useful in the short term, say, if someone has a helper robot at home, isn’t it great if it can figure out what that person values without needing to be explicitly programmed? Even if we can do that, how do we ensure that the machine retains our goals as it self-improves? Children become much less excited about Legos as they grow up than they were as toddlers. If we create what some enthusiasts call friendly AI, whose goal is to take care of humanity as well as possible, we don’t want that system to gradually get bored with humanity the way a child gets bored with Legos. This is again a technical question, where there are optimistic reasons to think it can be cracked. But it’s a tough one.

To solve these, we need to shift away from just the engineering of intelligence, where we hack something so it kind of works most of the time, to the science of intelligence, where we can prove rigorous things about this. We put a lot of effort into raising our human children so that they should adopt our values and retain them, and we need to put a commensurate effort also into raising our machines.   

~ ~ ~ ~

I spend a lot of time thinking about the future and what we can do to make it as inspiring as possible. I feel technology is crucial here. We humans used to be just passive bystanders to nature. We could witness storms come and go and so on, but we could never do much about it. And gradually, through science, we have figured out more and more about how nature works, and we’ve used this knowledge to enable us to control nature through technology.

Now, I’m quite optimistic that we can create an inspiring future with the technology as long as we win this race between the growing power of the technology and the growing wisdom with which we manage it. But this is going to be a challenge. I’m not optimistic that we’re going to win this race in the sense that I’m optimistic the sun is going to automatically rise tomorrow, regardless of what we do, but I am optimistic that we can do it if we really plan and work hard for it. 

In the past, the way we’ve managed to stay ahead in this wisdom race has typically been by learning from mistakes. We invented fire, screwed up a bunch of times, and then invented the fire extinguisher. We invented cars, screwed up a bunch, and invented the seat belt and the air bag. But by its very nature, the power of our technology keeps growing, and once it gets beyond a certain threshold, when we start getting really powerful technology like nuclear weapons and future superhuman artificial intelligence, we don’t want to learn from mistakes anymore. That’s a terrible strategy. We want to plan ahead, avoid mistakes, and get things right the first time because that’s the only time we’re likely to have.

Some of my friends and colleagues dismiss this kind of talk as luddite scaremongering. I call it safety engineering. Why was it that we managed to send Neil Armstrong, Michael Collins, and Buzz Aldrin to the moon and bring them back safely? It was precisely because of safety engineering. NASA very systematically went through all the possible things that could go wrong, not because they were alarmist scaremongers, but because that enabled them to figure out ways of preventing those things from happening and getting it right.

I feel we’re asleep at the wheel by and large as a society on this. We really need to take much more seriously the safety engineering mentality for these powerful technologies. 

Somebody recently asked me, "Hey, Max, are you against technology or are you for it?" I asked back, "Well, what about fire? Are you for it, or you against it? Obviously, I’m for fire to keep our homes warm at night, but I’m against using it for arson. Nuclear technology can be used similarly to generate electricity; it can keep the lights on here, and it can be used to start World War III."

I feel that if you look at what happened with nuclear weapons, in the beginning a bunch of idealist scientists got together in the Manhattan Project and built them because they felt it was important that America got the bomb before Hitler did. But pretty soon, they lost control over what happened with those weapons. If you look back now, in hindsight, at how we’ve managed this technology, there are some good lessons there about mistakes we don’t want to make again with AI and other powerful tech.

With nukes, we failed epically to think through the risks in advance. People used to think that the main risk from a nuclear weapon was simply getting blown up by it. They had totally underappreciated the risks of radioactive fallout. By now, the US government has paid out more than $2 billion in damages to victims of fallout, including victims in the US from its own tests. Two decades later, in the ’60s, people started to realize there is another little hitch here: the electromagnetic pulse. If you detonate a hydrogen bomb 400 kilometers above the earth’s surface, you might actually ruin electronics and power grids over a large swath of the continent.

It’s pretty sloppy to be unaware of that for so long. And it was only in the ’80s, about forty years after nuclear weapons had been built, that scientists in both Russia and the US started to realize that there was also a risk of nuclear winter, whereby the soot and smoke from as few as a thousand detonations might darken the atmosphere enough, as they spread around the earth, to trigger a global mini ice age.

Normal smoke clouds from forest fires and the like get washed out as soon as it rains on them, because that smoke sits lower than the rain clouds. But this smoke, in some situations, can make it up to 50 kilometers, far above any clouds, and stay lofted there, absorbing sunlight, which heats the soot and keeps it aloft. It can stay up there for five or ten years.

Recent climate models have suggested that you could get temperatures roughly 20 degrees Celsius (about 36 degrees Fahrenheit) lower during the year after the war, which would mean freezing temperatures in July, causing global food collapse and potentially killing most people on earth. And if this lasts for upwards of a decade, obviously, it could couple with pandemics, roving desperate gangs, and all sorts of other problems.

The main point I want to make with this is simply that by the time this even came on the radar as a potential problem, we had already deployed 63,000 hydrogen bombs around the world. It would have been good to know about this beforehand, rather than in hindsight. And I really hope that when we build more powerful technologies, we do this safety research in advance, so that we don’t get blindsided by major negative impacts that we could have thought of earlier, and so that we can make sure these ever more powerful technologies become great forces for good.

I use the word "we" a lot here because there are very few people on this planet who would like to see a global thermonuclear war between the US and Russia. It’s very interesting to ask why we nonetheless end up in a situation where we’ve had so many close calls, where we’ve almost had nuclear war by mistake, where we might very well have one this year or next year, and where the probability that we never have one drops exponentially over time.

This is not a physics question. It’s a question that is beautifully answered by John Nash and his Nash equilibrium, which looks, from a game-theory perspective, at how a group of people, each acting in their own self-interest in the moment, can end up in a situation where everybody is worse off. It’s basically the prisoner’s dilemma generalized, and the nuclear standoff that could end in global thermonuclear war is certainly one of the worst possible examples of a Nash equilibrium. A Nash equilibrium is simply a situation in which nobody can become better off by changing only their own behavior, so the only way out is to figure out mechanisms for collaboration.
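
To make the prisoner’s dilemma point concrete, here is a minimal sketch (standard textbook payoffs, not numbers from the talk) that checks by brute force that mutual defection is the only Nash equilibrium, even though both players would prefer mutual cooperation:

    # Standard prisoner's dilemma payoffs: (row player, column player).
    # C = cooperate, D = defect. Higher numbers are better.
    payoffs = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }
    actions = ["C", "D"]

    def is_nash(a, b):
        """Neither player can do better by unilaterally changing their action."""
        row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in actions)
        col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in actions)
        return row_ok and col_ok

    for a in actions:
        for b in actions:
            print(a, b, "Nash equilibrium" if is_nash(a, b) else "")
    # Only (D, D) is a Nash equilibrium, yet both players would prefer (C, C):
    # everyone acting in their own self-interest leaves everyone worse off.

The same structure, scaled up from two prisoners to many nuclear-armed states, is why everyone can end up locked into an outcome nobody wants, and why getting out requires coordination rather than smarter unilateral play.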

I started getting annoyed at myself for using the word "we" so much that I actually made a New Year’s resolution a few years back: I promised myself that I would never allow myself to spend time complaining about things again unless I, personally, spent some real time thinking about what I could do about it. With the nuclear situation, for example, I felt that part of the reason we are in this really poor Nash equilibrium is because most people are unaware of the situation.

I’ve been doing a little casual poll here, asking Uber drivers how many nuclear weapons they think there are on earth. The last two times I asked, once I was told three, the other time I was told seven. In fact, children are not taught about nuclear weapons in the Massachusetts school system. It’s not covered at all, not in science classes, nor in history classes.

We can’t blame our citizens for not knowing much if those of us who know a lot can’t be bothered to educate others. I’ve been pretty involved with a bunch of other scientists in an effort to put accessible information out there, to write some open letters, and to get the word out that what we should worry most about is not getting nuked by Kim Jong-un or Iran, but simply falling into a poor Nash equilibrium where America and Russia accidentally end up self-destructing. There’s good historical data supporting the idea that shedding more light on this helps.

If you look back toward the end of the Cold War, when we cut the nuclear arsenals from 63,000 warheads closer to today’s 14,000, it wasn’t because the Cold War ended. The biggest progress started happening before that, in the mid-’80s, and it was precisely when both Ronald Reagan and Mikhail Gorbachev found out about the science of nuclear winter and realized that maybe the doctrine of mutually assured destruction they had learned about from their advisors was incorrect. Maybe it was self-assured destruction, SAD, where if, for example, Russia managed to hit the US with all their missiles and we never retaliated at all, most Russians would still die because of the nuclear winter.

I just want to end on a final, somewhat more optimistic note. In my opinion, the most important thing that has happened with nuclear weapons since the non-proliferation treaty almost fifty years ago happened this summer, when 122 nations from around the world met at the United Nations in New York, negotiated, and voted yes on a nuclear ban treaty, analogous to the bio-weapons ban and the chemical weapons ban. The rationale was that they felt tricked by the non-proliferation treaty, under which the nuclear powers had promised to gradually get rid of their weapons, and to punish additional states that tried to join the nuclear club, if, in return, all the states that didn’t have nukes agreed to keep it that way.

Since then, Pakistan, India, and Israel have all gone out and gotten nukes, and they didn’t get punished, which goes against the treaty. Moreover, Putin and Trump are showing absolutely no interest whatsoever in seriously doing what they promised in Article Six. The thinking behind the ban is analogous to the case of the apartheid government in South Africa, which was obviously never going to give up the power it had unless it was pressured to do so by the majority.

It’s completely naïve to think that the nine nuclear powers are ever going to give up these unethical weapons of mass destruction unless they are pressured to by the rest of the world, by the majority of the world. That’s why this treaty was approved. I noticed, with great interest, that CNN didn’t cover it, Fox News didn’t cover it, The New York Times had it nowhere to be seen on the front page, even though it happened right there in New York. But at least Edge.org is covering it. Good for you, Edge.

It’s going to be very interesting to see whether the majority of the Earth’s population can start asserting themselves a little bit more here, because it’s obviously not in their interest to have Putin, Kim Jong-un, and others threatening to take the whole world down with them just because of their own problems.