2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute; President, Future of Life Institute; Author, Life 3.0
Let's Get Prepared!


To me, the most interesting question about artificial intelligence isn't what we think about it, but what we do about it.

In this regard, at the newly formed Future of Life Institute, we are engaging many of the world's leading AI researchers at a conference to discuss the future of the field. Together with top economists, legal scholars, and other experts, we are exploring all the classic questions:

—What happens to humans if machines gradually replace us on the job market? 

—When, if ever, will machines outcompete humans at all intellectual tasks? 

—What will happen afterward? Will there be a machine intelligence explosion leaving us far behind, and if so, what, if any, role will we humans play after that?

There's a great deal of concrete research that needs to be done right now to ensure that AI systems become not only capable, but also robust and beneficial, doing what we want them to do.

Just as with any new technology, it's natural to first focus on making it work. But once success is in sight, it becomes timely to also consider the technology's societal impact, and research how to reap the benefits while avoiding potential pitfalls. That's why after learning to make fire, we developed fire extinguishers and fire safety codes. For more powerful technologies such as nuclear energy, synthetic biology and artificial intelligence, optimizing the societal impact becomes progressively more important. In short, the power of our technology must be matched by our wisdom in using it.

Unfortunately, the sorely needed calls for a sober research agenda are being nearly drowned out by a cacophony of ill-informed views that permeate the blogosphere. Let me briefly catalog the loudest few.

1) Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists appear incapable of writing an AI article without a picture of a gun-toting robot. I encourage you to read our open letter for yourself and muse over how it could, within a day, be described by media as "apocalyptic" and "warning of a robot uprising."

2) "It's impossible": As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there's no law of physics preventing us from building even more intelligent quark blobs.

3) "It won't happen in our lifetime": We don't know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at the conference put the odds above 50%, so we would be foolish to dismiss the possibility as mere science fiction.

4) "Machines can't control humans": humans control tigers not because we are stronger, but because we are smarter, so if we cede our position as smartest on our planet, we might also cede control. 

5) "Machines don't have goals": Many AI systems are programmed to have goals and to attain them as effectively as possible.

6) "AI isn't intrinsically malevolent:" Correct—but its goals may one day clash with yours. Humans don't generally hate ants—but if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants.

7) "Humans deserve to be replaced": Ask any parent how they would feel about you replacing their child by a machine, and whether they'd like a say in the decision.

8) "AI worriers don't understand how computers work": This claim was mentioned at the conference, and the assembled AI researchers laughed hard.

Let's not let the loud clamor about these red herrings distract from the real challenge: The impact of AI on humanity is steadily growing, and to ensure that this impact is positive, there are very difficult research problems that we need to buckle down and work on together. Because they are interdisciplinary, involving both society and AI, they require collaboration between researchers in many fields. Because they are hard, we need to start working on them now.

First we humans discovered how to replicate some natural processes with machines, making our own wind, lightning, and mechanical horsepower. Gradually, we realized that our bodies were also machines, and the discovery of nerve cells began blurring the borderline between body and mind. Then we started building machines that could outperform not only our muscles, but our minds as well. So, in discovering what we are, will we inevitably make ourselves obsolete?

The advent of machines that truly think will be the most important event in human history. Whether it will be the best or worst thing ever to happen to humankind depends on how we prepare for it, and the time to start preparing is now. One doesn't need to be a superintelligent AI to realize that running unprepared toward the biggest event in human history would be just plain stupid.