2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Emeritus Professor of Physics and Astronomy, UC-Irvine; Novelist, The Berlin Project
Fear Not The AI


Frankenstein is an enduring icon, but a misleading one.

AI need not be Frankenstein, and we can trust the nay-sayers to keep it that way. Plus, trust in our most mysterious ability—invention, originality.

Take self-driving cars. What are the chances that their guiding algorithm will suddenly, deliberately kill the passenger? Zero, if you’re smart in designing it. Fear of airplane and car crashes is a useful check on low-level AIs.

Why is there a growing worry today that future algorithms will be dangerous? Because people fear malicious programming, or perhaps the unforeseen implications of algorithms that could then hurt us. Plausible on the face of it, but not really, I think.

First, our fears are our best defense. No adventurous algorithm will escape the steely glare of its many skeptical inspectors. Any AI with abilities in the physical world, where we actually live, will get a lot of inspection, plus field trials, limited-use experience, the lot. That will stop runaway uses that could do harm.

Even so, we should realize that AIs, like many inventions, are in an arms race. Computer viruses were the first example; they have been racing against virus detectors ever since I wrote the first one in 1969. But they are mere pests, not fatal.

Smart sabotage algorithms (say, future versions of Stuxnet) already float through the netsphere, and are far worse. These could quietly infiltrate many routine operations of governments and companies. Most would come from bad actors. But with "genetic programming" and "autonomous agent" software already out there, they could also mutate and evolve by chance, in Darwinian fashion, especially where no one is looking. They will get smarter still. Distributing the computation over many systems or networks would make it even harder to know how detected parts relate to some higher-order whole. So some might well escape that steely glare.

But defensive algorithms can evolve too, in Lamarckian fashion, and directed selection evolves faster than blind chance. So the steely gaze has an advantage.
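To make that faster-convergence claim concrete, here is a toy sketch, not anything from the essay: the bit-string "genome," the target, and all names and parameters are my own illustrative assumptions. It compares blind mutation (change accepted regardless of fitness, a stand-in for rogue code drifting by accident) with directed selection (change kept only when fitness does not drop, a stand-in for the defenders' deliberate pressure).

```python
# Toy illustration, not the essay's method: why directed selection
# outpaces blind mutation. A "genome" is a bit string; fitness counts
# bits matching a fixed target. The blind variant keeps every mutation
# (drift with no selection); the directed variant keeps a mutation only
# if fitness does not drop. Target, sizes, and seed are arbitrary.
import random

TARGET = [1] * 64          # arbitrary goal the algorithms "evolve" toward
GENERATIONS = 2000

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    child = genome[:]
    i = random.randrange(len(child))
    child[i] ^= 1           # flip one random bit
    return child

def evolve(directed):
    genome = [random.randint(0, 1) for _ in TARGET]
    for gen in range(GENERATIONS):
        child = mutate(genome)
        if not directed or fitness(child) >= fitness(genome):
            genome = child
        if fitness(genome) == len(TARGET):
            return gen      # generations needed to reach the target
    return GENERATIONS      # never reached it in the allotted time

random.seed(0)
print("directed selection:", evolve(directed=True), "generations")
print("blind mutation:    ", evolve(directed=False), "generations")
```

In this toy, the directed line reaches its target in a few hundred generations while the blind line wanders and typically never arrives; deliberate selection beats accidental drift, which is the advantage claimed for the defenders.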

Second: We humans are ugly, ornery and mean, sure, but we’re damned hard to kill—for a reason. We have prevailed against many enemies—predators, climate shocks, competition with other hominids—through hundreds of thousands of years, emerging as the most cantankerous species, feared by all others. The forest goes silent as we walk through it; we’re the top predator.

That gives us instincts and habits of mind revealed in matters seemingly benign, like soccer, American football and countless other ball games. We love the pursuit and handling of small, jumpy balls that we struggle to control or capture. Why? Because we once did something like that for a living—hunting. Soccer is like running down a rabbit. Similar animal energies simmer just below the surface of our society.

Any AI with ambitions to Take Over Our World (the theme of many bad sf movies) will find itself confronting an agile, angry, smart species on our own territory, the real material world, not the computational abstractions of 0s and 1s. My bet is on the animal nature.

Third: Here’s the only real worry. Of course we will get algorithms able to perform abstract tasks better than humans. Many jobs have already evaporated because of savvy software. But as AIs get smarter, will that destroy the self-confidence of most people? That’s a real danger, but a small one, I think, for most of us (and especially for those reading this).

Plenty of people have lost jobs to computers, though it’s never put that way by the Human Resources flunky who delivers the blow. Those displaced seldom feel crushed. Mostly they move on to something else. Middle managers, secretaries, route planners for trucking companies: the list of those replaced by software is endless.

We have learned to deal with that, fairly well at least. There are many unemployed in Europe, especially the young. But overall we work through this without retreating into Luddite frenzy. What we cannot deal with so well is a threat only now appearing as a small, distant dark cloud on the far horizon: AIs that perform better than we do at the very highest levels.

This small cloud need not concern us now. It may never appear. Right now we have trouble making an AI that passes the Turing Test. The future landscape will look clearer a decade or two ahead, and then we can think about an AI that can solve, say, the general relativity/quantum mechanics riddle.

Personally, I’d like to see a machine that takes on that task. Originality, the really hard part of being smart and still not understood even in humans, is so far undemonstrated in AIs. Our unconscious seems integral to our creativity (we don’t have ideas; they have us), so should an AI have an unconscious too? Maybe even clever programming and random evolution cannot produce one.

If that can happen, if that huge obstacle can be surmounted someday, and we get such an AI, I will not fear it—I have some good questions to ask it.