Journalist; Author, Head in the Cloud; Nominated twice for the Pulitzer Prize
Can Submarines Swim?

My favorite Edsger Dijkstra aphorism is this one: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." Yet we keep playing the imitation game: asking how closely machine intelligence can duplicate our own intelligence, as if that is the real point. Of course, once you imagine machines with human-like feelings and free will, it's possible to conceive of misbehaving machine intelligence—the AI as Frankenstein idea. This notion is in the midst of a revival, and I started out thinking it was overblown. Lately I have concluded it's not.

Here's the case for overblown. Machine intelligence can go in so many directions. It is a failure of imagination to focus on human-like directions. Most of the early futurist conceptions of machine intelligence were wildly off base because computers have been most successful at doing what humans can't do well. Machines are incredibly good at sorting lists. Maybe that sounds boring, but think of how much efficient sorting has changed the world.

In answer to some of the questions brought up here, it is far from clear that there will ever be a practical reason for future machines to have emotions and an inner dialogue; to pass for human under extended interrogation; or to desire, and be able to make use of, legal and civil rights. They're machines, and they can be anything we design them to be.

But that's the point. Some people will want anthropomorphic machine intelligence. How many videos of Japanese robots have you seen? Honda, Sony, and Hitachi already expend substantial resources on making cute AI that has no concrete value beyond corporate publicity. They do this for no better reason than that tech enthusiasts grew up seeing robots and intelligent computers in movies.

Almost anything that is conceived—that is physically possible and reasonably cheap—is realized. So human-like machine intelligence is a meme with manifest destiny, regardless of practical value. This could entail nice machines-that-think, obeying Asimov's laws. But once the technology is out there, it will get ever cheaper and filter down to hobbyists, hackers, and "machine rights" organizations. There is going to be interest in creating machines with will, whose interests are not our own. And that's without considering what machines terrorists, rogue regimes, and the intelligence agencies of less roguish nations may devise. I think the notion of Frankensteinian AI, which turns on its creators, is worth taking seriously.