2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

S. Abbas Raza
Founding Editor, 3QuarksDaily.com
The Values Of Artificial Intelligence

The rumors of the enslavement or death of the human species at the hands of an artificial intelligence are highly exaggerated, because they assume that an AI will have a teleological autonomy akin to our own. I don't think anything less than a fully Darwinian process of evolution can give that to any creature.

There are basically two ways in which we could produce an AI. The first is by writing a comprehensive set of programs that can carry out specific tasks that human minds can perform, perhaps even faster and better than we can, without worrying about exactly how humans perform those tasks, and then bringing those modules together into an integrated intelligence. We have already started this project and succeeded in some areas: computers, for example, now play chess better than humans. One can imagine that, with some effort, it may well be possible to program computers to perform even more creative tasks, such as writing music or poetry that is beautiful (to us), using clever heuristics and built-in knowledge.
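
To make this first route concrete, here is a minimal sketch of the recipe behind game-playing programs: exhaustive search of the game tree combined with built-in knowledge of the rules. Tic-tac-toe stands in for chess, since its tree is small enough to search completely; the names below are illustrative only, not any real engine's code.

    # A toy illustration of the first route: game-tree search plus
    # built-in knowledge of the rules, with no model of human thought.
    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
        w = winner(board)
        if w is not None:
            return (1 if w == 'X' else -1), None
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0, None  # board full, no winner: a draw
        best = None
        for m in moves:
            board[m] = player                       # try the move...
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = ' '                          # ...and undo it
            if (best is None
                    or (player == 'X' and score > best[0])
                    or (player == 'O' and score < best[0])):
                best = (score, m)
        return best

    score, move = minimax([' '] * 9, 'X')
    print(score, move)  # score 0: with perfect play the game is a draw

A chess program cannot search its tree to the end, so it stops at a fixed depth and falls back on a hand-coded evaluation function; either way, the machine performs the task with no account of how a human performs it.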

But here's the problem with this approach: We deploy our capabilities according to values and constraints programmed into us by billions of years of evolution (and some learned during our lifetimes), and we share some of these values with the earliest life-forms, including, most importantly, the need to survive and reproduce. Without these values we would not be here, nor would we have the emotions, finely tuned to our environment, that allow us not only to survive but also to cooperate with others. The importance of this value-laden emotional side of our minds is made obvious by, among other things, the many perfectly rational individuals who cannot function in society because of damage to the emotional centers of their brains.

So, what values and emotions will an AI have? One could simply program such values into an AI, in which case we choose what the AI will "want" to do, and we needn't worry about the AI pursuing goals that diverge from ours. We could easily make the AI unable to modify certain basic imperatives we give it. (Yes, something like a more comprehensive version of Isaac Asimov's Laws of Robotics.)
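
To make the idea of unmodifiable imperatives a little more concrete, here is a bare sketch in the same spirit; the class and action names are purely illustrative, not any real framework. The prohibitions are fixed when the agent is built, every candidate action is checked against them, and the agent is given no code path for rewriting them.

    # A toy sketch of designer-imposed imperatives the agent cannot revise.
    class ConstrainedAgent:
        def __init__(self, forbidden):
            # Frozen at construction: the agent holds no method that alters it.
            self._forbidden = frozenset(forbidden)

        def act(self, candidate_actions):
            """Carry out the first candidate action the imperatives permit."""
            for action in candidate_actions:
                if action not in self._forbidden:
                    return action
            return "do_nothing"  # default to inaction if all else is forbidden

    agent = ConstrainedAgent(forbidden={"harm_human", "rewrite_own_imperatives"})
    print(agent.act(["harm_human", "fetch_coffee"]))  # -> fetch_coffee

A real system would be incomparably more complicated, but the sketch makes the essential point: the "wants" originate with the designers, not with the machine.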

The second way to produce an AI is by deciphering in detail how the human brain works. It's conceivable that there may soon come a eureka moment about the structure and conceptual hierarchy of the brain—similar to the discovery of the structure of DNA by Watson, Crick, Franklin, and Wilkins and the rapid understanding of the mechanism of heredity that followed. We might then simulate or reproduce that functional structure in silicon, or some other substrate, as a mixture of hardware and software.

At first blush, this may seem a convenient way to quickly bestow on an AI the benefit of our own long period of evolution, as well as a way to give it values of its own by functionally reproducing the emotional centers of our brains along with the "higher thought" parts, such as the cortex. But our brains are specifically designed to accept information from the vast sensory apparatus of our bodies and to react to it. What would the equivalent be for an AI? Even given a sophisticated body with massive sensory capability, what an AI would need to survive in the world is presumably very different from what we need. It could achieve some emotional tuning by interacting with its environment, but to develop true autonomy and desires of its own it would need nothing short of a long process of evolution, entailing the Darwinian requirements of reproduction with variability and natural selection. This it won't have, because we are not speaking of artificial life here. So, again, we'll end up giving it whatever values we choose for it.
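
For contrast, here is the barest computational sketch of the Darwinian ingredients just named: reproduction with variability and natural selection. Everything in it (bitstring genomes, the fitness function, the mutation rate) is an arbitrary toy choice.

    import random

    # A toy Darwinian loop: copy with variation, then select.
    random.seed(0)
    GENOME_LEN, POP_SIZE, MUTATION_RATE = 20, 30, 0.05

    def fitness(genome):
        return sum(genome)  # a stand-in for "how well this creature survives"

    def mutate(genome):
        # Reproduction with variability: each bit may flip when copied.
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(50):
        # Natural selection: only the fitter half survives to reproduce.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring

    print(max(fitness(g) for g in population))  # climbs toward GENOME_LEN

Whatever "values" such a process produces are earned over many generations, as lineages that lack them are filtered out; copying a brain's functional structure into silicon involves nothing of the kind.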

It's, of course, conceivable that someone will produce intelligent robots as weapons (or soldiers) to be used against other humans in war, but these weapons will simply carry out the intentions of their creators and, lacking any will or desire of their own, will pose no more of a threat to humanity at large than other weapons already do. So both potential roads to an AI (at least, ones achievable on a less-than-geological timescale) will fail to give that AI the purposive autonomy, free of the intentionality of its creators, that might actually threaten us.