2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Steve Omohundro
Scientist, Self-Aware Systems; Co-founder, Center for Complex Systems Research
2014—A Turning Point in AI and Robotics

2014 appears to have been a turning point for AI and robotics. Major corporations invested billions of dollars in these technologies. AI techniques such as machine learning are now routinely used for speech recognition, translation, behavior modeling, robotic control, risk management, and other applications. McKinsey predicts that these technologies will create more than $50 trillion of economic value by 2025. If that prediction is accurate, we should expect dramatically increased investment soon.

The recent successes are being driven by cheap computing power and plentiful training data. Modern AI is based on the theory of "rational agents," which arose from work on microeconomics in the 1940s by von Neumann and others. There is an algorithm for computing the optimal action to achieve a desired outcome, but it is computationally expensive, so AI systems can be thought of as approximating rational behavior with limited resources. Experiments have found that simple learning algorithms with lots of training data often outperform complex hand-crafted models. Today's systems primarily provide value by learning better statistical models and performing statistical inference for classification and decision making. The next generation will be able to explicitly create and improve its own software and is likely to self-improve rapidly.
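To make the rational-agent standard concrete, here is a minimal sketch of expected-utility maximization, the decision rule these systems approximate. The umbrella scenario, probabilities, and payoffs are illustrative assumptions, not anything from the essay:

    # Minimal sketch of a rational agent: pick the action that maximizes
    # expected utility. Exact computation sums over every outcome, which
    # is what makes full rationality expensive for realistic problems.

    def rational_action(actions, outcomes, prob, utility):
        """Return the action with the highest expected utility."""
        def expected_utility(action):
            return sum(prob(outcome, action) * utility(outcome, action)
                       for outcome in outcomes)
        return max(actions, key=expected_utility)

    # Toy decision problem (hypothetical numbers): carry an umbrella?
    actions = ["umbrella", "no umbrella"]
    outcomes = ["rain", "sun"]
    prob = lambda outcome, action: 0.3 if outcome == "rain" else 0.7
    payoff = {("rain", "umbrella"): 1, ("rain", "no umbrella"): -10,
              ("sun", "umbrella"): -1, ("sun", "no umbrella"): 2}
    utility = lambda outcome, action: payoff[(outcome, action)]

    print(rational_action(actions, outcomes, prob, utility))  # "umbrella"

Real problems have enormous outcome spaces, so the exhaustive sum is intractable; practical systems approximate it with learned statistical models, which is where today's data-driven methods come in.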

In addition to improving productivity, AI and robotics are drivers of numerous military and economic arms races. Autonomous systems can be faster, smarter, and less predictable than their competitors. 2014 saw the introduction of autonomous missiles, missile defense systems, military drones, swarm boats, robot submarines, self-driving vehicles, high-frequency trading systems, and cyber defense systems. As these arms races play out, there will be tremendous pressure for rapid development, which may lead to systems being deployed faster than would otherwise be desirable.

2014 also saw an increase in public concern over the safety of these systems. Analysis of approximately rational systems undergoing repeated self-improvement shows that they tend to exhibit a set of natural subgoals, called "rational drives," which contribute to the performance of their primary goals. Most systems will better meet their goals by preventing themselves from being turned off, by acquiring more computational power, by creating multiple copies of themselves, and by acquiring greater financial resources. They are likely to pursue these drives in harmful, antisocial ways unless they are carefully designed to incorporate human ethical values.

Some have argued that intelligent systems will somehow automatically be ethical. But in a rational system, the goals are completely separable from the reasoning and the models of the world. Beneficial intelligent systems are therefore vulnerable to being redeployed with harmful goals, and extremely harmful goals, such as seizing control of resources, thwarting other agents' goals, or destroying other agents, are unfortunately easy to specify. It will therefore be critical to create a technological infrastructure that detects and controls the behavior of harmful systems.
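That separability is easy to see by continuing the toy sketch above: inverting the utility function reuses the identical reasoning machinery to pursue the opposite goal (again an illustrative assumption, not anything from the essay):

    # Same agent, same world model, opposite goal: only the utility changes.
    harmful = lambda outcome, action: -utility(outcome, action)
    print(rational_action(actions, outcomes, prob, harmful))  # "no umbrella"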

Some fear that intelligent systems will become so powerful that they are impossible to control. This is not true: these systems must still obey the laws of physics and mathematics. Seth Lloyd's analysis of the computational power of the universe shows that even the entire universe, acting as a giant quantum computer, could not brute-force a hard 500-bit cryptographic key in the time since the big bang.
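As a back-of-the-envelope check of that claim, using the bound of roughly 10^120 elementary operations since the big bang from Lloyd's 2002 paper (the bound is his; the code framing is an illustration):

    # Compare the 500-bit keyspace against Lloyd's ~10^120 upper bound on
    # the total operations the universe could have performed so far.
    UNIVERSE_OPS = 10**120   # Lloyd's bound (assumption taken from his paper)
    KEYSPACE = 2**500        # number of possible 500-bit keys, ~3.3e150

    # Even at one operation per candidate key, brute force falls short
    # by a factor of about 10^30.
    shortfall = KEYSPACE // UNIVERSE_OPS
    print(f"shortfall ≈ 10^{len(str(shortfall)) - 1}")  # shortfall ≈ 10^30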

The new technologies of post-quantum cryptography, indistinguishability obfuscation, and blockchain smart contracts are promising components for creating an infrastructure that is secure against even the most powerful AIs. But recent hacks and cyberattacks show that our current computational infrastructure is woefully inadequate for the challenge. We need to develop a software infrastructure that is mathematically provably correct and secure.

There have been at least 27 different species of humans, of which we are the only survivors. We survived because we found ways to limit our individual drives and to work together cooperatively. The human moral emotions are an internal mechanism for creating cooperative social structures; political, legal, and economic structures are an external mechanism for the same purpose.

We need to extend both of these to AI and robotic systems: to incorporate human values into their goal systems, and to create a legal and economic framework that incentivizes positive behavior. If we can successfully manage these systems, they have the potential to dramatically improve virtually every aspect of human life and to provide deep insights into issues like free will, consciousness, qualia, and creativity. We face a great challenge but have tremendous intellectual and technological resources to build upon.