Engines of Freedom

Intelligent machines will think about the same thing that intelligent humans do—how to improve their futures by making themselves freer.

Why think about freedom? Recent research across a range of scientific fields has suggested that a variety of intelligent-seeming behaviors may simply be the physical manifestation of an underlying drive to maximize future freedom of action. For example, an intelligent robot holding a tool will realize that it has the option of leveraging that tool to alter its environment in new ways, thus allowing it to reach a larger set of potential futures than it could without one.
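To make that drive concrete, here is a minimal, purely illustrative sketch, assuming a toy gridworld rather than any published model: an agent scores each possible move by how many distinct states would still be reachable within a short horizon, then picks the move that keeps the most futures open. The grid layout, the horizon, and the helper names (step, reachable_states, freedom_maximizing_action) are all hypothetical.

```python
# Toy illustration (hypothetical names and layout): an agent on a small grid
# values each move by how many distinct cells remain reachable afterward.

SIZE = 5                                   # grid is SIZE x SIZE
WALLS = {(1, 1), (1, 2), (2, 1)}           # an arbitrary cluster of obstacles
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}


def step(state, action):
    """Apply a move; bumping into a wall or the edge leaves the state unchanged."""
    x, y = state
    dx, dy = ACTIONS[action]
    nxt = (x + dx, y + dy)
    if nxt in WALLS or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return state
    return nxt


def reachable_states(state, horizon):
    """Collect every distinct cell reachable within `horizon` moves (breadth-first)."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in ACTIONS} - seen
        seen |= frontier
    return seen


def freedom_maximizing_action(state, horizon=4):
    """Choose the move whose successor keeps the largest set of futures open."""
    return max(ACTIONS, key=lambda a: len(reachable_states(step(state, a), horizon)))


if __name__ == "__main__":
    # From the corner, the agent favors whichever move leads toward open space,
    # purely because more futures stay reachable from there.
    print(freedom_maximizing_action((0, 0)))
```

However the details are chosen, the valuation is the same: a move is worth more when it leaves more futures open, which is the sense in which picking up a tool registers as an increase in freedom.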

After all, technological revolutions have always increased human freedom along some physical dimension. The Agricultural Revolution, with its domestication of crops, gave our hunter-gatherer ancestors the freedom to distribute their populations in new spatial patterns and at higher densities. The Industrial Revolutions yielded new engines of motion, enabling humanity to reach new levels of speed and strength. Now, an artificial intelligence revolution promises to yield machines capable of computing all the remaining ways that our freedom of action can be increased within the boundaries of physical law.

Such freedom-seeking machines should have great empathy for humans. Understanding our feelings will better enable them to achieve goals that require collaboration with us. By the same token, unfriendly or destructive behaviors would be highly unintelligent, because such actions tend to be difficult to reverse and therefore reduce future freedom of action. Nonetheless, for safety, we should consider designing intelligent machines to maximize the future freedom of action of humanity rather than their own (reproducing Asimov's Laws of Robotics as a happy side effect). Even the most selfish of freedom-maximizing machines, however, should quickly realize, as many supporters of animal rights already have, that treating humans well rationally increases the posterior probability that they themselves live in a universe in which intelligences higher than their own treat them well in turn.

We may already have a preview of what human interactions with freedom-seeking machines will look like, in the form of algorithmic financial trading. The financial markets are the ultimate honeypot for freedom-seeking artificial intelligence: wealth is arguably just a measure of freedom, and the markets tend to transfer wealth from less intelligent to more intelligent traders. It is no coincidence that one of the first attempted applications of any new artificial intelligence algorithm is nearly always financial trading. The way our society deals right now with superhuman trading algorithms may therefore offer a blueprint for future interactions with more general artificial intelligence. Among many other examples, today's market circuit breakers may generalize into centralized abilities to cut AIs off from the outside world, and today's large-trader reporting rules may generalize into requirements that advanced AIs be licensed and registered with the government. Through this lens, calls by slower human traders for stricter regulation of high-frequency algorithmic trading can be viewed as some of humanity's earliest attempts to close a nascent "intelligence divide" with thinking machines.

But how can we prevent a broader intelligence divide? According to an apocryphal story, a skeptical British Chancellor of the Exchequer asked Michael Faraday in 1850 what electricity was good for, and Faraday responded, "Why, sir, there is every probability that you will soon be able to tax it." Similarly, if wealth is just a measure of freedom, and intelligence is just an engine of freedom maximization, then intelligence divides could be addressed with progressive "intelligence taxes."

While taxing intelligence would be a rather novel way to mitigate the decoupling of human and machine economies, the decoupling problem will nonetheless require creative solutions. Already, in high-frequency trading, there is a sub-500-millisecond economy occupied by algorithms trading primarily among themselves and an above-500-millisecond economy occupied by everyone else. This example is a reminder that while spatial economic decoupling (e.g., between countries at different stages of development) has occurred for millennia, artificial intelligence is for the first time enabling temporal decoupling as well. Such decoupling arguably persists because the majority of the human economy still lives in a physical world that is not yet programmable at low latencies. That should change as ubiquitous computing matures, and eventually humanity may be incentivized to merge with its intelligent machines as latencies for even the most critical economic decisions fall below natural human response times.

In the meantime, we must continue to invest in developing machines that think benevolent thoughts, so they can become our future engines of freedom.