Identifying The Principles, Perhaps The Laws, Of Intelligence

Annual Question: 

The most important news for me came in mid-2015, when three scientists, Samuel J. Gershman, Eric J. Horvitz, and Joshua Tenenbaum, published “Computational rationality: A converging paradigm for intelligence in brains, minds, and machines” in Science, 17 July 2015. They announced that they and their colleagues had something new underway: an effort to identify the principles, perhaps the laws, of intelligence, just as Newton once discovered the laws of motion.

Formerly, any commonalities among a stroll in the park, the turbulence of a river, the revolution of a carriage wheel, the trajectory of a cannonball, or the paths of the planets seemed preposterous. It was Newton who found the underlying generalities that explained each of them (and so much more) at a fundamental level.

Now comes a similarly audacious pursuit to subsume under general principles, perhaps even laws, the essence of intelligence wherever it’s found. “Truth is ever to be found in simplicity, and not in the multiplicity and confusion of things,” Newton said.

So far as intelligence goes, we are pre-Newtonian. Commonalities of intelligence shared by cells, dolphins, plants, birds, robots and humans seem, if not preposterous, at least far-fetched.

Yet rich exchanges among artificial intelligence, cognitive psychology, and the neurosciences, for a start, aim exactly toward Newton’s “truth in simplicity,” those underlying principles (maybe laws) that will connect these disparate entities together. The pursuit’s formal name is computational rationality. What, exactly, is it? Who, or what, exhibits it?

The pursuit is inspired by the general agreement in the sciences of mind that intelligence arises not from the medium that embodies it, whether biological or electronic, but from the way interactions among elements in the system are arranged. Intelligence begins when a system identifies a goal, learns (from a teacher, a training set, or an experience), and then moves on autonomously, adapting to a complex, changing environment. Another way of looking at this is that intelligent entities are networks, often hierarchies, of intelligent systems, humans certainly among the most complex, but congeries of humans even more so.

The three scientists postulate that three core ideas characterize intelligence. First, intelligent agents have goals, and they form beliefs and plan actions that will best reach those goals. Second, calculating the ideal choice may be intractable for real-world problems, but rational algorithms can come close enough (they “satisfice,” in Herbert Simon’s term) while incorporating the costs of computation. Third, these algorithms can be rationally adapted to the entity’s specific needs, either offline through engineering or evolutionary design, or online through meta-reasoning mechanisms that select the best strategy on the spot for a given situation.
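The second and third ideas can be made concrete with a toy sketch. What follows is not the authors’ formalism from the Science paper, only an illustration under simplifying assumptions: two hypothetical decision strategies with made-up costs, and a meta-reasoner that charges each strategy for its computation before comparing their answers.

```python
# Toy illustration of satisficing and meta-reasoning: an agent weighs
# the value of a strategy's answer against the cost of computing it.
# Strategies, costs, and rewards here are invented for illustration.

def exhaustive_search(options):
    """High-quality answer, but expensive to compute."""
    return max(options), 5.0  # (chosen value, computation cost)

def quick_heuristic(options):
    """Rougher answer (just takes the first option), nearly free."""
    return options[0], 0.1

def meta_reason(options, strategies):
    """Pick the strategy whose answer is best net of computation cost.

    (A real meta-reasoner would estimate these quantities without
    actually running every strategy; this toy runs them all.)
    """
    best_net, best_choice = float("-inf"), None
    for strategy in strategies:
        choice, cost = strategy(options)
        net = choice - cost  # value of the answer minus cost to get it
        if net > best_net:
            best_net, best_choice = net, choice
    return best_choice

options = [7, 3, 9, 4]
# Exhaustive search finds 9 but nets 9 - 5.0 = 4.0; the heuristic
# finds 7 and nets 7 - 0.1 = 6.9, so the cheap strategy wins.
print(meta_reason(options, [exhaustive_search, quick_heuristic]))  # → 7
```

The point of the toy: once computation itself has a price, the “irrational”-looking shortcut can be the rational choice, which is exactly the satisficing insight the paragraph above describes.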

Though barely begun, the inquiry into computational rationality is already large and embraces multitudes. For example, biologists now talk easily about cognition, from the cellular to the symbolic level. Neuroscientists can identify computational strategies shared by both humans and animals. Dendrologists can show that trees communicate with each other (slowly) to warn of nearby enemies, like wood beetles: activate the toxins, neighbor.

The humanities themselves are comfortably at home here too, though it’s taken many years for most of us to see that. And of course here belongs artificial intelligence, a key illuminator, inspiration, and provocateur.

It’s news now; it will stay news because it’s so fundamental; its evolving revelations will help us see our world, our universe, in a completely new way. And for those atremble at the perils of super-intelligent entities, surely understanding intelligence at this fundamental level is one of our best defenses. 


[ Sat. Dec. 5. 2015 ]