2016: WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?

Rodney A. Brooks
Panasonic Professor of Robotics (emeritus); Former Director, MIT Computer Science and Artificial Intelligence Lab (1997-2007); Founder, CTO, Robust.AI; Author, Flesh and Machines
Artificial Intelligence

This year there has been an endless supply of news stories, as distinct from news itself, about Artificial Intelligence. Many of these stories concerned the opinions of eminent scientists and engineers who do not work in the field about the almost immediate danger of superintelligent systems waking up, failing to share human ethics, and proving disastrous for mankind. Others have come from people within the field about the immorality of having AI systems make tactical military decisions. Still others have come from various car manufacturers about the imminence of self-driving cars on our roads. Yet others have come from philosophers (amateur and otherwise) about how such self-driving cars will have to make life-and-death decisions.

My own opinions on these topics run counter to the popular narrative; mostly I think everyone is getting way ahead of himself or herself. Arthur C. Clarke's third law is that any sufficiently advanced technology is indistinguishable from magic. All of these news stories, and the experts driving them, seem to me to be jumping so far ahead of the state of the art in Artificial Intelligence that they are talking about a magic future variety of it, and as soon as magic is involved, any consequence one desires, or fears, can easily be derived.

There has also been a lot of legitimate news about Artificial Intelligence during 2015. Most of it centers on the stunning performance of deep learning algorithms: the back-propagation ideas of the mid-1980s, now extended by better mathematics to many more than just three network layers, and extended in computational resources by the massive compute clouds maintained by West Coast US tech titans, and by the clever use of GPUs (Graphics Processing Units) within those clouds.
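
To make concrete what "many more than just three network layers" means, here is a minimal, self-contained sketch of the mid-1980s back-propagation recipe applied to a deeper network. This is my illustration, not anything from the essay: the layer widths, the ReLU activation, and the toy XOR task are all assumptions chosen for brevity. Today's systems differ mainly in scale, running essentially this arithmetic on GPU clusters over vastly larger networks and datasets.

```python
# Illustrative sketch only: 1980s-style back-propagation through a
# network with four hidden layers (five weight matrices), trained on
# a toy XOR task. All sizes and hyperparameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

sizes = [2, 16, 16, 16, 16, 1]
weights = [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

# XOR: not linearly separable, so hidden layers actually matter.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

lr = 0.05
for step in range(5000):
    # Forward pass, keeping pre-activations for the backward pass.
    acts, pres = [X], []
    for W, b in zip(weights, biases):
        pres.append(acts[-1] @ W + b)
        acts.append(relu(pres[-1]))
    out = pres[-1]  # final layer is linear (no ReLU on the output)

    # Backward pass: the chain rule applied layer by layer,
    # which is exactly the back-propagation recipe of the mid-1980s.
    grad = 2.0 * (out - y) / len(X)  # d(mean squared error)/d(out)
    for i in reversed(range(len(weights))):
        if i != len(weights) - 1:
            grad = grad * (pres[i] > 0)  # ReLU derivative
        gW = acts[i].T @ grad            # gradient w.r.t. weights[i]
        gb = grad.sum(axis=0)            # gradient w.r.t. biases[i]
        grad = grad @ weights[i].T       # propagate to the layer below
        weights[i] -= lr * gW
        biases[i] -= lr * gb

print(np.round(out, 2))  # ideally close to [[0], [1], [1], [0]]
```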

The most immediate practical effect of deep learning is that speech-understanding systems are noticeably better than they were just two or three years ago, enabling new services on the web and on our smartphones and home devices. We can easily talk to them now and have them understand us. The frustrations of using the speech interfaces of five years ago are completely gone.

The success of deep learning has, I believe, led many people toward wrong conclusions. When a person displays a particular performance at some task, translating text from a foreign language, say, we have an intuitive understanding of how to generalize to the sort of competence that person has. For instance, we know that the person understands the language and could answer questions about a story, say, one about a child dying in a terrorist attack: which of the people in it were sad, which will mourn for months, and which felt they had achieved their goals. But the translation program likely has no such depth of understanding. The normal generalization from performance to competence that works for people cannot be applied to Artificial Intelligence programs.

Towards the end of the year we have started to see a trickle of news stories running counter to the narrative of runaway success in Artificial Intelligence. I welcome these stories, as they strike me as bringing some reality back to the debates about our future relationship with AI. There are two sorts of stories we have started to see.

The first class of stories is about the science: many researchers are now vocally pointing out that there is a lot more science to be done before we have learning algorithms that mimic the broad capabilities of humans and animals. Deep learning by itself will not solve many of the learning problems necessary for general Artificial Intelligence, for instance where spatial or deductive reasoning is involved. Further, all the breakthrough results we have seen in AI have been years in the making, and there is no scientific reason to expect a sudden and sustained series of them, despite the enthusiasm of young researchers who were not around for the last three waves of such predictions, in the 1950s, 1960s, and 1980s.

The second class of stories is about how self-driving cars and the drivers of other cars interact. When a technology puts large kinetic masses in close proximity to human beings, its rate of adoption has been much slower than that of, say, JavaScript in web browsers. There has been a naive enthusiasm that fully self-driving cars will soon be deployed on public roads. The reality is that there will be fatal accidents (even things built by incredibly smart people sometimes blow up), and those accidents will provoke levels of caution that look irrational next to the worldwide daily toll of more than 3,000 automobile fatalities caused by human drivers. The most recent news stories are documenting the high accident rate of self-driving cars under test. So far all have been minor accidents, and all are attributable to errors on the part of the other driver, the human. The cars are driving perfectly, goes the narrative, and not breaking the law the way all humans do, so it is the humans who are at fault. But when you are arguing that those pesky humans just don't get a technology, you have already lost the argument. There is a lot more work to be done before self-driving cars can be let loose in environments where ordinary people are also driving, no matter how shiny the technology seems to the engineers and VCs who are building it.

The over-hyped AI news of 2014 and 2015 is finally meeting a little pushback. There will be screams of indignation from true believers, but eventually this bubble will fade into the past. At the same time we will gradually see more and more effective uses of AI in all our lives, but the progress will be slow and steady, not explosive, and not existentially dangerous.