2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain
Artificial Intelligence Will Make You Smarter

Deep learning is today's hot topic in machine learning. Neural network learning algorithms were developed in the 1980s, but computers were slow back then and could simulate only a few hundred model neurons, with one layer of "hidden units" between the input and output layers. Learning from examples is an appealing alternative to rule-based AI, which is highly labor intensive. With more layers of hidden units between the inputs and outputs, more abstract features can be learned from the training data. Brains have billions of neurons in cortical hierarchies ten layers deep. The big question back then was how much the performance of neural networks could improve with the size and depth of the network. Answering it required not only much more computer power but also a lot more data to train the networks.
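
To make the idea of "hidden units" concrete, here is a toy sketch of the 1980s-style setup: a network with a single hidden layer trained by backpropagation. It is my own illustration, not code from that era; the dataset, layer sizes, and learning rate are arbitrary assumptions.

```python
# Toy sketch: one layer of "hidden units" between input and output,
# trained by backpropagation on a small made-up dataset.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # 200 examples, 4 input features
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # arbitrary nonlinear target

W1 = rng.normal(scale=0.5, size=(4, 8))     # input  -> 8 hidden units
W2 = rng.normal(scale=0.5, size=(8, 1))     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(3000):
    h = np.tanh(X @ W1)                     # hidden-layer activations
    p = sigmoid(h @ W2)[:, 0]               # predicted probability
    d_out = (p - y)[:, None] / len(y)       # gradient of the cross-entropy loss
    grad_W2 = h.T @ d_out
    d_hid = (d_out @ W2.T) * (1.0 - h**2)   # push the error back through the hidden layer
    grad_W1 = X.T @ d_hid
    W2 -= 1.0 * grad_W2                     # gradient-descent updates
    W1 -= 1.0 * grad_W1

print("training accuracy:", ((p > 0.5) == y).mean())
```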

After 30 years of research, a millionfold improvement in computer power, and vast data sets from the Internet, we now know the answer to this question: neural networks scaled up to 12 layers deep, with billions of connections, are outperforming the best algorithms in computer vision for object recognition and have revolutionized speech recognition. It is rare for any algorithm to scale this well, which suggests that these networks may soon be able to solve even more difficult problems. Recent breakthroughs have made it possible to apply deep learning to natural language processing. Deep recurrent networks with short-term memory have been trained to translate English sentences into French at high levels of performance. Other deep learning networks can generate English captions for the content of images with surprising, and sometimes amusing, acumen.
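
As a rough picture of what a deep recurrent network with short-term memory looks like, the sketch below wires an LSTM encoder to an LSTM decoder using the PyTorch library. It is not the translation system described above; the vocabulary sizes, dimensions, and random toy batches are assumptions chosen only to make the example run.

```python
# Minimal encoder-decoder sketch with LSTMs (illustrative sizes and toy data).
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)   # reads the source sentence
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)   # writes the target sentence
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))           # compress the source into LSTM state
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)  # condition the decoder on that state
        return self.out(dec_out)                             # scores over target words

model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (8, 12))    # batch of 8 toy "sentences", 12 tokens each
tgt = torch.randint(0, TGT_VOCAB, (8, 14))
logits = model(src, tgt)                      # shape: (8, 14, TGT_VOCAB)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, TGT_VOCAB), tgt.reshape(-1))
loss.backward()
```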

Supervised learning using deep networks is a step forward, but still far from achieving general intelligence. The functions they perform are analogous to some capabilities of the cerebral cortex, which has also been scaled up by evolution, but to solve more complex cognitive problems the cortex interacts with many other brain regions.

In 1995, Gerald Tesauro at IBM trained a neural network by reinforcement learning to play backgammon at world-champion level. The network played against itself, and the only feedback it received was which side won the game. Brains use reinforcement learning to make sequences of decisions toward achieving goals, such as finding food under uncertain conditions. Recently DeepMind, a company acquired by Google in 2014, used deep reinforcement learning to play seven classic Atari games. The only inputs to the learning system were the pixels on the video screen and the score, the same inputs humans use. For several of the games, the program learned to play better than expert humans.
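
The core of reinforcement learning, a sequence of decisions with feedback only about the eventual outcome, fits in a few lines. The sketch below is a deliberately tiny, made-up example, tabular Q-learning on a one-dimensional corridor, not TD-Gammon or the Atari player, but the update rule belongs to the same family of ideas.

```python
# Tabular Q-learning on a toy corridor: reward arrives only at the goal.
import numpy as np

N_STATES, GOAL = 10, 9          # states 0..9; reaching state 9 ends the episode
ACTIONS = [-1, +1]              # step left or step right
Q = np.zeros((N_STATES, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == GOAL else 0.0          # the only feedback: reaching the goal
        # Nudge Q toward reward plus the discounted best future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # learned policy: the agent should mostly choose +1 (move right)
```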

What impact will these advances have on us in the near future? We are not particularly good at predicting the impact of a new invention, and it often takes time for it to find its niche, but we already have one example that can help us understand how this could unfold. When Deep Blue beat Garry Kasparov, the world chess champion, in 1997, the world took note that the age of the cognitive machine had arrived. Humans could no longer claim to be the smartest chess players on the planet. Did human chess players give up trying to compete with machines? Quite the contrary: humans have used chess programs to improve their game, and as a consequence the level of play worldwide has risen. Since 1997, computers have continued to increase in power, and anyone can now access chess software that challenges the strongest players. One surprising consequence is that talented youngsters from small communities can now compete with players from the best chess centers.

Magnus Carlsen, from a small town in Norway, is currently the world chess champion with an Elo rating of 2882, the highest in history. Komodo 8 is a commercially available chess program with an estimated rating of 3303.
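
To put those two numbers in perspective, the standard Elo formula, which the ratings above presuppose even though it is not spelled out here, converts a rating gap into an expected score:

```python
# Standard Elo expected-score formula; the pairing below is illustrative.
def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

print(round(expected_score(2882, 3303), 3))   # Carlsen vs. Komodo 8: about 0.08
```

A 421-point gap leaves even the highest-rated human in history with an expected score of roughly 8 percent against the program.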

Humans are not the fastest or the strongest species, but we are the best learners. Humans invented formal schools, where children labor for years to master reading, writing and arithmetic and to learn more specialized skills. Students learn best when an adult teacher interacts with them one-on-one, tailoring lessons to that student. But education is labor intensive: few can afford individual instruction, and the assembly-line classroom system found in most schools today is a poor substitute. Computer programs can keep track of a student's performance, and some provide corrective feedback for common errors. But each brain is different, and there is no substitute for a human teacher who has a long-term relationship with the student. Is it possible to create an artificial mentor for each student? We already have recommender systems on the Internet that tell us "if you liked X, you might also like Y," based on the data of many others with similar patterns of preference.
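
A minimal version of "if you liked X, you might also like Y" can be written down directly: compare items by the overlap in who liked them. The ratings matrix below is made up for illustration; real recommender systems are far larger and more sophisticated.

```python
# Item-item recommendation by cosine similarity over a tiny made-up ratings matrix.
import numpy as np

# rows = users, columns = items; 1 means the user liked the item
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)      # cosine similarity between items
np.fill_diagonal(sim, 0.0)                    # an item should not recommend itself

liked_item = 0                                # "if you liked item 0 ..."
print("... you might also like item", int(sim[liked_item].argmax()))
```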

Someday the mind of each student may be tracked from childhood by a personalized deep learning system. Achieving that level of understanding of a human mind is beyond the capabilities of current technology, but there are already efforts at Facebook to use its vast social database of friends, photos and likes to create a theory of mind for every person on the planet. What is created to make a profit from a person could also be used to profit the person.

So my prediction is that as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.