John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University; Author, Make School Meaningful-And Fun!

When reporters interviewed me in the '70s and '80s about the possibilities for Artificial Intelligence, I would always say that we would have machines as smart as we are within my lifetime. It seemed a safe answer, since no one could ever tell me I was wrong. But I no longer believe that will happen. One reason is that I am a lot older and we are barely closer to creating smart machines.

I have not soured on AI. I still believe that we can create very intelligent machines. But I no longer believe that those machines will be like us. Perhaps it was the movies that led us to believe that we would have intelligent robots as companions. (I was certainly influenced early on by 2001.) Most AI researchers believed that creating machines that were our intellectual equals or better was a real possibility. Early AI workers sought out intelligent behaviors to focus on, like chess or problem solving, and tried to build machines that could equal human beings in those same endeavors. While this was an understandable approach, it was, in retrospect, wrong-headed. Chess playing is not really a typical intelligent human activity. Only some of us are good at it, and it seems to entail a level of cognitive processing that, while impressive, is quite at odds with what makes humans smart. Chess players are methodical planners. Human beings are not.

Humans are constantly learning. We spend years learning some seemingly simple stuff. Every new experience changes what we know and how we see the world. Being reminded of our previous experiences helps us process new experiences better than we did the time before. Doing that depends upon an unconscious indexing method that all people learn without quite realizing they are learning it. We spend twenty years (or more) learning how to speak properly, how to make good decisions, and how to establish good relationships. But we tend not to know what we know. We can speak properly without knowing how we do it. We don't know how we comprehend. We just do.

All this poses a problem for AI. How can we imitate what humans are doing when humans don't know what they are doing when they do it? This conundrum led to a major failure in AI: expert systems, which relied upon rules that were supposed to characterize expert knowledge. But the defining characteristic of experts is that they get faster as they know more, while adding rules made expert systems slower. Rules, it turned out, were not at the center of intelligent behavior; the flaw was relying upon specific, consciously stated knowledge instead of trying to figure out what people mean when they say they just knew it when they saw it, or that they had a gut feeling.
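The scaling problem with rule-based expert systems can be made concrete with a toy sketch. This is not any particular historical system; the rules and facts below are invented for illustration. A naive forward-chaining matcher must rescan its whole rule base on every cycle, so the more rules you add, the slower each cycle gets, which is the opposite of how human experts behave.

```python
# Toy forward-chaining rule matcher. Each cycle scans every rule,
# so cost grows with the size of the rule base: more knowledge
# makes this system slower, not faster.

def forward_chain(facts, rules):
    """Apply (premises, conclusion) rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:  # O(len(rules)) per cycle
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules for illustration only.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "order_blood_test"),
]
print(forward_chain({"has_fever", "has_rash"}, rules))
```

Every rule added to `rules` lengthens the scan in the inner loop, whether or not it ever fires; human experts, by contrast, seem to retrieve the relevant experience directly.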

People give reasons for their behaviors, but they are typically figuring that stuff out after the fact. We reason non-consciously and explain rationally later. Humans dream. There obviously is some important utility in dreaming. Even if we don't understand precisely what the consequences of dreaming are, it is safe to assume that it is an important part of the unconscious reasoning process that drives our decision making. So an intelligent machine would have to dream because it needed to; it would have to have intuitions that proved to be good insights; and it would have to have a set of driving goals that made it see the world in a way that a different entity with different goals would not. In other words, it would need a personality, and not one that was artificially installed but one that came with the territory of what it was about as an intelligent entity.

What AI can and should build are intelligent special-purpose entities. (We can call them Specialized Intelligences, or SI's.) Smart computers will indeed be created. But they will arrive in the form of SI's: ones that make lousy companions but know every shipping accident that ever happened and why (the shipping industry's SI), or that serve as an expert on sales (a business-world SI). The sales SI, because sales is all it ever thought about, would be able to recite every interesting sales story that had ever happened and the lessons to be learned from it. For a salesman about to call on a customer, for example, this SI would be quite fascinating. We can expect a foreign-policy SI that helps future presidents learn about the past in a timely fashion and helps them make decisions, because it knows every decision the government has ever made and has cleverly indexed them so as to be able to apply what it knows to current situations.
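The indexing that such an SI would depend on can be sketched minimally: stories filed under the features of the situations they describe, so that a new situation retrieves the past cases most like it. The class, the stories, and the feature tags below are all hypothetical, invented only to show the shape of the idea.

```python
# Minimal sketch of feature-indexed story retrieval for a
# hypothetical sales SI. Stories are indexed by situation features;
# a query ranks stories by how many features they share with the
# current situation.

from collections import defaultdict

class CaseLibrary:
    def __init__(self):
        self.index = defaultdict(set)  # feature -> set of story ids
        self.stories = {}              # story id -> story text

    def add(self, story_id, text, features):
        self.stories[story_id] = text
        for f in features:
            self.index[f].add(story_id)

    def retrieve(self, situation_features, top=3):
        # Score each story by its overlap with the situation's features.
        scores = defaultdict(int)
        for f in situation_features:
            for sid in self.index.get(f, ()):
                scores[sid] += 1
        ranked = sorted(scores, key=scores.get, reverse=True)
        return [self.stories[sid] for sid in ranked[:top]]

lib = CaseLibrary()
lib.add(1, "Lost a deal by quoting price before showing value.",
        {"pricing", "first_meeting"})
lib.add(2, "Won a renewal by involving the customer's engineers.",
        {"renewal", "technical_buyer"})
print(lib.retrieve({"first_meeting", "pricing"}))
```

Unlike the rule base above, retrieval here touches only the stories indexed under the features of the current situation, so a larger library does not slow down a query about a narrow situation.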

So AI, in the traditional sense, will not happen in my lifetime, nor in my grandson's. Perhaps a new kind of machine intelligence will one day evolve and be smarter than us, but we are a really long way from that.