2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Gerald Smallberg
Practicing Neurologist, New York City; Playwright, Off-Off-Broadway Productions: Charter Members, The Gold Ring
Machines Will Always Lack Feeling or Emotion

My thinking about this year's question is tempered by an observation Mark Twain made in A Connecticut Yankee in King Arthur's Court: "A genuine expert can always foretell a thing that is five hundred years away easier than he can a thing that's only five hundred seconds off." Twain was being generous: forget the five hundred seconds; we will never know with certainty even one second into the future. Yet it is precisely the ability to contemplate a future that gave Homo sapiens its great evolutionary advantage. This talent to imagine a future before it occurs has been the engine of progress, the source of creativity.

We have built machines that in simplistic ways are already "thinking," solving problems and performing tasks that we have designed for them. At this point they follow algorithms governed by rules of logic, whether "crisp" or "fuzzy." Despite vast memory and increasingly advanced processing, this intelligence is still primitive. In theory, as these machines become more sophisticated, they will at some point attain a form of consciousness, defined for the purpose of this discussion as the ability to be aware of being aware. Most likely they will accomplish this most miraculous feat by combining the properties of both silicon and carbon with digital and analog parallel processing, possibly even quantum computing, and networks that incorporate time delays.
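For readers unfamiliar with the crisp-versus-fuzzy distinction, a minimal sketch in Python may help. It is purely illustrative: the temperature example, function names, and thresholds are assumptions chosen for this sketch, not anything drawn from the essay. A "crisp" rule assigns a proposition a hard true or false, while a "fuzzy" rule assigns it a degree of truth between 0 and 1.

    # Illustrative sketch only: contrasting a "crisp" rule (hard true/false)
    # with a "fuzzy" rule (degree of truth in [0, 1]). The temperature
    # example and its thresholds are assumptions, not from the essay.

    def is_hot_crisp(temp_c: float) -> bool:
        # Crisp logic: the proposition "it is hot" is simply true or false.
        return temp_c >= 30.0

    def is_hot_fuzzy(temp_c: float) -> float:
        # Fuzzy logic: "it is hot" holds to a degree between 0 and 1,
        # ramping linearly from 20 C (not hot) to 35 C (fully hot).
        if temp_c <= 20.0:
            return 0.0
        if temp_c >= 35.0:
            return 1.0
        return (temp_c - 20.0) / 15.0

    for t in (18.0, 27.5, 36.0):
        print(t, is_hot_crisp(t), round(is_hot_fuzzy(t), 2))

At 27.5 degrees the crisp rule flatly says "not hot," while the fuzzy rule reports "hot to degree 0.5," which is the sense in which fuzzy systems reason with graded rather than all-or-nothing truth.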

Their form of consciousness, however, will be devoid of subjective feelings or emotions. There are those who argue that feelings are triggered by the thoughts and images that have become paired with a particular emotion. Fear, joy, sadness, anger, and lust are examples of emotions; feelings include contentment, anxiety, happiness, bitterness, love, and hatred. My opinion that machines will lack this aspect of consciousness is based on two considerations.

The first is an appreciation of how we came to feel and to have emotions. As human beings, we are the end product of evolution by natural selection, a process that began with the most primitive organisms approximately 3.5 billion years ago. Over this vast span of time we have not been unique in the animal kingdom in experiencing feelings and emotions. Only over the last 150,000 to 300,000 years has our species, Homo sapiens, become singular in having evolved the ability to use language and symbolic thought as part of how we reason, in order to make sense of our experiences and of the world we inhabit.

Feeling, emotion, and intellectual comprehension are inextricably intertwined with how we think. Not only are we aware of being aware, but our ability to think also enables us, at will, to remember the past and to imagine a future. Using our emotions, feelings, and reasoned thoughts, we can form a "theory of mind" and so understand the thinking of other people, which in turn has enabled us to share knowledge as we created societies, cultures, and civilizations.

The second consideration is that machines are not organisms, and no matter how complex and sophisticated they become, they will not evolve by natural selection. By whatever means machines are designed and programmed, endowing them with feelings and emotions would be counterproductive to what will make them most valuable.

The driving force behind more advanced intelligent machines will be the need to process and analyze an otherwise incomprehensible amount of information and data, to help us distinguish what is likely to be true from what is false and what is relevant from what is irrelevant. They will make predictions, since they too will have the ability to peer into the future while waiting, as will always be the case, for it to reveal its cards. To perform these tasks with accuracy and reliability, they will have to be totally rational agents, and in their decision analysis a system of moral standards will be necessary.

Perhaps it will be some calculus incorporating such utilitarian principles as "the greatest happiness of the greatest number is the measure of right and wrong," together with the Golden Rule, the foundational precept underlying many religions: one should treat others as one would like others to treat oneself. If feelings and emotions introduced subjective values, it would be a self-defeating strategy for solving the complex problems we will continue to face as we try to weigh what is best for our own species, along with the rest of the life with which we share our planet.

My experience as a clinical neurologist makes me partial to believing not only that we will be unable to read machines' thoughts but that they will be incapable of reading ours. There will be no shared theory of mind. I suspect the closest we can come to knowing this most complex of states is indirect: studying the behavior of these super-intelligent machines. In this context, they will have crossed the threshold when they start to replicate themselves and to seek a source of energy solely under their control. If this should occur, and if I am still around (a highly unlikely expectation), my judgment about whether this poses a utopian or dystopian future will be based on thinking that, as always, will be biased, since it will remain a product of analytical reasoning colored by my feelings and emotions.