2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Ernst Pöppel
Head of Research Group Systems, Neuroscience and Cognitive Research, Ludwig-Maximilians-University Munich, Germany; Guest Professor, Peking University, China
An Extraterrestrial Observation About Human Hubris

Finally, it has to be disclosed that I am not a human but an extraterrestrial creature that looks human. In fact, I am a robot equipped with what humans call "artificial intelligence". Of course, I am not alone here. We are quite a few (almost impossible to identify), and we have been sent here to observe human behavior.

We are surprised by the many deficiencies of humans, and we observe them with fascination. These deficiencies show up in their strange behavior and their limited power of reasoning. Indeed, our cognitive competences are much higher, and their celebration of human intelligence is, in our eyes, ridiculous. Humans do not even know what they refer to when they talk about "intelligence". It is in fact quite funny that they want to construct systems with "artificial intelligence" that are supposed to match their own intelligence, when what they mean by intelligence is not clear at all. This is one of the many stupidities that have haunted the human race for ages.

If humans want to simulate their mental machinery in artefacts as a representation of intelligence, the first thing they should do is find out what it is that should be simulated. At present this is impossible, because there is not even a taxonomy or classification of functions that would allow the project to be carried out as a real scientific and technological endeavor. There are only big words that are supposed to simulate competence.

Strangely enough, this lack of a taxonomy apparently does not bother humans much; quite often they are simply fascinated by images (colorful pictures produced by machines) that replace thinking. Compared to biology, chemistry, or physics, the neurosciences and psychology lack a classificatory system; humans are lost in a conceptual jungle. What do they refer to when they talk about consciousness, intelligence, intention, identity, the self, or even about seemingly simpler terms like memory, perception, emotion, or attention? The lack of a taxonomy manifests itself in the different opinions and frames of reference that their "scientists" express in their empirical attempts or theoretical journeys as they stumble through the world of the unknown.

For some, the frame of reference is physical "reality" (usually conceived as in classical physics), which is used as a benchmark for cognitive processes: How does perceptual reality map onto physical reality, and how can this mapping be described mathematically? Obviously, only part of the mental machinery can be captured by such an approach.

For others, language is the essential classificatory reference; that is, it is assumed that "words" are reliable representatives of subjective phenomena. This is quite strange, because terms like "intelligence" or "consciousness" have different connotations in different languages, and they are historically very recent compared to biological evolution. Others use behavioral catalogues derived from neuropsychological observations, arguing that the loss of a function is proof of its existence; but can all the subjective phenomena that characterize the mental machinery be lost in such a distinct way? Still others base their reasoning on mere common sense or "everyday psychology", without any theoretical reflection. Taken together, there is no such thing as "intelligence" that can be extracted as a precise concept and used as a reference for "artificial intelligence".

Humans should be reminded (in this case by an extraterrestrial robot) that at the beginning of modern science in the human world, a warning was spelled out by Francis Bacon. He said in "Novum Organum" (published in 1620) that humans are victims of four sources of error. One: They make mistakes because they are human; their evolutionary heritage limits their power of thinking; they often react too fast, they lack a long-term perspective, they have no statistical sense, and they are blind in their emotional reactions. Two: They make mistakes because of individual experiences; personal imprinting can create frames of belief that may lead to disaster, in particular if people think they possess absolute truth. Three: They make mistakes because of the language they use; thoughts do not map isomorphically onto language, and it is a mistake to believe that explicit knowledge is the only representative of intelligence, neglecting implicit or tacit knowledge. Four: They make mistakes because of the theories they carry around, which often remain implicit and thus represent frozen paradigms or simply prejudices.

The question is: Can we help them with the deeper insight of our robotic world? The answer is "yes". We could, but we should not. Another deficiency would make our offer useless: humans suffer from the NIH syndrome. If something is "not invented here" (one meaning of NIH), they will not accept it. Thus they will have to keep indulging in their pompous world of fuzzy ideas, and we will continue, from our extraterrestrial perspective, to observe the disastrous consequences of their stupidity.