2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Jan Eisner Professor of Archaeology, Comenius University in Bratislava; Author, The Artificial Ape
"Denkraumverlust"


The human mind has a tendency to confuse things with their signs. There is a word for this tendency—Denkraumverlust—used by the art historian Aby Warburg (1866–1929), and literally translatable as 'loss of thinking space.' Part of the appeal of 'machines that think' is that they would not be subject to this, being more logical than we are. On the other hand, they are unlikely to invent a word or concept such as Denkraumverlust. So what we think about machines that think depends on the type of thinking we're thinking about, but also on what we mean by machine. In the category of 'machines that think,' we are confusing the sign—or representation—of thinking with the thing itself. And if we tacitly assume that a machine is something produced by humans, we underestimate the degree to which machines produce us, and the fact that thought has long emerged from this interaction, properly belonging to neither side (and thinking there are sides may be wrong too).

Denkraumverlust can help us understand not just the positive response of some Turing testers to conversations with the Russian–Ukrainian computer program 'Eugene Goostman,' but also the apparently very different case of the murderous response to cartoons depicting Mohammed. Both illustrate how excitable, and even gullible, we can be when presented with something that appears to represent something else so well that signifier and signified are conflated.

The Turing test requires that a machine be indistinguishable from a human respondent by being able to imitate communication (rather than actually think for itself). But if an enhanced Eugene Goostman insisted that it was thinking its own thoughts, how would we know that it really was? If it knew it was supposed to imitate a human mind, how could we distinguish some conscious pretence from the imitation of pretence? Ludwig Wittgenstein used pretence as a special category in discussing the possibility of knowing the status of other minds, asking us to consider a case where someone believes, falsely, that they are pretending. The possibility of correctly assessing Turing test results in relation to the possibility of independent artificial thought is core Wittgenstein territory: we can deduce that, in his view, any such assessment is doomed to failure, because it necessarily involves data of an imponderable type.

Denkraumverlust is about unmediated response. Although sophisticated art audiences can appreciate the attempt to fool as part of aesthetic experience (enjoying a good use of three-dimensional perspective on a canvas known to be flat, for example), whenever deception is actually successful, reactions are less comfortable. Cultures regularly censor images thought to have the power to short-circuit our reasoned and reflective responses. Mostly the images are either violent or erotic, but they can also be devotional. Such images, if allowed, can produce a visceral and unmediated reaction appropriate to a real situation. New, unfamiliar representational technologies have a habit of taking us by surprise (when eighteenth-century French sailors gave mirrors to Aboriginal Tasmanians, things got seriously out of order; later anthropologists had similar trouble with photographs).

A classic example of artificially generated confusion is that of the legendary sculptor Pygmalion, who fell passionately and inappropriately in love with a statue of a goddess which he had carved himself. In the wake of the Pygmalion myth came classical and medieval Arabic automata so realistic, novel, and fascinating in sound and movement that we should probably accept that people could, albeit briefly, be persuaded that they were actually alive. 'Machines that think' are in this Barnum & Bailey tradition. Like Pygmalion's sculpture, they also project an image, albeit not a visual one. Even if they are not dressed up to look like cyborg goddesses, they are representations of us. They are designed to re-present information (often usefully reordered) in terms we find coherent, whether mathematical, statistical, translational or, as in the Turing test, conversational.

But the idea of a thinking machine is a false turn. Such objects, however powerfully they may be enabled to elicit unmediated responses from us, will remain automata. The truly significant developments in thought will arise, as they always have, in a bio-technical symbiosis. This distinctively human story is easy to follow in the body (wheeled transport is one of many mechanical inventions that have enabled human skeletons to become lighter) but is probably just as present in the brain (the invention of writing as a form of external intellectual storage may have reduced selection pressure on some forms of innate memory capacity while stimulating others).

In any case, the separate terms 'human' and 'machine' produce their own Denkraumverlust—a loss of thinking space encouraging us to accept as real an unreal dualism. Practically, it is only the long-term evolution of information technology, from the earliest representations and symbolic constructs to the most advanced current artificial brain, that allows the advancement of thought.