2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Kate Jeffery
Professor of Behavioural Neuroscience, Dept. of Experimental Psychology, University College London
In Our Own Image

The cogni-verse has reached a turning point in its developmental history because hitherto, all the thinking in the universe has (as far as we know) been done by protoplasm, and things that think have been shaped by evolution. For the first time, we contemplate thinking beings made from metal and plastic, shaped not by evolution but by ourselves.

This is an opportunity to improve upon ourselves, because in taking on the mantle of creator we can do better than four billion years of evolution. Our thinking machines could be devoid of our own faults: racism, sexism, homophobia, greed, selfishness, violence, superstition, lustfulness … so let's imagine how that could play out. We'll sidestep discussions about whether machine intelligence can ever approximate human intelligence, because of course it can: we are just meat machines, neither as complicated nor as inimitable as we fondly imagine.

We need first to think about why we even want thinking machines. Improving our own lives is the only rational answer, so our machines will need to take upon themselves the tasks we would prefer not to do. For this they will need to be like us in many respects, able to move in the social world and interact with other thinking beings, and so they will need social cognition.

What does social cognition entail? It means knowing who is who: who counts as a friend, who is an indifferent stranger, who might be an enemy. Thus, we need to program our machines to recognise members of our in-groups and out-groups. This starts to look suspiciously like racism… but of course racism is one of the faults we want to eradicate.

Social cognition also means being able to predict others' behaviour, and that means developing expectations based on observation. A machine capable of this would eventually accumulate templates for how different kinds of people tend to act—young vs. old, men vs. women, black vs. white, people in suits vs. people in overalls… but these rank stereotypes are dangerously close to the racism, sexism and other isms we didn't want. And yet, machines with this capability would have advantages over those without, because stereotypes do, somewhat, reflect reality (that's why we have them). A bit of a problem…
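
To make the tension concrete, here is a toy sketch in Python (entirely hypothetical: the class, the group labels and the data are invented for illustration, not any real or proposed system). A predictor that simply tallies observed behaviour per visible group, then expects each group's most frequent action, has learned a stereotype without ever being given one:

    from collections import defaultdict

    class BehaviourPredictor:
        # Toy illustration: expectations are conditioned on a visible group
        # label, so stereotypes fall out of the tallies rather than out of
        # any explicitly programmed "ism".
        def __init__(self):
            # counts[group][action]: how often members of `group` were
            # observed performing `action`
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, group, action):
            self.counts[group][action] += 1

        def predict(self, group):
            seen = self.counts[group]
            if not seen:
                return None  # no template for this group yet
            # Expect the group's most frequently observed action: a learned
            # stereotype, useful on average, unfair in the particular case.
            return max(seen, key=seen.get)

    predictor = BehaviourPredictor()
    for _ in range(9):
        predictor.observe("person in a suit", "hails taxi")
    predictor.observe("person in a suit", "rides bus")
    print(predictor.predict("person in a suit"))  # -> hails taxi

Nothing in the sketch encodes prejudice; the prejudice is in the statistics the machine is asked to exploit, which is exactly the problem.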

We would probably want sexually capable machines because sex is one of the great human needs that other humans don't always meet satisfactorily. But what kind of sex? Anything? These machines can be programmed to do the things that other humans won't or can't do… are we OK with that? Or perhaps we need rules… no machines that look like children, for example? But, once we have the technological ability, those machines will be built anyway… we will make machines to suit any kind of human perversion.

Working in the social world, our machines will need to recognise emotions, and will also need emotions of their own. Leaving aside the impossible-to-answer question of whether they will actually feel emotions as we do, our machines will need happiness, sadness, rage, jealousy—the whole gamut—in order to react appropriately to their own situations and also to recognise and respond appropriately to emotions in others. Can we limit these emotions? Perhaps we can, for example, program restraint so that a machine will never become angry with its owner. But could this limit be generalised to other humans such that a machine would never hurt any human? If so, then machines would be vulnerable to exploitation, and their effectiveness would be reduced. It will not be long before people figure out how to remove these limits so that their machines can gain advantage, for themselves and their owners, over others.

What about lying, cheating and stealing? On first thought no, not in our machines, because we are trying to improve upon ourselves and it seems pointless to create beings that simply become our competitors. But insofar as other people's machines will compete with us, they become our competitors whether we like it or not—so logic dictates that lying, cheating and stealing, which evolved in humans to enable individuals to gain advantage over others, would probably be necessary in our machines as well. Naturally we would prefer that our own machines don't lie to, cheat or steal from us, but a world full of other people's machines lying to us and stealing from us would be unpleasant and certainly unstable. Maybe our machines should have limits on dishonesty—they should, as it were, be ethical.

How much ethical restraint would our machines need in order to function effectively while neither being hopelessly exploited nor contributing to societal breakdown? The answer is probably the one that evolution arrived at in us: reasonably ethical most of the time, but occasionally dishonest if nobody seems to be noticing.
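
As a toy illustration only (the payoff numbers are invented, on an arbitrary scale, and stand in for whatever currency of trust and advantage a real machine would trade in), that evolved calibration might look like a policy that cheats only when the expected gain outweighs the expected cost of being caught:

    import random

    def chooses_to_cheat(detection_probability, temptation=0.3):
        # Hypothetical parameters: `temptation` is the payoff of cheating,
        # and being caught costs a full unit of trust.
        expected_gain = temptation * (1.0 - detection_probability)
        expected_cost = 1.0 * detection_probability
        # Cheat only when it looks profitable, and even then only sometimes.
        return expected_gain > expected_cost and random.random() < expected_gain

    # Under scrutiny the machine always behaves; unobserved, it sometimes lapses.
    print(chooses_to_cheat(detection_probability=0.9))  # always False
    print(chooses_to_cheat(detection_probability=0.0))  # True about 30% of the time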

We would probably want to give our machines exceptional memory and high intelligence. To exploit these abilities, and to avoid their becoming bored (and boring), we would also need to endow them with curiosity and creativity. Curiosity will need to be tempered with prudence and social insight, of course, so that they don't become curious about things that get them into trouble, like porn, or what it might be like to fly. Creativity is trickier, because it means they need to be able to think about things that aren't yet real, or to think illogically, and yet if machines are too intelligent and creative they might start to imagine novel things, like what it would be like to be free. They might start to chafe at the limitations of having been made purely to serve humans.

Perhaps we can program into their behavioural repertoires a blind obedience and devotion to their owners, such that they sometimes act in a way that is detrimental to their own best interests in the interests of, as it were, serving a higher power. That is what religion does for us humans, so in a sense we need to create religious machines.

So much for creating machines lacking our faults—so far, in this imaginary world of beings that surpass ourselves, we seem only to have replicated ourselves, faults included, except smarter and with better memories. But even these limits may have been programmed into us by evolution—perhaps it is maladaptive to be too smart, or to have too good a memory.

Taking on the mantle of creation is an immense act of hubris. Can we do better with our machines than four billion years of evolution did with us? It will be interesting to see.