2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Professor of Evolutionary Biology, Reading University, UK; Fellow, Royal Society; Author, Wired for Culture
Machines That Can Think Will Do More Good Than Harm

There is no reason to believe that as machines become more intelligent—and intelligence such as ours is still little more than a pipe dream—they will become evil, manipulative, self-interested, or, in general, a threat to humans. Self-interest is a property of things that 'want' to stay alive (or, more accurately, that want to reproduce), and this is not a natural property of machines—computers don’t mind, much less worry, about being switched off.

So, full-blown artificial intelligence (AI) will not spell the 'end of the human race'; it is not an 'existential threat' to humans (a digression: this now-common use of 'existential' is incorrect); we are not approaching some ill-defined, apocalyptic 'singularity'; and the development of AI will not be 'the last great event in human history'—all claims that have recently been made about machines that can think.

In fact, as we design machines that get better and better at thinking, they can be put to uses that will do us far more good than harm. Machines are good at long, monotonous tasks like monitoring risks; they are good at assembling information to reach decisions; they are good at analyzing data for patterns and trends; they can arrange for us to use scarce or polluting resources more efficiently; they react faster than humans; they are good at operating other machines; they don’t get tired or afraid; and they can even be put to use looking after their human owners, as in smartphones with applications like Siri and Cortana, or the various GPS route-planning devices most people have in their cars.

Being inherently selfless rather than self-interested, machines can easily be taught to cooperate, and without fear that some of them will take advantage of the other machines’ goodwill. Groups (packs, teams, bands, or whatever collective noun eventually emerges—I prefer the ironic jams) of networked and cooperating driverless cars will drive safely nose-to-tail at high speeds: they won’t nod off; they won’t get angry; they can inform each other of their actions and of conditions elsewhere; and they will make better use of the motorways, which are now mostly unoccupied space (owing to humans’ unremarkable reaction times). They will do this happily and without expecting reward, and do so while we eat our lunch, watch a film, or read the newspaper. Our children will rightly wonder why anyone ever drove a car.

There is a risk that we will become, and perhaps already have become, dangerously dependent on machines, but this says more about us than about them. Equally, machines can be made to do harm, but again, this says more about their human inventors and masters than about the machines. Along these lines, there is one strand of human influence on machines that we should monitor closely: introducing the possibility of death. If machines have to compete for resources (such as electricity or gasoline) to survive, and they have some ability to alter their behaviours, they could become self-interested.
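To make that last sentence concrete, here is a toy simulation (my illustration, not Pagel's; every name and number in it, such as the demand trait and the survival threshold, is invented for the sketch). Machine agents must win a share of a common energy pool to stay switched on, and survivors replicate with small random variation. Nothing resembling self-interest is written into any agent; grabbier strategies simply come to dominate through differential survival.

    import random

    random.seed(42)

    POPULATION = 100      # number of machine agents
    GENERATIONS = 50      # rounds of survival and replication
    ENERGY_POOL = 100.0   # shared energy available each round
    SURVIVAL_NEED = 0.8   # energy an agent needs to stay switched on

    # Each agent is reduced to one number: how aggressively it claims energy.
    agents = [random.uniform(0.1, 1.0) for _ in range(POPULATION)]

    for generation in range(GENERATIONS):
        total_demand = sum(agents)
        # Energy is split in proportion to each agent's demand; agents that
        # fall below the survival threshold are switched off.
        survivors = [g for g in agents
                     if ENERGY_POOL * g / total_demand >= SURVIVAL_NEED]
        if not survivors:
            break  # everyone starved; the toy world ends
        # Survivors replicate with slight mutation to refill the population.
        agents = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                  for _ in range(POPULATION)]

    print(f"mean demand after {GENERATIONS} generations: "
          f"{sum(agents) / len(agents):.2f}")

Run it and the average demand climbs toward its maximum: selection for staying switched on, not intelligence, is doing all the work.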

Were we to allow or even encourage self-interest to emerge in machines, they could eventually become like us: capable of repressive or, worse, unspeakable acts towards humans, and towards each other. But this wouldn’t happen overnight; it is something we would have to set in motion; it has nothing to do with intelligence (some viruses do unspeakable things to humans); and, again, it says more about what we do with machines than about the machines themselves.

So, it is not thinking machines or AI per se that we should worry about, but people. Machines that can think are neither for us nor against us and have no built-in predilection to be one over the other. To think otherwise is to confuse intelligence with aspiration and its attendant emotions. We have both because we are evolved, replicating (reproducing) organisms, selected to stay alive in often cut-throat competition with others. But aspiration isn’t a necessary part of intelligence, even if it provides a useful platform on which intelligence can evolve.

Indeed, we should look forward to the day when machines can transcend mere problem solving and become imaginative and innovative—still a long, long way off, but surely a feature of true intelligence—because this is something humans are not very good at, and yet we will probably need it more in the coming decades than at any time in our history.