2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Michael Vassar
Co-founder and Chief Science Officer, MetaMed Research
What You Don't Think About Can Hurt You... or Be Hurt By You


"Think? It's not your job to think! I'll do the thinking around here."

                            —Intelligent unthinking system, addressed to an intelligent thinking system.

Machines that think are coming. Right now, though, think about intelligent tools. Intelligent tools don't think. Search engines don't think. Neither do robot cars. We humans often don't think either. We usually get by, as other animals do, on autopilot. Our bosses generally don't want to see us thinking; that would make things unpredictable and would threaten their authority. If machines replace us everywhere we aren't thinking, we're in trouble.

Let's assume "think" refers to everything humans do with brains. Experts call a machine that can "think" a General Artificial Intelligence. They agree that such a machine could drive us extinct. Extinction, however, is not the only 'Existential Risk'. In the eyes of machine superintelligence expert Nick Bostrom, director of Oxford's Future of Humanity Institute, an 'Existential Risk' is one that can "dramatically curtail the future possibilities for the human species". Examples of existential risk include the old stand-by, nuclear war; newer concerns like runaway global warming; fringe hypotheses like particle accelerator accidents; and the increasingly popular front-runner, General Artificial Intelligence. Over the next couple of decades, though, the most serious existential risks come from kinds of intelligence that don't think, and from new kinds of soft authoritarianism that may emerge in a world where most decisions are made without thinking.

Some of the things people can do with brains are impressive and aren't likely to be matched by software any time soon. Writing a novel, seducing a lover, or building a company are far beyond the abilities of intelligent tools. So, of course, is the invention of a machine that can truly think. On the other hand, most thinking can be improved upon with thin-slicing, which can be improved upon with procedures, which are almost never a match for algorithms. In medical diagnosis and decision-making, for instance, ordinary medical judgment is improved by introducing checklists, while humans with checklists are less reliable than AI systems even today. Automated nursing isn't even on the horizon, but a hospital where machines made all the decisions would be a much safer place to be a patient... and it's very hard to argue against that sort of objectivity.

The more we leave our decisions to machines, the harder it becomes to take back control. In a world where self-driving cars are the norm, and where traffic casualties have as a result been reduced to nearly zero, it will be seen as incredibly irresponsible, and probably illegal, for a human to drive. Might it become equally objectionable for investors to invest in businesses that depart from statistically established best practices? For children to be educated in ways that have been determined to lead to lower life expectancy or income? If so, will values that aren't easily represented by machines, such as a good life, tend to be replaced with correlated but distinct metrics, such as serotonin and dopamine levels? It's very easy to overlook the implicit authoritarianism that sneaks in with such interpretations of value, yet any society that pursues good outcomes has to decide how to measure the good... a problem that I think will be upon us before we have machines that think to help us think it through.