Senior Research Fellow, Centre for Research in the Arts, Social Sciences and Humanities, University of Cambridge; Director, Wolfson College Press Fellowship Programme; Columnist, the Observer; Author, From Gutenberg to Zuckerberg
When I Say "Bruno Latour" I Don't Mean "Banana Till"

What do I think about machines that think? Well, it depends what they think about, and how well they do it. For decades I've been an acolyte of Doug Engelbart, who believed that computers were machines for augmenting human intellect. Power steering for the mind, if you like. He devoted his life to the pursuit of that dream, but it eluded him because the technology was always too crude, too stupid, too inflexible, to enable its realisation.

It still is, despite Moore's Law and the rest of it. But it's getting better, slowly. Search engines, for example, have in some cases become a workable memory prosthesis for some of us. But they're still pretty dumb. So I can't wait for the moment when I can say to my computer: "Hey, do you think that Robert Nozick's idea about how the state evolves is really an extreme case of network effects in action?" and get an answer approximately as good as the one I can get from an average grad student at the moment.

That moment, alas, is still a long way off. Right now, I'm finding it hard to persuade my dictation software that when I say "Bruno Latour" I don't mean "Banana till" (which is what it came up with a few minutes ago). But at least the 'personal assistant' app on my smartphone knows that when I ask for the weather forecast I want the one for Cambridge, UK rather than Cambridge, Mass.

But this is pathetic stuff, really, when what I crave is a machine that can function as a proper personal assistant, something that can enable me to work more effectively. That means a machine that can think for itself. How will I know when the technology is good enough? Easy: when my artificially intelligent, thinking personal assistant can generate plausible excuses that get me out of doing what I don't want to do.

Should I be bothered by the prospect of thinking machines? Probably. Certainly Nick Bostrom thinks I should. Our focus on getting computers to exhibit human-level intelligence is, he thinks, misguided. We view machines that can pass the Turing Test as the ultimate destination of Doug Engelbart's quest. But Bostrom thinks that passing the Test is just a way-point on the road to something much more worrying. "The train," he says, "might not pause or even decelerate at Humanville Station. It is likely to swoosh right by." He's right: I should be careful what I wish for.