2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Douglas Coupland
Writer, Artist, Designer; Author; Google Artist in Residence
Humanness

Let's quickly discuss larger mammals—take dogs: we know what a dog is and we understand 'dogginess.' Look at cats: we know what cats are and what 'cattiness' is. Now take horses; suddenly it gets harder. We know what a horse is, but what is horsiness? Even my friends with horses have trouble describing horsiness to me. And now take humans: what are we? What is humanness?

It's sort of strange, but here we are, seven billion of us now, and nobody really knows the full answer to these questions. One undeniable thing we humans do, though, is make things, and through these things we find ways of expressing humanness we didn't previously know of. The radio gave us Hitler and the Beach Boys. Barbed wire and air conditioning gave us western North America. The Internet gave us a vanishing North American middle class and kitten GIFs.

People say that new technologies alienate people, but the thing is, UFOs didn't land and hand us new technologies—we made them ourselves and thus they can only ever be, well, humanating. And this is where we get to AI. People assume that AI or machines that think will have intelligence that is alien to our own, but that's not possible. In the absence of benevolent space aliens, only we humans will have created any nascent AI, and thus it can only mirror, in whatever manner, our humanness or specieshood. So when people express concern about alien intelligence or the singularity, what I think they're really expressing is angst about those unpretty parts of our collective being that currently remain unexpressed, but which will become somehow dreadfully apparent with AI.

As AI will be created by humans, its interface is going to be anthropocentric, the same as AI designed by koala bears would be koalacentric. This means AI software is going to be mankind's greatest coding kludge as we try to mold it to our species' incredibly specific needs and data. Fortunately, anything smart enough to become sentient will probably be smart enough to rewrite itself from AI into cognitive simulation, at which point our new AI could become, for better or worse, even more human. We all hope for a Jeeves & Wooster relationship with our sentient machines, but we also need to prepare ourselves for a Manson & Fromme relationship; they're human, too.

Personally I wonder if the software needed for AI will be able to keep pace with the hardware in which it can live. Possibly the smart thing for us to do right now would be to set up a school whose sole goal is to imbue AI with personality, ethics and compassion. It's certainly going to have enough data to work with once it's born. But how to best deploy your grade six report card, all of Banana Republic's returned merchandise data for 2037, and all of Google Books?

With the start of the Internet we mostly had people communicating with other people. As time goes by, we increasingly have people communicating with machines. I know we all get excited about AI possibly finding patterns deep within metadata, but as the push to decode these profound volumes of metadata continues, the Internet will become largely about machines speaking with other machines, and what they'll be talking about, of course, is us, behind our backs.