Associate Professor of Computer Science, University of Vermont; Author, How the Body Shapes the Way We Think
I, For One

"Welcome, our new robot overlords," I will say when they arrive. As I sit here nursing a coffee, watching the snow fall outside, I daydream about the coming robot revolution. The number of news articles about robotics and AI is growing at an exponential rate, suggesting that superintelligent machines will arise in a very short time. Perhaps in 2017.

As a roboticist myself, I hope to contribute to this phase change in the history of life on Earth. The human species has recently painted itself into a corner and—global climate conferences and nuclear non-proliferation treaties notwithstanding—seems unlikely to find a way out with biological smarts alone: We’re going to need help. And the growing number of known Earth-like yet silent planets indicates that we can’t rely on alien help anytime soon. We’re going to need homegrown help. Machine help.

There is much that superintelligent machines could help us with.

Very, very slowly, some individuals in some human societies have been enlarging their circles of empathy: human rights, animal cruelty, and microaggressions are recent inventions. Taken together, they indicate that we are increasingly able to place ourselves in others’ shoes. We are able to feel what it would be like to be the target of hostility or violence. Perhaps machines will help us widen these circles. My intelligent pan may suggest sautéed veggies over the bloody steak I’m about to drop into it. A smartphone might detect cyberbullying in a photo I’m about to upload and suggest that I think about how that might make the person in the photo feel. Better yet, we could imbue machines with the goal of self-preservation, mirror neurons to mentally simulate how others’ actions may endanger their own continued existence, and the ability to invert those thought processes so that they can realize how their own actions threaten the existence of others. Such machines would then develop empathy on their own. Then, driven by sympathy, they would feel compelled to teach us how to strengthen our own abilities in that regard. In short: future machines may empathize with humans about our limited powers of empathy.

The same neural machinery that enables us (if we so choose) to imagine the emotional or physical pain suffered by another also allows us to predict how our current choices will influence our future selves. This is known as prospection. But humans are also lazy; we make choices now that we come to regret later. (I’m guilty of that right now: rather than actually building our future robot overlords, I’m daydreaming about them instead.) Machines could help us here too. Imagine neural implants that can directly stimulate the pain and pleasure centers of the brain. Such a device could make you feel sick before your first bite into that bacon cheeseburger rather than after you’ve finished it. A passive-aggressive comment to a colleague or loved one would result in an immediate fillip to the inside of the skull.

In the same way that machines could help us maximize our powers of empathy and prospection, they could also help us minimize our agency-attribution tendencies. If you’re a furry little creature running through the forest and you see a leaf shaking near your path, it’s safer to attribute agency to the leaf’s motion than not to: better to believe there’s a predator hiding behind the leaf than to attribute its motion to the wind. Such paranoia stands you in good Darwinian stead compared to another creature who thinks "wind" and is eaten by an actual predator. It is possible that such paranoid creatures evolved into religious humans who saw imaginary predators (i.e., gods) behind every thunderstorm and stubbed toe. But religion leads to religious wars and leaders who announce, "God made me do it." Such defenses don’t hold up well in modern, humanist societies. Perhaps machines could help us correctly interpret the causes of each and every sling and arrow of outrageous fortune that we experience in our daily lives. Did I miss my bus because I’m being punished for the fact that I didn’t call my sister yesterday? My web-enabled glasses immediately flick on to show me that bus schedules have become more erratic due to this year’s cut to my city’s public transportation budget. I relax as I start walking to the subway: it’s not my fault.

What a wonderful world it could be. But how to get there? How would the machines teach empathy, prospection, and correct agency attribution? Most likely, they would overhaul our education system. The traditional classroom setting would finally be demolished so that humans could be taught solely in the school of hard knocks: machines would engineer everyday situations (both positive and negative) from which we would draw the right conclusions. But this would take a lot of time and effort. Perhaps the machines would realize that rather than expose every human to every valuable life lesson, they could distill a few important ones into videos or even text:

The plight of the underdog. She who bullies is eventually bullied herself. There’s just us. Do to others what… Perhaps these videos and texts could be turned into stories rather than delivered as dry treatises on morality. Perhaps they could be broken into small, bite-sized chunks, provided on a daily basis. Perhaps instead of hypothetical scenarios, life lessons could be drawn from real plights suffered by real people and animals each day. Perhaps they could be broadcast at a particular time—say, 6pm and 11pm—on particular television channels, or whatever the equivalent venue is in the future.

The stories would have to be changed each day to keep things fresh. They would have to be "new." And, of course, there should be many of them, drawn from all cultures, all walks of life, all kinds of people and animals, told from all kinds of angles to help different people empathize, prospect, and impute causes to effects at their own pace and in their own way. So, not "new" then, but "news."