2008 : WHAT HAVE YOU CHANGED YOUR MIND ABOUT? WHY?

Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, The Empathy Diaries
What I've Changed My Mind About

Throughout my academic career – when I was studying the relationship between psychoanalysis and society and when I moved to the social and psychological studies of technology – I've seen myself as a cultural critic. I mention this not to stress how lofty a role I claimed for myself, but to note that I saw the job as theoretical in its essence. Technologists designed things; I was able to offer insights about the nature of people's connections to them, the mix of feeling in their thoughts, how passion mixed with cognition. Trained in psychoanalysis, I didn't see my stance as therapeutic, but it did borrow from the reticence of that discipline. I was not there to meddle. I was there to listen and interpret. Over the past year, I've changed my mind: our current relationship with technology calls forth a more meddlesome me.

In the past, because I didn't criticize but tried to analyze, some of my colleagues found me complicit with the agenda of technology-builders. I didn't like that much, but understood that this was perhaps the price to pay for maintaining my distance, as Little Red Riding Hood's wolf would say, "the better to hear them with." This year I realized that I had changed my stance. In studying reactions to advanced robots, robots that look you in the eye, remember your name, and track your motions, I found more and more people who considered such robots as friends, confidants, and, as they imagined technical improvements, even as lovers. I became less distanced. I began to think about technological promiscuity. Are we so lonely that we will really love whatever is put in front of us?

I kept listening for what stood behind the new promiscuity – my habit of listening didn't change – and I began to get evidence of a certain fatigue with the difficulties of dealing with people. A female graduate student came up to me after a lecture and told me that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what she called "caring behavior." She told me: "I need the feeling of civility in the house, and I don't want to be alone." She said: "If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me." What she was looking for, she told me, was a "no-risk relationship" that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than a demanding boyfriend. I thought she was joking. She was not.

In a way, I should not have been surprised. For a decade I had studied the appeal of sociable robots. They push our Darwinian buttons. They are programmed to exhibit the kind of behavior we have come to associate with sentience and empathy, which leads us to think of them as creatures with intentions, emotions, and autonomy. Once people see robots as creatures, they feel a desire to nurture them. With this feeling comes the fantasy of reciprocation. As you begin to care for these creatures, you want them to care about you.

And yet, in the past, I had found that people approached computational intelligence with a certain "romantic reaction." Their basic position was that simulated thinking might be thinking, but simulated feeling was never feeling and simulated love was never love. Now, I was hearing something new. People were more likely to tell me that human beings might be "simulating" their feelings, or as one woman put it: "How do I know that my lover is not just simulating everything he says he feels?" Everyone I spoke with was busier than ever with their e-mail and virtual friendships, with their social networking and always-on/always-on-you PDAs. Someone once said that loneliness is failed solitude. Could no one stand to be alone anymore before they turned to a device? Were cyberconnections paving the way to think that a robotic one might be sufficient unto the day? I was not left contemplating the cleverness of engineering but the vulnerabilities of people.

Last spring I had a public exchange in which a colleague wrote about the "I-Thou" dyad of people and robots and I could only see Martin Buber spinning in his grave. The "I" was the person in the relationship, but how could the robot be the "Thou"? In the past, I would have approached such an interchange with discipline, interested only in the projection of feeling onto the robot. But I had taken that position when robots seemed only an evocative object for better understanding people's hopes and frustrations. Now, people were doing more than fantasizing. There was a new earnestness. They saw the robot in the wings and were excited to welcome it onstage.

It seemed no time at all before a book came out called Love and Sex with Robots and a reporter from Scientific American was interviewing me about the psychology of robot marriage. The conversation was memorable, and I warned my interviewer that I would use it as data. He asked me if my opposition to people marrying robots put me in the same camp as those who oppose the marriage of lesbians or gay men. I tried to explain that just because I didn't think people could marry machines didn't mean that I didn't think that any mix of people with people was fair play. He accused me of species chauvinism: wasn't this the kind of talk that homophobes once used, refusing to consider gays "real" people? Right there I changed my mind about my vocation. I changed my mind about where my energies were most needed. I was turning in my card as a cultural critic as I had always envisaged that identity. Now I was a cultural critic in earnest. I wasn't neutral; I was very sad.