Glitches

Laurie R. Santos [11.21.16]

LAURIE R. SANTOS is a professor of psychology at Yale University and the director of its Comparative Cognition Laboratory.

GLITCHES

The big question that we're interested in lately is what makes humans special? Why are we the only ones doing Edge interviews, using technology, and doing the stuff that no other species does? Just in the last two years we're starting to realize that there might be a particular feature about humans that makes us special, which has lots of different consequences. What makes us special is that we're the only species that represents the world using more than just facts. Other species seem to be bound to the facts of the world in a way that humans are not.

What do I mean by this? We have perceptual systems that we use to get information about the world. We're thinking about the fact that there's stuff on this table, there are trees with leaves on them behind me, and I can use that perceptual information to figure out the facts that are out there. The way humans represent the world goes way beyond that. We can think in terms of things that are not here around us. We can think of stuff that's outside here: I could picture a hot air balloon, or a time travel device, all this stuff that's not here. That gets us beyond the scope of just thinking about the things that are in the here and now in a way that's pretty cool.                                 

We can also think about the facts of the world in the past. For instance, in winter these trees didn't look like this; there was snow all over them. We can think about the context of what it's going to look like in the future, in a few months' time. We can think about fictional versions: What if I painted all the leaves blue? I can think about facts that I have which you might not have. For instance, you're not listening, so you don't hear that there are birds out here.                                 

That we can get beyond the facts in the here and now seems relatively obvious, but more and more we're getting evidence that other animals don't seem to be able to do this. They seem to be bound to the facts of the world. This has consequences for how they reason about the past, think about the future, how they plan things, and so on. One consequence is how they think about other minds, and the second consequence is how they're affected by other minds.                                 

We started to realize that other animals might be bound to the facts of the world through some of the work that we've been doing regarding theory of mind. This is the question of how other animals think about the minds of others. Do they realize that other individuals have minds and perspectives that are different from their own? Folks have been studying this sort of thing for a long time. What folks had thought was that other species are good at thinking about whether or not other individuals have the same information that they do. In other words, they can make predictions about what other animals are thinking when those predictions are consistent with the thoughts that they have.                                 

If I'm a monkey and I know something about the world—I know there's some object on this table where I'm sitting—and you happen to be looking at it, I could make a successful prediction about whether or not you knew that information. Primates are pretty good at this. When I came to this research a couple of years ago, we knew that primates could think about what other individuals saw and knew about.                                 

Just in the last two years, we're realizing that the story seems to be a lot more complicated than that. When you probe what nonhumans are doing, they seem to be thinking in terms of the facts that they know about the world, and they can attribute to other individuals the facts that they themselves have. But they seem to not be able to think about facts that other individuals don't have. In other words, they can't make predictions about an individual who lacks the facts that they have. They can't think about another individual being ignorant. As they're making predictions about the things that other guys are doing in the world, they're bound to the facts of the matter. They can't get out of their own heads, to some interesting extent.

How do we know that animals are tracking that other individuals have facts? One way is that we can ask them to make predictions about how other individuals are going to act, based on their facts. Here's a simple thing that we've done at our field site with rhesus monkeys. You show rhesus monkeys some event and ask them what their expectations are. You ask them how they expect other individuals to behave, based on the facts that they have.                                 

Say you're a monkey sitting there watching me, and I set up a little play for you. It's a play involving me, and I'm going to be reaching for different objects in the world. You're watching me hide a lemon—this interesting object—inside a box. What do you expect me to do? One thing I might do is reach for the lemon inside this box. That's expected, given that I was just interacting with the lemon. What might be unexpected to you is if I search for the lemon in some weird spot, if I stick the lemon in the box and then start searching in some other location. You might think that's weird because I just saw where the lemon is, I have facts about where the lemon is.                                 

We can test whether or not monkeys know that merely by measuring how long they watch this event. We literally show them this play where somebody hides a lemon and they reach in that location. That's a super boring play to the monkeys. They just don't watch for very long. In contrast, if we show them this unexpected play in which my behavior doesn't fit with the facts—I hide the object over here and search in a different location for the lemon—this seems surprising to the monkeys, and they watch for a long time.                                 

This is suggesting that the monkeys are setting up expectations about how others should behave, based on the things that those individuals see. We also know that monkeys can track the facts that other people have through the way that they themselves interact with other people. If you're a monkey and you're trying to do something sneaky or deceptive, you want to do that in a context where somebody else doesn't have facts about the world. If I'm a monkey and I'm going to steal something from you, I want you not to know that I'm sneaking up on you. I want that not to be part of the way you're representing the world.                                 

It turns out that monkeys are pretty good at this. We've set up monkeys in situations where we have them literally ripping us off. We give them tempting things to try to steal from us, and then we put them in different orientations relative to us. We either have food that the monkeys can steal that's just in front of us—I can see it—or food that's behind us or hidden from us in some way. What we find, amazingly, is that on the very first trial the monkeys are super good at this. They're good at knowing which foods they can take. They avoid taking foods that we know about.                                 

These experiments have been around for a bit. We've known for some time that monkeys are pretty good at this stuff. For a long time these experiments were interpreted not just as monkeys seeming to understand something about what we see, the facts we get from our perception, but also as evidence that monkeys know what it means for a person not to know the facts of the world. They understand something about what it means to be ignorant of this stuff.                                 

Take the case where a monkey is stealing something that I had behind me. The idea is that the monkey is thinking, a-ha, she doesn't have the same facts that I have; I can exploit that to rip her off. What we're now learning is that it doesn't seem like the monkeys can represent what it means for a person to be ignorant. They don't know what this means.                                 

How do we test whether or not the monkeys know what it means to be ignorant? We do exactly the same tests—asking the monkeys their expectations when I'm hiding objects in a box—except we do it in a slightly different way. We make what the monkeys know about the world a little bit different than that of the person who's in the play. Imagine a scenario like this: You're the monkey and you're watching me hide an object inside a box. When I'm not paying attention the object moves outside the box, and then maybe it goes back in. I've missed this fact about the world, missed the fact that the world has changed a bit. Do you, the monkey, make a prediction about what I'm going to do?                                 

If you were a human, you would say, "Well, I know that she didn't see this object go back in the box, but that's where she believed it to be before. Those are the facts that she privately has. She missed this information that I have, but I should be able to make a correct prediction." In that case, the monkeys don't make a correct prediction about what I'm going to do. It's as though as soon as my facts about the world differ from yours, you, the monkey, throw out all information about what's going on.

Using studies like this, we're realizing that the monkeys seem to think about the world differently than we do. If they share the same information with people—their facts are another person's facts—they're very good at making predictions about what that person is going to do. They're very good at interacting with that person; they steal when they're supposed to; they're good at Machiavellian interactions. But when another individual's facts are wrong or differ from the things that the monkey himself has seen, monkeys seem to make no predictions. It's as though the full representations they have about the facts of the world are built on their own version of the world, and as soon as that changes, they can't do anything.                                 

This idea that the monkeys can't represent ignorance, that they're bound to facts, has had big implications for how we understand the way animals think about other minds. It means that animals don't think about minds in the same way that we do. It also tells us something important about how we're built to represent the world. It suggests that the basic way we're tracking what other people think and know might be through this very factive system. Thinking about the facts of the world is the easy way we do it. These complicated ways that we as humans think, where we take other people's perspectives, require something else that's different.

This is important because it's telling us the basic way we make sense of the world. That might have important implications for all kinds of things, like how we make fast moral judgments, or how our communication works. We've got a cool paper in my lab in the works now with Josh Knobe and his awesome student Jonathan Phillips which asks, if we're built evolutionarily to think just about the facts of the world—that's the basic system that we share with monkeys—what does that mean in terms of thinking about human cognitive science more broadly? What does that basic thing mean about the way we make sense of the world?                                 

That we think in terms of facts, and that we're different from animals in that respect, seems to have other implications for how we think about other minds and also how we are affected by other minds. This is the stuff we've been most interested in. If we're the only species not bound to our own perspective, does that mean that we are differentially affected by other people's perspectives in ways we don't expect?                                 

We know other animals can be affected by other individuals' perspectives. In fact, there are lots of simple cases in the animal kingdom of being affected by what other individuals are doing. Think of the classic case of schooling fish. If I took an individual fish from one species and plopped it into a school of other fish that were moving in certain ways, that individual fish would start moving in exactly those same ways. We see cases like this of behavioral contagion in lots of species. That's neat because it's just a dumb mechanism that other species have to get in sync with other individuals behaviorally.

Humans, of course, also have this mechanism. It's the thing that my colleague John Bargh calls the chameleon effect. As you're watching this video, if I'm starting to sit this way, you might not realize it but you would be moving in certain ways that I do. We know from work done by John's lab and others' labs that people are spontaneously copying behaviors they see in others. This is a mechanism that we see in animals and in us. It's a powerful way to get in behavioral sync with other individuals, which also leads to getting in emotional sync. If I'm making a very furrowed, sad face, or a happy face, and if you're copying those kinds of behaviors that I'm doing, you might be taking on the emotions as well.                                 

It turns out that this happens all the time. If I were to start yawning in this video, this interview would seem boring because you would start yawning and have the emotions associated with it. If I were laughing a lot in the video, things would start to seem funny to you because you're copying these behaviors of laughter, too. This is why unfunny sitcoms can seem funny when they have a laugh track. Just hearing people laughing makes you laugh a little bit, and that allows you to have this contagion of emotions.                                 

Why is this stuff relevant? Lots of species have this same form of contagion. They have a way to get their emotions into the head of another individual through these simple behaviors. Just as fish and lots of species have behavioral contagion, so too are we learning that other species have emotional contagion. You can see things like yawning contagion in chimpanzees and in dogs. You even see things like laughter contagion in chimpanzees. Chimps laugh, for those that don't know. They do these breathy gasps. If you watch footage of chimps interacting with one another, when they see one individual laugh they are statistically more likely to laugh themselves. There should be a nice market for terrible laugh tracks in chimpanzees. Some bad sitcom can exploit this.

The point of this is that there are old mechanisms for having the content in other people's heads get into your head. But all those are based on the facts of the world, based on things that you can see, not on the kinds of mental contagions that might be more interesting. How do I get my beliefs into your head? How do I get my ideas into your head? Those types of contagion might be unique to humans. If other animals can't ever get outside of their own facts that they have, how can they take on ideas that other individuals have?                                 

What we're learning is that there are a few cases where humans seem to be special, in terms of taking on other people's ideas. But that specialness is evolutionarily new in a way that might not be so good. It's as if it's still in beta testing; it relies on these new systems, systems we don't share with primates, and they might be a little glitchy. One of the glitches we've been studying is one that comes from taking on other people's perspectives when you're not supposed to, when we explicitly tell you, for example, not to do that.

Here's the task. I'm going to flash a bunch of dots behind my head as we're watching this video. Your task is to say the number of dots that are there as quickly as possible. This is a boring task that vision scientists make undergrads do all the time. People are good at detecting the number of dots that are flashed. But imagine, incidentally, if I just happened to be in the frame as we do this setup. I'm flashing these dots in the video, and I happen to be here looking at some dots and not others. If you were to flash two dots—pretend my hands are the two dots—maybe one is over here and I'm seeing it and not the other. That's irrelevant to your task, but it turns out that doing this—having a person with a perspective that's different from your own in this situation—affects how quickly you answer that question. You're slower and more error-prone to say that you see two objects in a scene where I'm actually seeing one. My perspective is automatically affecting yours.

This is some work that Ian Apperly and his colleagues had done a few years ago, and there've been a lot of folks following up on it. It's not just cool in that it's a form of us taking on other people's perspectives when we're not supposed to; it seems to be, interestingly, bound to whose perspective you have to take on, or whose perspective you're not supposed to take on.                                 

My student Lindsey Drayton, in collaboration with Yarrow Dunham and me, has been doing a study to try to see whether or not the person who's in this frame affects whether or not you show this effect. Imagine it's not me in this frame; it's a person that you know super well, a good friend of yours, or a person you see as ingroup. Maybe it's a person who's the same race as you are or a different race. We're finding that a lot of these things seem to matter for this effect.

If you test white participants with a white avatar, they automatically take on the perspective of the white avatar. They're much slower to say that they see two objects when the white avatar is seeing one. But if you test white participants and you show them a black avatar, all of a sudden that effect goes away. The amazing thing is that it's not some dumb thing about the visual system; it's our mind and the mechanisms we're using to take on other people's perspectives. We're doing it automatically, under the hood, in ways we don't realize. At least in this case, it messes us up.                                 

That dot task is the kind of task you think is just a dumb trick of cognitive science; you wouldn't get messed up by other people's perspectives in the real world. But in the past couple of years, there are cases where we see people getting messed up by other people's perspectives in situations that matter.                                 

One situation that matters is in the context of problem solving. I give you a problem and you have to come up with your own ideas about how to solve it. We're smart humans with causal understanding, so we're pretty good at this. But what if I put you in a situation where before you get to tackle the problem yourself you watch somebody solve the problem in an inefficient way, in a way that's causally implausible. If you do that kind of study with human kids, like four-year-old human kids, or even adult humans, what you find is that watching somebody else solve a problem badly messes up the extent to which you can find the correct solution, even if ahead of time you would have done it perfectly on your own.                                 

This is a phenomenon that researchers have called over-imitation. It's a phenomenon where you imitate too much. This is a case where you're not supposed to be copying somebody's idea, but just witnessing their idea is messing up your own representation of how to solve this task. My colleagues at Yale, the student Derek Lyons and Professor Frank Keil, have been studying this phenomenon of over-imitation, particularly in kids. What Derek finds is that not only do kids mess up their solution when they see somebody solve something inefficiently, but it also messes up their causal understanding of the problem. You can interview kids later and ask if they had to do that dumb thing to solve the box. Kids will spin a story not just about how they had to do it but why it was causally relevant. It changes their causal understanding of the box to see somebody do something in a dumb way.

This phenomenon of over-imitation is related to the fact that we're so good at jumping out of the facts of the world and taking on other people's perspectives. This is a new evolutionary development. It means that we haven't beta tested the mechanisms that we have for jumping into someone else's perspective. What we're finding is that they're much more pervasive than we thought and much more fluid than we thought. Even when we don't think other people's ideas are affecting us, they seem to be.

In the past few weeks, we published a paper that looks at whether other animals show these effects of over-imitation. Do they also get messed up by the perspectives of others? You might think that they do. They're much worse at causal learning than we are. Are they going to fall prey to other animals' bad strategies? Well, it turns out that both in chimpanzees and, in our own lab, domesticated dogs, you find that other animals do not get messed up by the bad solutions of other people. You can test chimpanzees and dogs on exactly that same problem-solving task you would give humans. What you find is that they can watch somebody solve a task inefficiently, and then they'll just go back to their own solution and figure it out on their own.

This means that other animals, in part because they may be bound to their own facts of the world, are not messed up by the perspectives of others. That gives them a leg up. They don't have this easy mechanism to get contents into other people's minds. But that also means they don't get messed up when the contents of other people's minds are bad. This is a case where not being bound to the facts affects us in our real behaviors in the world and how we solve problems.                                 

There's evidence in social psychology more broadly that we're affected by other people's representations of the world beyond problem solving—our real-world behaviors and even our moral behaviors. By this, I'm thinking of some of the classic work by Bob Cialdini and his colleagues. He shows that merely hearing that other people are doing something and behaving in certain ways causes us to behave in those ways, even if ahead of time we would have said that was a morally bad thing to do.                                 

Cialdini has a classic case that he experienced when he was visiting the Petrified Forest National Park in Arizona. Around the forest there are lots of signs saying: Every year millions of people steal pieces of the Petrified Forest, and it destroys the forest. Please don't do that. Based on Cialdini's work in social psychology, he realized that this is a bad sign to have. If people are affected by the kinds of behaviors that other people are showing, and ideas that other people are having, then they might end up contagiously taking on these behaviors that you're trying to get people not to take on.                                 

Cialdini thought this was terrible. He wanted to do a study in which he put up that sign and then put up a different sign that conveyed the same message, but which didn't have any bad behaviors that could contagiously affect the people who were walking around the forest. So they put up a sign: Lots of people are doing this and it's really bad. Don't do it. Then they put up a different sign: Everyone who walks around the forest understands that we need to preserve it. Please don't take artifacts. The difference being that the second sign didn't mention the fact that lots of people are doing this bad thing. He put both signs up and tagged the different pieces of the petrified forest so he could later count how many were taken. He saw a many-fold difference when the first sign was used. More people were taking the artifacts than in the case of the second sign.                                 

Lots of folks have realized that these descriptive norms, in which you describe the things that other people are doing, can be used in policy to get people to behave better. What I'm more interested in is this question of the representations. Why is it that telling us other people are behaving badly makes us behave badly? This is related to this glitch, this good thing we have as humans, to get out of our own heads and take other people's perspectives. The fact that we can do that means that we're also affected by other people's perspectives.

It's in these contexts, particularly in the context of moral behavior, that I worry about these cases the most. If we're ready to suck in other people's ideas, perspectives, and beliefs, and if we have such a new mechanism that doesn't have a great filter on when we're doing that, then all of the behaviors we hear about and see, all the ideas we hear articulated, are affecting us more than we expect.                                 

I can't help but think of my Facebook feed. With the upcoming election, as I scroll through these posts I'm seeing lots of folks who have similar political perspectives to me, but other folks who think very different things. From my own perspective, I think some of these ideas are terrible, but the claim is that what I know about these mechanisms suggests that little inklings of this stuff might be getting in in ways that I don't expect. We need to understand these mechanisms better if we're going to put a filter on our own minds, so that the beliefs and ideas that we have are based more on the facts we have about the world, as opposed to the stuff that other people believe.

If I'm interested in humans and what makes us special, why do we need animals to study that? Sometimes when we do the study with animals, we get answers we don't expect; sometimes they surprise us. Sometimes we assume that we have abilities that are special and smart, and then when we compare ourselves to animals, we look dumb. Or we just assume that animals must have this and then realize they don't.                                 

Take this case in the theory of mind domain that I mentioned. Everyone had just assumed that if you had the ability to think about what some individual could see, then you must have the corresponding ability to think about what some individual can't see. They must be two sides of the same coin. But when we started doing these studies, we realized that you could be very good at making positive predictions about what it means for somebody to see something, and not have the ability to make any predictions about what it means for somebody to be ignorant.                                 

That set of findings is a good example of why we need the animals. We didn't know those abilities could dissociate until we happened to find this cool case of a nonhuman animal who had one part but not the other. In that respect, studying animals isn't about learning about animals; it's doing the classic thing that cognitive neuroscientists have been doing since the beginning of cognitive science, where they try to find a patient who has ability X but not ability Y. If we could find such a patient, then we would know those two processes were separate. It would be even better if we could find a different patient that had ability Y but not ability X. This is the classic logic of double dissociations that launched neuropsychology. It launched cognitive neuroscience and fMRI, where we dissociate parts of the brain that show one capacity but not the other.

Functionally, that's what we're doing with animals. We find cases where they have process X but not process Y. That tells us something about the way those processes work, namely that they have to be separate, and we can expect different things about them.

The second reason it makes sense to study animals is that sometimes they show us that the processing we have is not as clever, or as rich, or as System 2-ish as we once thought. This is what we've seen in the context of over-imitation. We like to think that so much about problem solving is this rich causal understanding, that we're using the ideas we see in this very rational way, and thinking about costs and benefits. What the work on over-imitation shows us is that we're just dumber at solving problems than other animals. We might be more bound to these fast, automatic, and not very smart heuristics, like automatically taking on other people's ideas when we're solving problems, in a way that other animals aren't.

If you show chimpanzees a case where somebody is solving a puzzle box in a very inefficient way, they have some mechanism to completely ignore that and solve it on their own. If you show a four-year-old child a case where somebody is solving a puzzle box in an inefficient way, they cannot override the information that they got. The extent to which they use that automatically is so built-in that it's going to not just make the child solve the problem the wrong way, it's going to make them unable to reason about the causal structure of that task in future cases. This is remarkable.                                 

We like to think that we're less System 1-ish than other animals, but in the social domain, particularly when it comes to other people's perspectives and ideas, we might be more System 1-ish than animals. We have a set of implicit processes to take on other people's perspectives that animals don't have. It's not that they're more explicit about it. It's just that they can't represent anybody else's perspectives, so they're not messed up by them. This is a case where I think we've seen why animals are important. They tell us that the smart capacities we felt we had might not be as smart as we thought, or that there might be some interesting downsides to these smart capacities, downsides we have to understand in order to better human behavior and to make sure we're behaving in ways that we're proud of.

If you're the kind of person who's interested in the glitches of human behavior—I take perspective contagion as a true glitch of human behavior; it's something that messes us up all the time—you want to know something about where those glitches came from. How deep are those processes? How easy are they going to be to override? How long have they been in our history? Those are going to give you some hints about how to deal with them.                                 

Scholars like Kahneman, Thaler, and folks who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work has this important window into where these glitches come from. We find that capuchin monkeys have the same glitches we've seen in humans. We've seen the standard classic economic biases that Kahneman and Tversky found in humans in capuchin monkeys, things like loss aversion and reference dependence. They have those biases in spades.                                 

That tells us something about how those biases work. That tells us those are old biases. They're not built for current economic markets. They're not built for systems dealing with money. There's something fundamental about the way we make sense of choices in the world, and if you're going to attack them and try to override them, you have to do it in a way that's honest about the fact that those biases are going to be way too deep.                                 

If you are a Bob Cialdini and you're interested in the extent to which we get messed up by the information we hear that other people are doing, and you learn that it's just us—chimpanzees don't fall prey to that—you learn something interesting about how those biases work. This is something that we have under the hood that's operating off mechanisms that are not old, which we might be able to harness in a very different way than we would have for solving something like loss aversion.                                 

What I've found is that when the Kahnemans and the Cialdinis of the world hear about the animal work, both in cases where animals are similar to humans and in cases where animals are different, they get pretty excited. They get excited because it's telling them something, not because they care about capuchins or dogs. They get excited because they care about humans, and the animal work has allowed us to get some insight into how humans tick, particularly when it comes to their biases.                                 

When folks hear that I'm a psychologist who studies animals, they sometimes get confused. They wonder why I'm not in a biology department or an ecology department. My answer is always, "I'm a cognitive psychologist. Full stop." My original undergrad training was studying mental imagery with Steve Kosslyn and memory with Dan Schacter. I grew up in the information processing age, and my goal was to figure out the flowchart of the mind. I just happen to think that animals are a good way to do that, in part because they let us figure out the kinds of ways that parts of the mind dissociate. I study animals in part because I'm interested in people, but I feel like people are a bad way to study people.                                

I remember working in Dan Schacter's lab and running tests of implicit memory as an undergrad. These are ways that you can remember things through more implicit processes: I show you a list of words, and later you do a word scramble task. Because you've remembered those words, even though I never told you to remember them, it affects the way you solve those problems. You're going to solve them faster for words that you saw before. You don't realize this. Nothing's explicit.                                 

Human participants have ideas about what the study is about, and they're trying to figure it out. The whole time they're in a study, they're not just being a subject or a participant, they're trying to figure out the task. Afterwards, you do this implicit memory test. They're not supposed to know what the test is about, and you ask them, and they're like, "I think this is a test about implicit memory." You have to throw that subject out, and it's just very sad.                                 

I got to the point where I hated the fact that people have this meta-awareness that they're in the study. People have the meta-awareness that they're humans, and they're constantly trying to figure stuff out. People are also messy because they have culture, they have lots of experiences, and they've been taught stuff. It's hard to get the human mind so pure. One of the reasons that my closest colleagues at Yale are people like Paul Bloom, Karen Wynn, and Frank Keil, people who study kids, is because they have to get rid of all the experiences that humans have in order to study humans. They have to knock out all the experiences that happened after six months; that's the only way we can get this pure window into what the mechanisms of the human mind are.                                 

One of the reasons I turned to nonhuman animals was to alleviate those concerns of studying humans. They just don't have meta-awareness about what we're trying to study. They're not affected by all that stuff they've learned from other individuals. They can give you this window into the mind that's purer than the kind of thing we see in human subjects.                                 

I'm a psychologist in the Developmental Psychology program. That's in part because the kinds of questions I look at, and in some ways the logic of the method I use, are very similar. People who study human babies and kids try to look at the human organism without certain human learning experiences. I'm doing the same thing, but I'm taking a mind without human-unique experiences. We're getting rid of that stuff and seeing how minds that are not built to pick up on language and human culture make sense of the world.                                 

We study two such minds. One makes sense for studying the nature aspects of humans. We take the mind that's closest to us phylogenetically but lacks human uniqueness, and that means turning to nonhuman primates. We pick a nonhuman primate like a rhesus monkey, which is small and compact enough for us to study. I have great respect for colleagues who study chimpanzees, which are harder to work with. Anything that we learn about any of the species in the primate order is going to tell us about the closest branch to us as humans.              

We try to study primates in a relatively natural environment, which limits what we can do. It would be great to study primates purely in the wild, but then they're not habituated to humans, and we can't run experiments. I can't just watch their behavior; I need to run them in these special kinds of studies. We use a unique population of rhesus monkeys that live on an island known as Cayo Santiago, which has been a research station since the 1930s. That means the monkeys are habituated to humans; they're used to us hanging around them. Even though they're living in this naturalistic setting, we can set them up in little experiments where we show them plays, or have them steal from us, and so on. It's all the benefits of working in captivity without as many of the drawbacks for the animals. We love it.

Studying rhesus monkeys gives us this nice window into the nature side of things. They're phylogenetically pretty close to us—same order of primates—and that can tell us something about the way our mind is shaped by the natural stuff, like all the genetics that we get.                                 

But on the nature/nurture balance, we also want to get insight into what we as humans get from the nurture side of things. What do we get from the fact that we live in an artifact-rich culture? What do we get from the fact that we hear human language from the day we're born, and even before? To test that, we have to find a different nonhuman species, not one that's closely related to us, but one that happens to live in these humanlike environments. That's what's caused us to turn to domesticated dogs.                                 

Domesticated dogs grow up in the same environment as a human child. If you get a puppy at the same time you have a baby, the two grow up over the same period, but four years later the puppy will be doing things that are very different from what the human child is doing. Given that they have the same input, why? This is why we turn to dogs.                                 

If we see similarities, we can say that maybe these similarities in cognition come from living in the same environment. Maybe they come from hearing a language-rich environment, or from living in this artifact-rich culture. When we see differences, we know that just experiencing that input alone can't shape the cognitive abilities that we're talking about.

We test dogs at Yale's new Canine Cognition Center. We don't have dogs in the Center, but we have dogs in New Haven and around Connecticut. The dogs' human guardians bring them in for testing. It's very fun for the dogs because they get lots of treats and get to cuddle with all my undergrads. It's fun for their guardians because the dogs go through the Yale degree program just like the undergrads do, and they get little Yale diplomas and certificates. They get into Paw Beta Kappa, and the guardians quite love this. It's a nice system for everybody.                                 

The good thing about bringing dogs in from the community is that we're testing dogs that are living in human environments. We know that those are the same environments in New Haven that Paul Bloom's children and Karen Wynn's babies are growing up in. Sometimes it's the same households as the children. We are an equal opportunity Canine Cognition Center, which means anyone who wants to sign up on our website with their dog can come in. The only dogs we screen out are ones that are either too aggressive to be there, and that's just for our staff's safety, or too anxious, and that's because it's not fun for the dogs.              

We have all ages of dogs, which is cool. Canine cognition ends up being one of the few fields where you can study aging in nonhuman animals. In most species, it's hard to get subject numbers to look at, say, cognitive decline, or even developmental studies. There have been some papers published on chimpanzees, but it's always a struggle to get the subject numbers you need to look for those developmental differences. We have this unique way of doing that with dogs, because there are puppies in New Haven and much older dogs in New Haven, so we can use dogs as an animal model of cognitive aging, and even cognitive decline.                                 

One of the reasons we came to dogs, beyond just the fact that they're a good model generally, is that we thought dogs would be the perfect test case for whether humans were truly unique in terms of showing this perspective contagion. One thing that any dog owner will tell you is that dogs care about what you're up to. They're constantly watching your behavior. They're constantly paying attention to the cues you give them. They were built, over domestication, to care about us, to work with us, and to cooperate with us. If any nonhuman species were going to be good at having mechanisms to catch our perspectives, it would be dogs.                                 

When we started the Canine Center, we said we were going to prove ourselves wrong about perspective contagion being truly unique to humans. We were going to see if dogs can do this stuff. We've now tested dogs on this overimitation study. We've presented dogs with a case where people are solving a problem in an inefficient way, and we ask whether that affects them. We were so sure dogs were going to show this that we went ahead and tested non-domesticated canids. We tested dingoes, assuming the dingoes wouldn't show it for sure and the dogs would, and we'd have this great paper.                                 

Turns out we were only half right. Dingoes don't show it, of course, because they're not paying attention to human behavior, but it turns out that dogs don't show this either. Dogs, when they watch an inefficient solution, are just not affected by it. They figure it out on their own. Human children in exactly the same box completely fall apart when they watch somebody do it in an inefficient way. It turns out that even your dog is smarter at filtering the bad information that he sees from other agents in the world than you are.                                 

One of the nice things about the animal cognition work is that it ends up going back and informing human cognition in rich ways. We had historically done some work on economic irrationalities in monkeys, trying to see whether the monkeys show things like loss aversion and reference dependence. The main folks that are citing those papers aren't people who study animals, they're people who study human economic biases and behavioral economics. The findings that we're getting in animals are feeding back in interesting ways to the theories that humans have about how some of these biases work.                                 

The same thing has been true for overimitation: the differences we see in animals in these sorts of studies sometimes push people forward in thinking about human effects. It also happens in the context of studies on whether or not monkeys understand what others see and know. These are papers that are being cited by experimental philosophers who are trying to understand how knowledge representations feed into our moral behaviors.                                 

Using the animals to figure out these dissociations is incredibly useful for human psychologists who are trying to figure out how these processes work. When you see that processes dissociate in an animal, you can go and look at whether they dissociate in a human. It ends up launching interesting new work in lots of different domains that you might find surprising.                                  

In five years' time we want to better understand if we're on the right track about this puzzle of what makes humans special. The hypothesis right now is that what makes humans special is the fact that nonhuman animals can't get out of their current here and now; they can't get out of the facts of the world. That means they can't think about counterfactuals, they can't think about the future well, they can't think about the past well. They also can't take others' perspectives in the same way that we can. But, as with all these hypotheses, we could be wrong.                                 

One of the exciting things that's going on, in addition to the cool science stuff, is that I'm starting a new role at Yale. I'm going to be starting up as Yale's new head of Silliman College. Yale has a set of residential colleges, which are communities on campus of students who live together, participate in the same community, and have the same academic dean; the head of the college is like the head of the family. On that family analogy, I'm there to be an intellectual head, a social head, and an athletic head, which I quite worry about, but we'll see how that goes.                                 

So far, it's been an incredible privilege. We're at an interesting time in academia, with a lot of the things that are happening both in our country and on our campuses. It's an exciting time to think about how the science of psychology can help us think about how to do better with things like micro-aggressions on campus, and how to share perspectives across relatively big dividing lines. There are lots of ways that the science of psychology can help us navigate some of those issues. My hope is that I can infuse a little bit of science into these discussions.       

My old colleague at Harvard, Dan Gilbert, once said that this is the thing scientists stake their reputation on. They say the thing that makes humans unique is X. Then it's sad because somebody comes around and shows that X isn't true, and you staked your reputation on this dumb thing. Where I would love to be in five years' time is either to know that the X I propose, that humans are the only species that can think in non-factive ways, was wrong. That would be great, because we'd be moving on to a different question. Or to see that so far, all of the evidence is pretty consistent with it. Either way will give us some important insight into this question about what makes us special, and will also tell us a lot about what makes humans a glitchier species than we expect.