How Should a Society Be?

Brian Christian [12.1.16]

This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. That deliberate flexibility and ambiguity are what allow them to work as a living document that stays relevant. But here we are in a world where we have to ask of some machine-learning model: Is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean the model has an equal false positive rate across those classes? An equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions.

HOW SHOULD A SOCIETY BE?

My academic background is in computer science and philosophy. My work has been about the relationship between those two fields. What do we learn about being human by thinking about the quest to create artificial intelligence? What do we learn about human decision making by thinking of human problems in computational terms? The questions that have interested me over the years have been, on the one hand, what defines human intelligence at a species level? And secondly, at an individual level, how do we approach decision making in our own lives, and what are the problems that the world throws at us?

I find myself interested at the group level, the society level, and the civic level in a couple of different ways. In the sciences right now there’s this profound movement towards openness, reproducibility, and the sharing of models and data. I’ve been encouraged by what I’ve seen over the last few years in terms of the norms of the sciences changing. It used to be that people were scared to publish their models because that was the secret sauce; that was their advantage over other research groups.

In the field of deep learning, for example, there’s this group at Berkeley that just put their models out on the Internet. What happened was that their work became the touchstone work that everyone else was using and working in reference to. They’re recreating the social norms of the field toward a mode in which it’s considered suspect if you don’t share your models and you don’t allow others to reproduce your work. That, to me, is an encouraging trend. Where I get interested in this is applying that more to a civic space, driving towards the idea of reproducibility in journalism.   

You might see a chart in a journalistic piece that says, “This is the crime rate over time,” or “this is the mortgage rate over time,” or something like that. If you’re lucky you’ll get some line at the bottom that just says, “Source: Energy Information Administration.” That might be the most that you get. There is nothing remotely approaching the level of reproducibility that the sciences hold as the ideal. There is no way for a citizen to rebuild that chart and analysis from first principles.

This is something that interests me. I’m working with a group of collaborators to see what can be done at the intersection of computer science and journalism to create ways for people to present their stories and their claims in a way that can be reproduced. This is something that I think about all the time as a citizen. I’m always reading stories and trying to scratch through the headline to the data source that’s driving the headline.

For example, I’ll see a claim that says, “San Francisco Median Rent Stabilizes Month Over Month For the First Time in X Number of Years.” It makes for a good story because there’s something of short-term interest that’s happened in some time series, but it’s rare that that story will provide me a way to see the historical data and look at the time series for myself and come to my own conclusions.

I remember in Obama’s most recent State of the Union, he was talking about the unemployment figures, and he said, “We’ve created 14 million jobs since I came into office.” This was rebutted by FactCheck.org, which pointed out that he was counting from the bottom of the recession rather than from the beginning of his term, and that he was only counting the private sector, because public-sector employment was still down. My attitude is that if the claims and the charts were being provided in a form that showed the work, we would not need a point–counterpoint in prose. I just think prose is the wrong medium for that.

One of the things I’ve been working on in the civic space is thinking about how to provide the tools for someone to make these claims in a way that’s reproducible, that includes the actual model building and the assumptions. I think of it as a problem with a pretty straightforward technical solution, but there’s also a cultural solution. I’m very encouraged, for example, at the technical level by the Jupyter Project.

The Jupyter Project, which started out of UC Berkeley, is a web platform for reproducible data analysis. You can include blocks of prose alongside blocks of computer code that run in the browser and generate the figures right there. This is something that has been widely adopted in data science and in the hard sciences. I’m interested in bringing this to the civic space and creating a version that journalists can use that will have an impact on citizens.
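
To make that concrete, here is a minimal sketch of the kind of notebook cell being described, in Python. The file name and column names are hypothetical placeholders rather than a real data source; the point is that every step from raw data to figure is visible and rerunnable.

    # Minimal sketch of a reproducible, notebook-style analysis.
    # "median_rent.csv" and its columns are hypothetical placeholders;
    # a real notebook would point at the published source data.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Load the raw time series that the published chart claims to summarize.
    rents = pd.read_csv("median_rent.csv", parse_dates=["month"])

    # Show the transformation explicitly so a reader can audit it.
    rents = rents.sort_values("month")
    rents["month_over_month_change"] = rents["median_rent"].diff()

    # Rebuild the figure from first principles.
    plt.plot(rents["month"], rents["median_rent"])
    plt.xlabel("Month")
    plt.ylabel("Median rent (USD)")
    plt.title("Median rent over time, reconstructed from source data")
    plt.show()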

When journalists do interviews, they take notes and they’ll publish certain quotes from the interview, but they have this larger set of notes that they don’t publish. In fact, they often don’t share them with their own colleagues. I see this as parallel to the problem in the hard sciences. It’s like a prisoner’s dilemma: If everyone is trying to hoard their information, then it makes sense for you to do the same, but if everyone is sharing their information, it makes sense for you to do that, too.

It’s fair to expect that we don’t trust a claim that cannot be presented in a reproducible form. If a politician cites a particular figure, it’s right to ask where they got that number. They don’t always disclose it, but it’s appropriate to create a norm where if they don’t disclose it, we treat it with total skepticism. If you’re going to use the Energy Information Administration figures, you have to provide some kind of Jupyter Notebook in which you say, “Okay, here’s the code by which I scraped those.”

I’m building a prototype of this with some designers and data scientists, as well as some of the people who worked in the hard sciences on these notebooks. It remains to be seen what the adoption will be, but every time I talk to journalists about this they immediately recognize the problem. People seem genuinely enthusiastic. This is probably half of what I’m working on right now.

~ ~ ~      

Academically, my background is computer science and philosophy, and I have thought of each discipline in terms of the other. In philosophy your training is assembling and disassembling arguments: Someone lays out a case for X and you’re trained to pick that apart into its component pieces—find the weak link and attack. Having that training in parallel with being trained as a computer scientist instilled in me this fundamental belief that prose is the wrong form for a lot of the discussions that we have in civics.

I judged a debate tournament when I was in college. It was pretty eye-opening for me to think about live rhetorical argument making as this strange genre that doesn’t have the properties you would want for civic discourse. Someone goes up and gives a ten-minute speech, and they make, let’s say, a dozen different arguments for their position. The other person then goes up to give a ten-minute counter speech, and they just choose one of the weakest of those twelve arguments and spend their entire time hammering away on it, humiliating the other person. If you’re not paying attention, you come away with the impression that the second person has won the debate when, in fact, there were eleven unanswered arguments in favor of the first speaker.

Someone who has been very influential in my career is Douglas Hofstadter. He writes this book in the ’70s, Gödel, Escher, Bach, which is this insane genre-transcending book about number theory, music theory, and visual art history, and it ultimately makes an argument about human consciousness. Hofstadter is someone who, especially in that particular book, just rode roughshod over the division between computer science and philosophy. He’s saying that he’s going to turn to typographical number theory to make an argument about what gives rise to human consciousness. Encountering that as a teenager was hugely influential for me. My approach to life has been driven by this conviction that there is a profound relationship between the biggest philosophical questions and the formal rigor that arises out of thinking about things in mathematical or computer-scientific terms.

I think of cognitive science as one of the major estuaries, if you will, of computer science and philosophy; that’s been one of my major interests since the very beginning. Daniel Dennett is someone who has been coming up a lot for me in the other body of work that I’ve been doing, which tackles the intersection of computer science and civics in a totally different way: thinking about ethical questions with respect to artificial intelligence.

Dennett has this speech from the 1980s called “The Moral First Aid Manual.” There is this major problem, in his take, within the field of ethics, which is that it assumes unbounded computational power in the person deciding which action is the correct one to take. You see this, for example, in the trolley car problem, the famous thought experiment. A trolley is going down a track where it will hit either one person or five people. You have this lever you can pull, or you can maybe throw someone in front of the trolley.

Many people have made the point that this is one of these very abstract things that lurches into being a practical problem when you start talking about self-driving cars. I was having a conversation with Peter Norvig at Google about self-driving cars and the trolley car problem. He made the point that, yeah, if you’re programming a self-driving car, it’s a little bit like a trolley going down the track. On one side is one person, on the other side is five people, and there’s this button that you can press—Oh, too late, you only had 50 milliseconds to make the choice. When you take computational constraints into account, it’s just a completely different problem than it is if you have this pause button on the universe where you can sit and think about the consequences.

We’re now careening into a world where these things become practical problems. Think of the way that we studied prime numbers for hundreds of years just for the pleasure of doing so, and mathematicians boasted about how useless prime numbers are. Hardy said it was the most profoundly useless branch of mathematics. Then all of a sudden in the 20th century, with the advent of cryptography, it becomes extremely important. Now the global economy is riding on these enormous prime numbers. We’re seeing the same thing happening with these ethical thought experiments that have just festered in philosophy for decades or more; all of a sudden they’re becoming practical problems that we need to figure out some provisional answer to.

There’s been a big news story in the last month or two about algorithms deciding who should get parole. We’ve trained these machine-learning models to predict recidivism. ProPublica did a big investigation and found what appeared to be substantial racial bias in the output of one of these algorithms.

This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. That deliberate flexibility and ambiguity are what allow them to work as a living document that stays relevant. But here we are in a world where we have to ask of some machine-learning model: Is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean the model has an equal false positive rate across those classes? An equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
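
As one illustration of what “getting precise” might involve, here is a small Python sketch that computes false positive and false negative rates separately for each group. The records and field names are invented for the example; they are not drawn from any real system.

    # Illustrative only: per-group error rates for a binary classifier.
    # Each record is (group, true outcome, model prediction), all invented.
    from collections import defaultdict

    def error_rates_by_group(records):
        """Return {group: (false_positive_rate, false_negative_rate)}."""
        counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        for group, label, prediction in records:
            c = counts[group]
            if label == 0:
                c["neg"] += 1
                c["fp"] += (prediction == 1)
            else:
                c["pos"] += 1
                c["fn"] += (prediction == 0)
        return {
            g: (c["fp"] / max(c["neg"], 1), c["fn"] / max(c["pos"], 1))
            for g, c in counts.items()
        }

    records = [
        ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
        ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 1),
    ]
    print(error_rates_by_group(records))
    # Requiring these rates to be equal across groups is one possible
    # formalization of "fairness"; which rates to equalize, and at what
    # cost in accuracy, is exactly the civic question raised above.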

To the question of civics: How should a society be? This is a question that goes back in philosophy at least to the Greeks, but it has in a way become less of an explicit concern since then. In The Republic, Plato is talking about what an ideal society looks like. The society, the polis, the civic unit is less present in more recent philosophy.

For me, it feels like it is now reemerging as one of the critical spheres, especially because of the influence of AI. We now need to define these societal concepts in a very explicit way. We have a notion of what good driving looks like, we have driving tests and so forth, but if there’s only going to be a single driving algorithm that’s going to be driving millions of people for tens of millions of miles every day, we have to think a lot more carefully than we do when everyone instantiates a slightly different version of what being a good driver means.

The question of defining a society’s values and norms is going to be a crisis in the next decade or two, but I’m ultimately optimistic about that. Most AI systems operate by optimizing with respect to some objective function, and we get to decide what that objective function should be.

There are these maxims about “He who would trade liberty for safety deserves neither,” or something like that. If you look at something like algorithms for parole and probation, all of a sudden a question like that is thrown into extremely sharp relief, where on one hand someone says, “Here is a model that has an error rate of x,” and on the other, “Here’s an algorithm that’s more fair, but it has an error rate of y.” So, statistically, we know that on average there’s going to be more recidivism, more crime, or more mistakes as a result of following this procedure, but we also know that it’s more fair with respect to these protected classes. We as a society get to decide what the coefficients are on the competing values that we have.
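
One way to picture “choosing the coefficients” is as a weighted objective that scores candidate models. The sketch below is purely illustrative, with invented numbers for a model’s error rate and its fairness gap; it is not how any real parole system is specified.

    # Illustrative only: a weighted objective that trades predictive error
    # against a fairness gap. All numbers and the weight are hypothetical.

    def overall_objective(model_error, fairness_gap, fairness_weight):
        """Lower is better; fairness_weight is the societal 'coefficient'."""
        return model_error + fairness_weight * fairness_gap

    # Two hypothetical candidate models.
    model_a = {"error": 0.20, "gap": 0.10}  # more accurate, less fair
    model_b = {"error": 0.24, "gap": 0.02}  # less accurate, more fair

    for w in (0.0, 1.0, 2.0):
        scores = {name: overall_objective(m["error"], m["gap"], w)
                  for name, m in (("A", model_a), ("B", model_b))}
        print(f"fairness weight {w}: choose model {min(scores, key=scores.get)}")
    # With no weight on fairness, accuracy alone picks model A; as the
    # weight grows, the fairer model B wins. Choosing w is the civic choice.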

There are a number of problems that are all pretty tightly bound. Concentration of power, concentration of wealth is going to be—it already is—one of the major defining problems of the next century.

Opacity is also going to be a big problem. Friends of mine who work at places like Facebook and Instagram talk about machine-learning methods in terms that I find disconcerting. They say, “We threw a bunch of these huge machine-learning models at our newsfeed; they’re better than everything that we’ve come up with on our own, but no one really knows why and no one really knows what’s going on inside the box. We can’t turn it off because it makes us too much money.”

What I find interesting is that we have built a justice system that is in large part centered on the idea of explaining yourself. You’ve taken some action: explain yourself. What was the thought process that went into that action? I don’t have a specific case in mind, but imagine a police officer pulls a gun on a suspect and shoots, and it turns out that the suspect was unarmed. Their ability to explain the thoughts that they were having will have a huge impact on whether they spend their life in jail or get a two-week suspension.

That strikes me as pretty flawed, because a lot of these decisions happen in 50 milliseconds, so the idea that you’re going to stand in front of a jury and explain all of the factors that you weighed is bogus. The truth is, all of this was happening at a completely inarticulate level. Your ability to reconstruct an explanation for that, or confabulate an explanation for that, will determine what happens to the rest of your life.

I’m totally with the argument that “Giant neural networks are black boxes, but so are we.” That’s true. But there’s something unsettling about giving up what a classic “if–then–else” computer program offers: when it does something, we can go back and say where things went wrong. Our entire notion of legal responsibility and justice depends on the ability to explain why something happened, so as we start to deploy these systems that basically can’t do that, or that we can’t peer into and understand why they did something, are we going to have to somehow engineer in an ability for the system to self-report?
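
The contrast being drawn can be made concrete. In a toy rule-based procedure like the one below (the rules, fields, and thresholds are invented for illustration), every decision can be traced to a specific, citable rule; a large learned model offers no comparable line to point to.

    # Toy rule-based decision procedure: every outcome maps to an explicit,
    # inspectable rule. Fields and thresholds are invented for illustration.

    def loan_decision(income, existing_debt, missed_payments):
        if missed_payments > 2:
            return "deny", "rule 1: more than two missed payments"
        if existing_debt > 0.5 * income:
            return "deny", "rule 2: debt exceeds half of income"
        return "approve", "rule 3: default approval"

    decision, reason = loan_decision(income=50_000, existing_debt=30_000,
                                     missed_payments=0)
    print(decision, "-", reason)  # deny - rule 2: debt exceeds half of income
    # Replace loan_decision with a trained neural network and the "reason"
    # becomes millions of learned weights rather than a rule we can cite.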

In many cases there isn’t an answer. Why did you take this action? Well, these edges on this neural network graph were weighted this way, and so this thing happened. That’s not necessarily a satisfying explanation. But we have a justice system that’s dependent on motive, on intent, and all these things.

~ ~ ~                                 

I came out of this math and science and engineering magnet high school, went to Brown University, and double majored in computer science and philosophy. Along the course of my undergraduate trajectory, I decided that what I really wanted to be was a writer, so I went to graduate school and got an MFA in creative writing. I was probably one of the few people to ever go from the Computer Science Department into an MFA program.

My career after graduate school has been in writing nonfiction books. My first book was called The Most Human Human. It’s about my participation in a Turing test competition, which I’m sure many folks will be familiar with. The basic idea is that you have humans and computer programs claiming to be humans in a chat room, and a panel of judges has five minutes to figure out which are the real humans and which are the imposters. I took part in the main Turing test competition in 2009, so my first book is a recounting of that experience, but more broadly it looks at the history of artificial intelligence itself and the history, within philosophy, of how humans have answered one of the longest-standing questions in the field: What makes humans unique and distinct and special? Going back to Descartes, back to Aristotle, philosophers have answered this question by contrasting ourselves with animals.

In many ways computer science, and AI specifically, has inverted a 2,500-year-old question, which for me is very thrilling. We now think about what defines and characterizes human intelligence by making a comparison to machines rather than to animals. It has just completely changed the framework by which we think about the human experience. So the book explores that question.

More recently, I published a book in collaboration with Tom Griffiths, a cognitive scientist at UC Berkeley and a good friend of mine, called Algorithms to Live By, which looks at human decision making through the lens of computer science. The basic idea is that there is this huge class of problems that we face in everyday life—deciding where to go out to eat, what house to live in, how to manage our limited space in our house or office, how to structure our time—that take on a particular structure as a result of our finite information, finite time, finite ability to compute.

We think of these as intrinsically human problems. They’re not. They closely parallel a set of the canonical problems in computer science. This gives us an opportunity to learn something about the problems that we confront in our own lives and how to make better decisions in our own lives by thinking about human problems through the perspective of computer science.

Gerd Gigerenzer’s work on heuristic decision making is very relevant here. One of the fundamental premises of theoretical computer science is that we can, in effect, grade problems by how hard they are. There are certain classes of so-called intractable problems, where there is no efficient way to get an exact answer reliably. What computer scientists do when they’re up against intractable problems is reach into this huge toolkit—relaxations, regularizations, randomized algorithms, and so forth. One of the things that emerges is that, in many cases, getting an answer that’s pretty close most of the time is better than grinding your way through to the exact solution.

One of my favorite examples of this is in an area called primality testing. Most of the encryption on the web relies on generating these huge prime numbers. That means we need good algorithms for determining whether a number is prime. The algorithm that’s used today in practice is called the Miller–Rabin test, and a single run of it can be wrong as much as a fourth of the time. To me this is extremely fascinating. I’ve talked with engineers at OpenSSL and people who work on encryption, and I’ve asked them how many Miller–Rabin tests are enough if each one can be wrong 25 percent of the time. And the answer is, “About forty.”

I just love that we’ve had to fix the amount of certainty that we demand in this application. In this case, someone as recently as 2002 discovered a deterministic polynomial-time algorithm for testing primality, but we still use the one that can be wrong a fourth of the time; we just run it forty times.
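
For the curious, here is a compact Python sketch of the repeated Miller–Rabin test being described. A single round can be fooled by a composite number at most a quarter of the time, so forty independent rounds drive the error bound down to (1/4)^40.

    # Minimal sketch of the Miller–Rabin probabilistic primality test.
    # Each round wrongly passes a composite with probability at most 1/4,
    # so k independent rounds err with probability at most (1/4)**k.
    import random

    def is_probably_prime(n, rounds=40):
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # found a witness: n is definitely composite
        return True  # probably prime; error bound is (1/4)**rounds

    print(is_probably_prime(2**61 - 1))  # True: a known Mersenne prime
    print((1 / 4) ** 40)                 # the error bound for forty rounds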

There’s a powerful message there in thinking just at the broadest level about rationality. What does rational thought, rational decision making look like? There’s this bias we have that rational decision making means always coming up with the right answer or considering all the possibilities—following a reliable, deterministic process that’s going to give you the same definite answer every time; it’s both precise and certain.

Computer science offers us a better standard than, for example, economics, for thinking about rational decision making when you’re up against a genuinely hard problem. You don’t have the luxury of computing all possibilities, or arriving at the same solution reliably with total certainty. In this case, what emerges is a standard of rationality that is, I would say, more useful, more approachable, and more human.

A lot of human decision making falls into categories of problem for which we know the optimal algorithms. For example, there’s a class of problems called explore/exploit problems, where you have to divide your time between gathering data and using the data that you have to get some kind of reward.

There’s a classic study from the 1970s that Amos Tversky did with a box and two different lights. One of the two lights would go on at any given press of a button with some probability, let’s say one went on 40 percent of the time and the other went on 60 percent of the time, but you don’t know that ahead of time. You’ve got a thousand opportunities either to observe which light turns on or to place a bet on which light you think will turn on, but not get to observe it and not know the result of your bet until the end of the study. This is a case where you can work out the optimal way to play this game. It turns out to be something like observe thirty-eight times in a row, then whichever light turned on more, blindly make a series of 962 bets on that light. We can say this is the optimal strategy for this game.
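
As a rough illustration (not the original study’s analysis), here is a small simulation of that observe-then-commit strategy under the setup described, using the 38/962 split quoted above and the illustrative 60/40 probabilities.

    # Rough simulation of the observe-then-commit strategy described above:
    # observe 38 trials, then bet the remaining 962 on whichever light came
    # on more often. The probabilities are the illustrative 60/40 split.
    import random

    def observe_then_commit(p_left=0.6, n_trials=1000, n_observe=38, seed=None):
        rng = random.Random(seed)
        left_count = sum(rng.random() < p_left for _ in range(n_observe))
        bet_on_left = left_count > n_observe - left_count
        wins = 0
        for _ in range(n_trials - n_observe):
            left_lit = rng.random() < p_left
            wins += (left_lit == bet_on_left)
        return wins

    # Average winnings over many simulated subjects.
    runs = [observe_then_commit(seed=i) for i in range(2000)]
    print(sum(runs) / len(runs))  # near 0.6 * 962 when the better light is found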

Then there’s the question of what people do when you give them this task. Do they do anything remotely like that? No. What they do is observe for a little while, place a series of five or ten bets, go back to observing, then go back to betting, and so forth. And to me this is interesting.

There’s always an interesting moment where you discover a dissonance between human behavior and the prediction you’d get from the model, which is either people are just doing it wrong, or somehow the model is failing to take something into account. Maybe the people are solving a different problem than the one you’re modeling.

In this case, someone looked at the data for the study and concluded that if the rate at which these lights are turning on is on a random walk, then human behavior is actually pretty close to optimal; it’s exactly the type of thing you’d want to do. You’d want to observe for long enough to pin it down, make a series of bets until the point at which it might have drifted away, re-observe, and so forth.

In the literature this is known as the “restless bandit” problem—the idea of a process that’s on a random walk. People are presented with a static bandit, but they’re solving it as though it’s a restless bandit. They’re missing out on some payout that they could have gotten if they’d done the correct thing. They were told clearly at the start of the study that these probabilities were fixed, so why are they behaving as though the probabilities are on a random walk? A fair answer to that question is that they’re subjects in a research study: Why would they believe what you tell them? Experimenters famously lie to research subjects all the time.

In many ways it’s quite reasonable to say, “Even though I’ve been informed that these lights are on these constant probabilities, I’m going to keep checking because I don’t totally believe that.” More interestingly, the restless bandit problem is still considered an open problem. We do not have good, efficient, optimal strategies for the restless bandit problem. Not only are humans solving a subtler version of the problem than the one we were originally modeling, they’re solving a problem where we don’t have good models yet.
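
To see why re-checking makes sense when the world might be drifting, here is an illustrative sketch (not a model from the literature) in which the light’s probability follows a bounded random walk. A strategy that periodically pauses to observe can track the drift; a commit-once strategy cannot.

    # Illustrative "restless" bandit: the light's probability drifts on a
    # bounded random walk. Re-observing periodically lets a bettor track it.
    import random

    def drift(p, rng, step=0.05):
        """One random-walk step, clipped to stay inside [0.05, 0.95]."""
        return min(0.95, max(0.05, p + rng.choice((-step, step))))

    def run(strategy, n_trials=1000, seed=0):
        rng = random.Random(seed)
        p_left, wins, bet_on_left, history = 0.6, 0, True, []
        for t in range(n_trials):
            left_lit = rng.random() < p_left
            if strategy == "recheck" and t % 50 < 10:
                history.append(left_lit)           # spend this trial observing
                recent = history[-10:]
                bet_on_left = sum(recent) >= len(recent) / 2
            else:
                wins += (left_lit == bet_on_left)  # spend this trial betting
            p_left = drift(p_left, rng)            # the world keeps moving
        return wins

    print("commit once:", run("commit"))
    print("re-observe :", run("recheck"))
    # Depending on how the probability drifts, the re-observing strategy can
    # more than pay for its forgone bets by avoiding long stretches of
    # betting on what has become the worse light.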

There is a fruitful dialogue between the lab data that psychologists and cognitive scientists are getting and the optimal strategies that computer science is discovering. Each informs the other. In my view, it reclaims a little bit of rationality for humans.

It’s easy to conduct a study like this and say that people are dumb, that they’re doing a strange thing, or an uninformed thing. But in many cases—and I think this is a good example—a closer look reveals that, in fact, people are doing an extremely subtle and, in many ways, correct thing.