2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Father of Behavioral Economics; Recipient, 2017 Nobel Memorial Prize in Economic Science; Director, Center for Decision Research, University of Chicago Graduate School of Business; Author, Misbehaving
Who's Afraid of Artificial Intelligence?

My brief remarks on this question are framed by two one-liners that happened to have been uttered by brilliant Israelis. The first comes from my friend, colleague, and mentor Amos Tversky. When asked once what he thought about AI, Amos quipped that he did not know much about it; his specialty was natural stupidity. (Before anyone gets on their high horse, Amos did not actually think that people were stupid. This was a joke.)

The second joke comes from Abba Eban, who was best known in the United States from his years as Israel's ambassador to the United Nations. Eban was once asked whether he thought Israel would switch to a five-day workweek. Nominally, the Israeli workweek starts on Sunday morning and runs through midday on Friday, though a considerable amount of the "work" done during those five and a half days appears to take place in coffee houses. Eban's reply to the query about a five-day workweek was: "One step at a time. First, let's start with four days, and go from there."

These jokes capture much of what I think about the risks of machines taking over important societal functions and then running amok. Like Tversky, I know more about natural stupidity than about artificial intelligence, so I have no basis for forming an opinion about whether machines can think and, if so, whether such thoughts would be dangerous to humans. I leave that debate to others. Like anyone who follows financial markets, I am aware of incidents such as the 2010 Flash Crash, in which poorly designed trading algorithms caused stock prices to fall suddenly, only to recover a few minutes later. But this example is more an illustration of artificial stupidity than of hyperintelligence. As long as humans continue to write programs, we will run the risk that some important safeguard has been omitted. So, yes, computers can screw things up, just as humans with "fat fingers" can accidentally issue an erroneous buy or sell order for a gigantic amount of money.

Nevertheless, fears about computers taking over the world are premature. More disturbing to me is the stubborn reluctance in many segments of society to let computers take over tasks at which simple models demonstrably outperform humans. A literature pioneered by psychologists such as the late Robyn Dawes finds that virtually any routine decision-making task, from detecting fraud to assessing the severity of a tumor to hiring employees, is done better by a simple statistical model than by a leading expert in the field. Let me offer just two illustrative examples, one from human resource management and the other from the world of sports.

First, let's consider the embarrassing ubiquity of job interviews as an important, often the most important, determinant of who gets hired. At the University of Chicago Booth School of Business, where I teach, recruiters devote endless hours to interviewing students on campus, a process used to select the few who will be invited to visit the employer and undergo another extensive round of interviews. Yet research shows that interviews are nearly useless in predicting whether a prospect will perform well on the job. Compared to a statistical model based on objective measures such as grades in courses relevant to the job in question, interviews primarily add noise and introduce the potential for prejudice. (Statistical models do not favor any particular alma mater or ethnic background, and cannot detect good looks.)
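
To make concrete what a "simple statistical model" can mean here, below is a minimal sketch in Python of the kind of unit-weighted ("improper") linear model Dawes championed: standardize a few objective predictors and add them up with equal weights. The predictor names and sample data are entirely hypothetical, not drawn from any actual hiring study; the point is only how little machinery such a model requires.

```python
# A minimal sketch of a Dawes-style "improper" (unit-weighted) linear model
# for ranking job candidates. The predictor names and the sample data are
# hypothetical placeholders; the point is only that the model is this simple.

from statistics import mean, stdev

# Hypothetical objective measures for each candidate.
candidates = {
    "A": {"relevant_gpa": 3.9, "work_samples": 7, "years_experience": 2},
    "B": {"relevant_gpa": 3.2, "work_samples": 9, "years_experience": 5},
    "C": {"relevant_gpa": 3.6, "work_samples": 4, "years_experience": 1},
}

predictors = ["relevant_gpa", "work_samples", "years_experience"]

def zscores(values):
    """Standardize so each predictor contributes on a common scale."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Standardize each predictor across candidates, then sum with equal weights.
columns = {p: zscores([candidates[c][p] for c in candidates]) for p in predictors}
scores = {
    name: sum(columns[p][i] for p in predictors)
    for i, name in enumerate(candidates)
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"Candidate {name}: {score:+.2f}")
```

Dawes's finding was precisely that models this crude, with no fitted weights at all, tend to match or beat expert judges on routine prediction tasks.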

These facts have been known for more than four decades, but hiring practices have barely budged. The reason is simple: each of us just knows that when we are the one conducting the interview, we learn a lot about the candidate. It might well be that other people are not good at this task, but not me! This illusion of learning, in direct contradiction to the empirical research, means that we continue to choose employees the same way we always have. We size them up, eye to eye.

One domain where some progress has been made toward a more scientific approach to selecting job candidates is sports, as documented in Michael Lewis's book Moneyball and the movie based on it. However, it would be a mistake to think there has been a revolution in how decisions are made in sports. It is true that most professional sports teams now hire data analysts to help them evaluate potential players, improve training techniques, and devise strategies. But the final decisions about which players to draft or sign, and who should play, are still made by coaches and general managers, who tend to put more faith in their gut than in the resident geek.

One example comes from American football. David Romer, an economics professor at Berkeley, published a paper in 2006 showing that teams choose to punt far too often, rather than trying to "go for it" and get a first down or score. Since the publication of his paper, his analysis has been replicated and extended with much more data, and the conclusions have been confirmed. The New York Times even offers an online "bot" that calculates the optimal strategy every time a team faces a fourth down.
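
To see the flavor of the arithmetic underlying such analyses, here is a stylized expected-value comparison: going for it is the better choice whenever its expected point value exceeds that of punting. The probabilities and point values below are hypothetical placeholders, not Romer's estimates or the Times bot's output.

```python
# A stylized sketch of the fourth-down expected-value comparison behind
# Romer-style analyses. All numbers are hypothetical placeholders, not
# estimates from Romer's paper or the NYT 4th Down Bot.

def ev_go_for_it(p_convert, value_if_convert, value_if_fail):
    """Expected point value of attempting the fourth-down conversion."""
    return p_convert * value_if_convert + (1 - p_convert) * value_if_fail

def decide(p_convert, value_if_convert, value_if_fail, value_if_punt):
    """Recommend the choice with the higher expected point value."""
    go = ev_go_for_it(p_convert, value_if_convert, value_if_fail)
    return ("go for it" if go > value_if_punt else "punt"), go

# Fourth-and-short near midfield (hypothetical numbers):
choice, ev = decide(p_convert=0.55, value_if_convert=2.0,
                    value_if_fail=-1.5, value_if_punt=-0.5)
print(f"Recommendation: {choice} (EV of going for it: {ev:+.2f} points)")
```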

So have coaches caught on? Not at all. Since Romer's paper was published, the frequency of going for it on fourth down has stayed flat. Coaches, who are hired by owners, based in part on interviews, still make decisions the way they always have.

So pardon me if I do not lose sleep worrying about computers taking over the world. Let's take it one step at a time, and see whether people are willing to trust computers with the easy decisions, the ones they already make better than humans do.