Edge Video Library

Edge Master Class 2015: A Short Course in Superforecasting, Class IV

Skillful Backward and Forward Reasoning in Time: Superforecasting Requires "Counterfactualizing"
Philip Tetlock
[9.15.15]

A famous economist, Albert Hirschman, had a wonderful phrase, "self-subversion." Some people, he thought, were capable of thinking in self-subverting ways. What would a self-subverting liberal or conservative say about the Cold War? A self-subverting liberal might say, "I don’t like Reagan. I don’t think he was right, but yes, there may be some truth to the counterfactual that if he hadn’t been in power and doing what he did, the Soviet Union might still be around." A self-subverting conservative might say, "I like Reagan a lot, but it’s quite possible that the Soviet Union would have disintegrated anyway because there were lots of other forces in play."

Self-subversion is an integral part of what makes superforecasting cognition work. It’s the willingness to tolerate dissonance. It’s hard to be an extremist when you engage in self-subverting counterfactual cognition. That’s the first example. The second example deals with how regular people think about fate and how superforecasters think about it, which is, they don’t. Regular people often invoke fate, "it was meant to be," as an explanation for things.


[33:47 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

Edge Master Class 2015: A Short Course in Superforecasting, Class III

Counterfactual History: The Elusive Control Groups in Policy Debates
Philip Tetlock
[9.1.15]

There's a picture of two people on slide seventy-two, one of whom is one of the most famous historians of the 20th century, E.H. Carr, and the other of whom is a famous economic historian at the University of Chicago, Robert Fogel. They could not have had more different attitudes toward the importance of counterfactuals in history. For E.H. Carr, counterfactuals were a pestilence: a frivolous parlor game, a methodological rattle, a sore loser's history. It was a waste of cognitive effort to think about counterfactuals. You should think about history the way it did unfold and figure out why it had to unfold the way it did—almost a prescription for hindsight bias.

Robert Fogel, on the other hand, approached it more like a scientist. He quite correctly recognized that if you want to draw causal inferences from any historical sequence, you have to make assumptions about what would have happened if the hypothesized cause had taken on a different value. That's a counterfactual. You had this interesting tension. Many historians do still agree, in some form, with E.H. Carr. Virtually all economic historians would agree with Robert Fogel, who's one of the pivotal people in economic history; he won a Nobel Prize. But there's this very interesting tension between people who are more open or less open to thinking about counterfactuals. Why that is is something worth exploring.
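Fogel's point amounts to the potential-outcomes view of causation: the effect of a historical cause is the gap between the outcome we observed and the outcome in a counterfactual world where the cause took a different value, and that second quantity is always an assumption rather than data. A minimal illustrative sketch in Python, with entirely made-up numbers standing in for the kind of modeling Fogel did for the American economy with and without railroads:

```python
# Toy illustration of counterfactual causal inference; the numbers are
# hypothetical, not Fogel's actual estimates.
gnp_with_railroads = 100.0      # the observed world (cause present)
gnp_without_railroads = 97.0    # modeled counterfactual (cause absent)

# The "effect" of the cause is the difference between the two worlds.
# Only the first number is data; the second rests on explicit assumptions,
# which is exactly why the counterfactual is the elusive control group.
effect = gnp_with_railroads - gnp_without_railroads
print(f"Estimated part of the outcome attributable to the cause: {effect}")
```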


[31:07 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

Edge Master Class 2015: A Short Course in Superforecasting, Class III

Counterfactual History: The Elusive Control Groups in Policy Debates
Philip Tetlock
[9.1.15]

There's a picture of two people on slide seventy-two, one of whom is one of the most famous historians of the 20th century, E.H. Carr, and the other of whom is a famous economic historian at the University of Chicago, Robert Fogel. They could not have had more different attitudes toward the importance of counterfactuals in history. For E.H. Carr, counterfactuals were a pestilence: a frivolous parlor game, a methodological rattle, a sore loser's history. It was a waste of cognitive effort to think about counterfactuals. You should think about history the way it did unfold and figure out why it had to unfold the way it did—almost a prescription for hindsight bias.

Robert Fogel, on the other hand, approached it more like a scientist. He quite correctly recognized that if you want to draw causal inferences from any historical sequence, you have to make assumptions about what would have happened if the hypothesized cause had taken on a different value. That's a counterfactual. You had this interesting tension. Many historians do still agree, in some form, with E.H. Carr. Virtually all economic historians would agree with Robert Fogel, who's one of the pivotal people in economic history; he won a Nobel Prize. But there's this very interesting tension between people who are more open or less open to thinking about counterfactuals. Why that is is something worth exploring.


[46:31 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

Edge Master Class 2015: A Short Course in Superforecasting, Class II

Tournaments: Prying Open Closed Minds in Unnecessarily Polarized Debates
Philip Tetlock
[8.24.15]

Tournaments have a scientific value: they help us test a lot of psychological hypotheses about the drivers of accuracy and they help us test statistical ideas; there are a lot of ideas we can test in tournaments. Tournaments also have value inside organizations and businesses. A more accurate probability helps price options better on Wall Street, so they have value.

I wanted to focus more on what I see as the wider societal value of tournaments and the potential value of tournaments in depolarizing unnecessarily polarized policy debates. In short, making us more civilized. ...

There is a well-developed research literature on how to measure accuracy. There is not such a well-developed literature on how to measure the quality of questions. The quality of questions is going to be absolutely crucial if we want tournaments to be able to play a role in tipping the scales of plausibility in important debates, and if we want tournaments to play a role in incentivizing people to behave more reasonably in debates.


[50:04 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

Edge Master Class 2015: A Short Course in Superforecasting, Class II

Tournaments: Prying Open Closed Minds in Unnecessarily Polarized Debates
Philip Tetlock
[8.24.15]

Tournaments have a scientific value: they help us test a lot of psychological hypotheses about the drivers of accuracy and they help us test statistical ideas; there are a lot of ideas we can test in tournaments. Tournaments also have value inside organizations and businesses. A more accurate probability helps price options better on Wall Street, so they have value.

I wanted to focus more on what I see as the wider societal value of tournaments and the potential value of tournaments in depolarizing unnecessarily polarized policy debates. In short, making us more civilized. ...

There is a well-developed research literature on how to measure accuracy. There is not such a well-developed literature on how to measure the quality of questions. The quality of questions is going to be absolutely crucial if we want tournaments to be able to play a role in tipping the scales of plausibility in important debates, and if we want tournaments to play a role in incentivizing people to behave more reasonably in debates.


[47:54 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

Edge Master Class 2015: A Short Course in Superforecasting, Class I

Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
Philip Tetlock
[8.17.15]

It is as though high-status pundits have learned a valuable survival skill, and that survival skill is that they've mastered the art of appearing to go out on a limb without actually going out on a limb. They say dramatic things, but there are vague verbiage quantifiers connected to the dramatic things. It sounds as though they're saying something very compelling and riveting. There's a scenario that's been conjured up in your mind of something either very good or very bad. It's vivid, easily imaginable.

It turns out, on close inspection, they're not really saying that's going to happen. They're not specifying the conditions, a time frame, or a likelihood, so there's no way of assessing accuracy. You could say these pundits are just doing what a rational pundit would do because they know that they live in a somewhat stochastic world. They know that it's a world that is frequently going to throw surprises at them, so to maintain their credibility with their community of co-believers they need to be vague. It's an essential survival skill. There is considerable truth to that, and forecasting tournaments are a very different way of proceeding. Forecasting tournaments require people to attach explicit probabilities to well-defined outcomes in well-defined time frames so you can keep score.
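Keeping score here means a proper scoring rule; Tetlock's tournaments used a variant of the Brier score. A minimal sketch in Python, with a hypothetical question and forecasts, using the simple squared-error form for a binary question (the tournaments summed the error over all answer categories, which doubles the numbers but ranks forecasters the same way):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and what happened.

    forecast: probability assigned to the event, in [0, 1].
    outcome:  1 if the event occurred by the deadline, 0 if it did not.
    Lower is better: 0.0 is perfect, 1.0 is maximally wrong.
    """
    return (forecast - outcome) ** 2

# Hypothetical question: "Will X happen by 1 June?" The event did happen.
print(brier_score(0.70, 1))  # confident, correct forecaster -> 0.09
print(brier_score(0.50, 1))  # maximally vague "could go either way" -> 0.25
print(brier_score(0.20, 1))  # confident, wrong forecaster -> 0.64
```

The vague-verbiage pundit never supplies the forecast or the deadline, so there is nothing to plug into such a rule; the tournament format forces both onto the table.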


[39:42 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

Edge Master Class 2015: A Short Course in Superforecasting, Class I

Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
Philip Tetlock
[8.17.15]

It is as though high-status pundits have learned a valuable survival skill, and that survival skill is that they've mastered the art of appearing to go out on a limb without actually going out on a limb. They say dramatic things, but there are vague verbiage quantifiers connected to the dramatic things. It sounds as though they're saying something very compelling and riveting. There's a scenario that's been conjured up in your mind of something either very good or very bad. It's vivid, easily imaginable.

It turns out, on close inspection, they're not really saying that's going to happen. They're not specifying the conditions, a time frame, or a likelihood, so there's no way of assessing accuracy. You could say these pundits are just doing what a rational pundit would do because they know that they live in a somewhat stochastic world. They know that it's a world that is frequently going to throw surprises at them, so to maintain their credibility with their community of co-believers they need to be vague. It's an essential survival skill. There is considerable truth to that, and forecasting tournaments are a very different way of proceeding. Forecasting tournaments require people to attach explicit probabilities to well-defined outcomes in well-defined time frames so you can keep score.


[45:04 minutes]

PHILIP E. TETLOCK is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology and political science. He is co-leader of the Good Judgment Project, a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction (forthcoming, September 2015). Philip Tetlock's Edge Bio Page.


 

The Next Wave

John Markoff
[7.16.15]

This can't be the end of human evolution. We have to go someplace else.                                 

It's quite remarkable. It's moved people off personal computers. Microsoft's business, while it's a huge monopoly, has stopped growing. There was this platform change. I'm fascinated to see what the next platform is going to be. It's totally up in the air, and I think that some form of augmented reality is possible and real. Is it going to be a science-fiction utopia or a science-fiction nightmare? It's going to be a little bit of both.

JOHN MARKOFF is a Pulitzer Prize-winning journalist who covers science and technology for The New York Times. His most recent book is the forthcoming Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. John Markoff's Edge Bio Page


 

The Exquisite Role of Dark Matter

Priyamvada Natarajan
[6.10.15]

It is definitely the golden age in cosmology because of this unique confluence of ideas and instruments. We live in a very peculiar universe—one that is dominated by dark matter and dark energy—and the true nature of both of these remains elusive. Dark matter does not emit radiation at any wavelength, and its presence is inferred from its gravitational influence on the motions of stars and gas in its vicinity. Dark energy, discovered in 1998, is meanwhile believed to be powering the accelerated expansion of the universe. Despite not knowing what the dark matter particle is or what dark energy really is, we still have a very successful theory of how galaxies form and evolve in a universe with these mysterious and invisible dominant components. Technology has made it possible to test our cosmological theories at an unprecedented level. All of these experiments have delivered very exciting results, even if they're null results. For example, the LHC, with the discovery of the Higgs, has given us a lot more comfort in the standard model. The Planck and WMAP satellites, probing the leftover hiss from the Big Bang—the cosmic microwave background radiation—have shown us that our theoretical understanding of how the early fluctuations in the universe grew and formed the late universe that we see is pretty secure. Our current theory, despite the embarrassing gap of not knowing the true nature of dark matter or dark energy, has been tested to a pretty high degree of precision.

It's also consequential that the dark-matter direct-detection experiments have not found anything. That's interesting too, because it's telling us that all these experiments are reaching the limits of the sensitivity they were planned for, and they're still not finding anything. This suggests, paradoxically, that while the overall theory might be consistent with observational data, something is still fundamentally off in our understanding. The challenge in the next decade is to figure out which of the old pieces don't fit. Is there a pattern that emerges that would tell us whether what's needed is a fundamentally new theory of gravity, or a complete rethink of some aspects of particle physics? Those are the big open questions.

PRIYAMVADA NATARAJAN is a professor in the Departments of Astronomy and Physics at Yale University, whose research is focused on exotica in the universe—dark matter, dark energy, and black holes. Priyamvada Natarajan's Edge Bio Page


 

Layers Of Reality

Sean Carroll
[5.28.15]

We know there's a law of nature, the second law of thermodynamics, that says that disorderliness grows with time. Is there another law of nature that governs the complexity of what happens, one that talks about multiple layers of structure and how they interact with each other? Embarrassingly enough, we don't even know how to define this problem yet. We don't know the right quantitative description for complexity. This is very early days. This is Copernicus, not even Kepler, much less Galileo or Newton. This is guessing at the ways to think about these problems.

SEAN CARROLL is a research professor at Caltech and the author of The Particle at the End of the Universe, which won the 2013 Royal Society Winton Prize, and From Eternity to Here: The Quest for the Ultimate Theory of Time. He has recently been awarded a Guggenheim Fellowship, the Gemant Award from the American Institute of Physics, and the Emperor Has No Clothes Award from the Freedom From Religion Foundation. Sean Carroll's Edge Bio Page


 
