Edge.org
To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.
Published on Edge.org (https://www.edge.org)


Video : Master Classes

Edge Master Class 2015: A Short Course in Superforecasting, Class II, Part 2

Tournaments have scientific value: they help us test a lot of psychological hypotheses about the drivers of accuracy, and they help us test statistical ideas; there are many ideas we can test in tournaments. Tournaments also have value inside organizations and businesses. A more accurate probability helps to price options better on Wall Street, so they have commercial value. 

I wanted to focus more on what I see as the wider societal value of tournaments and the potential value of tournaments in depolarizing unnecessarily polarizing policy debates. In short, making us more civilized. ...

There is a well-developed research literature on how to measure accuracy. There is no comparably well-developed literature on how to measure the quality of questions. Question quality is going to be absolutely crucial if we want tournaments to play a role in tipping the scales of plausibility in important debates, and if we want tournaments to play a role in incentivizing people to behave more reasonably in those debates. 
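The "well-developed literature on how to measure accuracy" referred to here centers on proper scoring rules; the standard instrument in Tetlock's forecasting tournaments is the Brier score. As a minimal illustrative sketch (the function name and example numbers are ours, not from the transcript), for binary questions it is just the mean squared error between forecast probabilities and 0/1 outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts against binary outcomes.

    0.0 is a perfect score; an uninformative flat 50% forecast earns 0.25;
    lower is better.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical forecaster who said 80% on two events that occurred
# and 30% on one that did not:
score = brier_score([0.8, 0.8, 0.3], [1, 1, 0])
print(round(score, 4))  # 0.0567 -- well below the 0.25 of a coin-flip forecaster
```

Because the Brier score is a proper scoring rule, a forecaster minimizes their expected penalty by reporting their honest probability, which is what gives tournament scores their incentive-shaping force.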


[47:54 minutes]


PHILIP E. TETLOCK [2] is the Annenberg University Professor at the University of Pennsylvania, with appointments in Wharton, psychology, and political science. He is co-leader of the Good Judgment Project [3], a multi-year forecasting study, the author of Expert Political Judgment and (with Aaron Belkin) Counterfactual Thought Experiments in World Politics, and co-author (with Dan Gardner) of Superforecasting: The Art & Science of Prediction [4] (forthcoming, September 2015). Philip Tetlock's Edge Bio Page [2].

  • John Brockman, Editor and Publisher
  • Russell Weinberger, Associate Publisher
  • Nina Stegeman, Associate Editor
 
  • Contact Info: [email protected]
  • In the News
  • Get Edge.org by email
 
Edge.org is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.
Copyright © 2021 by Edge Foundation, Inc. All Rights Reserved.

Links:
[1] https://www.edge.org/video/edge-master-class-2015-a-short-course-in-superforecasting-class-ii-part-2
[2] https://edge.org/memberbio/philip_tetlock
[3] http://www.goodjudgmentproject.com/
[4] http://www.amazon.com/Superforecasting-The-Art-Science-Prediction/dp/0804136696