Coming very soon is augmented reality technology, where you see not only the physical world but also virtual objects and entities perceived in the midst of it. We’ll put on augmented reality glasses and we’ll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that’s John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.
At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?
DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University. He is also Distinguished Professor of Philosophy at the Australian National University. David Chalmers's Edge Bio Page
Richard Dawkins' “meme” became a meme, known far beyond the scientific conversation in which it was coined. It’s one of a handful of scientific ideas that have entered the general culture, helping to clarify and inspire.
The Edge 20th Anniversary Annual Question
"WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?"
Of course, not everyone likes the idea of spreading scientific understanding. Remember what the Bishop of Birmingham’s wife is reputed to have said about Darwin’s claim that human beings are descended from monkeys: "My dear, let us hope it is not true, but, if it is true, let us hope it will not become generally known."
Of all the scientific terms or concepts that ought to be more widely known to help to clarify and inspire science-minded thinking in the general culture, none are more important than “science” itself.
Many people, even many scientists, have traditionally had a narrow view of science as controlled, replicated experiments performed in the laboratory—and as consisting quintessentially of physics, chemistry, and molecular biology. The essence of science is conveyed by its Latin etymology: scientia, meaning knowledge. The scientific method is simply that body of practices best suited for obtaining reliable knowledge. The practices vary among fields: the controlled laboratory experiment is possible in molecular biology, physics, and chemistry, but it is either impossible, immoral, or illegal in many other fields customarily considered sciences, including all of the historical sciences: astronomy, epidemiology, evolutionary biology, most of the earth sciences, and paleontology. If the scientific method can be defined as those practices best suited for obtaining knowledge in a particular field, then science itself is simply the body of knowledge obtained by those practices.
Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. Not just the broad observation-based and statistical methods of the historical sciences but also detailed techniques of the conventional sciences (such as genetics and molecular biology and animal behavior) are proving essential for tackling problems in the social sciences. Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.
It is in this spirit of scientia that Edge, on the occasion of its 20th anniversary, is pleased to present the Edge Annual Question 2017. Happy New Year!
—John Brockman, Editor, January 1, 2017
[206 contributors; 143,000 words:] Scott Aaronson, Anthony Aguirre, Adam Alter, Ross Anderson, Samuel Arbesman, Simon Baron-Cohen, Lisa Feldman Barrett, Thomas Bass, Nicolas Baumard, Gregory Benford, Jeremy Bernstein, Laura Betzig, Susan Blackmore, Giulio Boccaletti, Ian Bogost, Joshua Bongard, Raphael Bousso, Stewart Brand, David M. Buss, Jimena Canales, Nicholas Carr, Sean Carroll, Leo Chalupa, Ashvin Chhabra, Jaeweon Cho, Nicholas A. Christakis, Brian Christian, David Christian, George Church, Andy Clark, Gregory Cochran, Jerry A. Coyne, Helena Cronin, David Dalrymple, Richard Dawkins, Aubrey de Grey, Luca De Biase, Sarah Demers, Daniel C. Dennett, Emanuel Derman, David DeSteno, Diana Deutsch, Keith Devlin, Jared Diamond, Rolf Dobelli, Scott Draves, George Dyson, Nick Enfield, Brian Eno, Juan Enriquez, Nancy Etcoff, Dylan Evans, Daniel Everett, Christine Finn, Stuart Firestein, Helen Fisher, Tecumseh Fitch, Jessica Flack, Steve Fuller, Howard Gardner, Michael Gazzaniga, James Geary, Amanda Gefter, Neil Gershenfeld, Gerd Gigerenzer, Bruno Giussani, Nigel Goldenfeld, Dan Goleman, Beatrice Golomb, Alison Gopnik, Kurt Gray, Tom Griffiths, June Gruber, Hans Halvorson, Sam Harris, Cesar Hidalgo, Roger Highfield, W. Daniel Hillis, Michael Hochberg, Donald Hoffman, Jim Holt, Bruce Hood, Daniel Hook, John Horgan, Sabine Hossenfelder, Nicholas Humphrey, Joichi Ito, Nina Jablonski, Jennifer Jacquet, Matthew O. Jackson, Kate Jeffery, Koo Jeong A, Gordon Kane, Stuart Kauffman, Kevin Kelly, Katherine Kinzler, Gary Klein, Jon Kleinberg, Brian Knutson, Bart Kosko, Stephen Kosslyn, Kai Krause, Lawrence Krauss, Coco Krumme, Robert Kurzban, Peter Lee, Cristine Legare, Martin Lercher, Margaret Levi, Janna Levin, Daniel Lieberman, Matthew Lieberman, Andre Linde, Antony Garrett Lisi, Mario Livio, Seth Lloyd, Tania Lombrozo, Jonathan B. Losos, Ziyad Marar, John Markoff, Chiara Marletto, Barnaby Marsh, Abigail Marsh, Ursula Martin, John C. 
Mather, Ian McEwan, Hugo Mercier, Yuri Milner, Read Montague, Richard Muller, Priyamvada Natarajan, John Naughton, Rebecca Newberger Goldstein, Richard Nisbett, Tor Nørretranders, Michael Norton, Peter Norvig, Hans Ulrich Obrist, James J. O'Donnell, Steve Omohundro, Bruce Parker, Irene Pepperberg, Clifford Pickover, Steven Pinker, David Pizarro, Robert Plomin, Ernst Pöppel, William Poundstone, Robert Provine, Richard Prum, Matthew Putman, Steven Quartz, David Queller, Sheizaf Rafaeli, Lisa Randall, Abbas Raza, Azra Raza, Martin Rees, Diana Reiss, Siobhan Roberts, Daniel Rockmore, Andrés Roemer, Phil Rosenzweig, Carlo Rovelli, David Rowan, Douglas Rushkoff, Paul Saffo, Eduardo Salcedo-Albarán, Buddhini Samarasinghe, Robert Sapolsky, Roger Schank, Maximilian Schich, Laurence C. Smith, Simone Schnall, Bruce Schneier, Oliver Scott Curry, Gino Segre, Charles Seife, Terrence J. Sejnowski, Eldar Shafir, Michael Shermer, Seth Shostak, Gerald Smallberg, Lee Smolin, Dan Sperber, Paul Steinhardt, Victoria Stodden, Rory Sutherland, Melanie Swan, Tim Taylor, Max Tegmark, Richard Thaler, Frank Tipler, John Tooby, Eric Topol, Barbara Tversky, Athena Vouloumanos, Adam Waytz, Eric Weinstein, Linda Wilbrecht, Frank Wilczek, Jason Wilkes, Elizabeth Wrigley-Field, Victoria Wyatt, Itai Yanai, Dustin Yellin
Pre-order. Release date: February 7, 2017
140,000 words • 584 pages
This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we hold these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. That deliberate flexibility and ambiguity are what allow such principles to function as a living document that stays relevant. But here we are in a world where we have to ask of some machine-learning model: is it racially fair? We have to define these terms computationally, or numerically.
It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
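The tradeoff Christian describes can be made concrete. Below is a minimal illustrative sketch, in Python, of the two fairness metrics he names: the false positive rate and the false negative rate of a classifier, computed separately for two groups. The labels, predictions, and the helper name `error_rates` are all hypothetical; the point is only that the two rates are distinct quantities, so a model can equalize one across groups while the other diverges.

```python
# Sketch: comparing per-group error rates of a binary classifier.
# All data here is made up for illustration.

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for 0/1 labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical outcomes and predictions for two protected groups, A and B.
group_a_true = [1, 1, 0, 0, 0, 1]
group_a_pred = [1, 0, 0, 1, 0, 1]
group_b_true = [1, 0, 0, 0, 1, 1]
group_b_pred = [1, 1, 1, 0, 0, 1]

fpr_a, fnr_a = error_rates(group_a_true, group_a_pred)
fpr_b, fnr_b = error_rates(group_b_true, group_b_pred)
print(f"Group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")
print(f"Group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")
```

In this toy data the two groups have the same false negative rate but different false positive rates, so a "fairness" criterion stated in terms of one metric is satisfied while one stated in terms of the other is not. Which rate to equalize, and at what cost, is exactly the civic question left open above.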
BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page
Scholars like Kahneman, Thaler, and others who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work offers an important window into where these glitches come from. We find that capuchin monkeys show the same glitches we've seen in humans: the standard classic economic biases that Kahneman and Tversky found, things like loss aversion and reference dependence. They have those biases in spades.
LAURIE R. SANTOS is a professor of psychology at Yale University and the director of its Comparative Cognition Laboratory. Laurie Santos's Edge Bio Page
Why is it that we care about other people? Why do we have those feelings? Also, at a cognitive level, how is that implemented? Another way of asking this is: Are we predisposed to be selfish? Do we only get ourselves to cooperate and work for the greater good by exerting self-control and rational deliberation, overriding those selfish impulses? Or are we predisposed toward cooperating, so that in situations where cooperation doesn't actually pay, stopping to think—rational deliberation—leads us to be selfish by overriding the impulse to be a good person and help other people?
DAVID RAND is an associate professor of psychology, economics, and management at Yale University, and the director of Yale University’s Human Cooperation Laboratory. David Rand's Edge Bio Page
“Imagine a painter who could, like Vermeer, capture the quality of light that a camera can, but with the color of paints.” — Kevin Kelly
Phaidon has just published Plant: Exploring the Botanical World, a visually stunning survey celebrating “the most beautiful and pioneering botanical images ever” from around the world across all media—from murals in ancient Greece to a Napoleonic-era rose print and cutting-edge scans. Included are botanical works by Carl Linnaeus, Leonardo da Vinci, Pierre-Joseph Redouté, Charles Darwin, Emily Dickinson, van Gogh, Georgia O’Keeffe, Ellsworth Kelly, Robert Mapplethorpe, and Edge co-founder and resident artist, Katinka Matson.
“This huge canvas by New York-based artist Katinka Matson uses magnification to emphasize the spider-like forms of petals of the spider chrysanthemum (Chrysanthemum morifolium). At the start of the 21st century Matson developed a new way of portraying flowers by using a flatbed scanner, Adobe Photoshop and an ink-jet printer. Slowly scanning the flowers captures their exact appearance, without the distortion created by a single-lens photograph.” —The Guardian
Her work has been featured on Edge since 2002.
A new way of thinking came about in the early '80s when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.
The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed and compute for you the revised probabilities warranted by the new evidence.
It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.
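The "engine for evidence" Pearl describes can be illustrated at its smallest scale. Below is a toy two-node model (Disease → Symptom) in Python: the expert supplies local probabilistic chunks (a prior and two conditional probabilities, all made up here for illustration), and when the evidence "symptom observed" arrives, the belief in the disease is revised by Bayes' rule. A real Bayesian network propagates such updates through many interconnected nodes; this sketch shows only the single-edge case.

```python
# Toy evidence update: one edge of a Bayesian network (Disease -> Symptom).
# The probabilities are invented for illustration.

p_disease = 0.01              # expert's prior: P(disease)
p_symptom_given_d = 0.90      # P(symptom | disease)
p_symptom_given_not_d = 0.05  # P(symptom | no disease)

# Evidence arrives: the symptom is observed.
# Revise the belief via Bayes' rule: P(d | s) = P(s | d) P(d) / P(s).
p_symptom = (p_symptom_given_d * p_disease
             + p_symptom_given_not_d * (1 - p_disease))
posterior = p_symptom_given_d * p_disease / p_symptom

print(f"P(disease | symptom) = {posterior:.3f}")
```

Even this tiny example shows the character of the engine: the rare disease (prior 1%) becomes roughly fifteen times more probable once the symptom is seen, without anyone having written a procedural rule for that inference—the revised belief falls out of the local chunks of probabilistic knowledge.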
JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.
Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized.
He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference. He is the winner of the ACM A.M. Turing Award. Judea Pearl's Edge Bio Page