This is another example where AI—in this case, machine-learning methods—intersects with ethical and civic questions in an ultimately promising and potentially productive way. As a society we hold these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they’re deliberately vague. That deliberate flexibility and ambiguity are what allow such principles to function as a living document that stays relevant. But here we are in a world where we have to ask of some machine-learning model: is this racially fair? We have to define these terms computationally, or numerically.
It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
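One way to see what "getting precise" might look like is to compute the error rates the passage mentions, group by group. The sketch below is purely illustrative: the data, group labels, and the choice of equalized error rates as the fairness criterion are all assumptions for the sake of the example, not a settled definition.

```python
# Hypothetical sketch: making "fairness" computationally explicit by
# comparing false positive and false negative rates across two groups.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Toy data: true outcomes and model predictions, split by a protected attribute.
group_a = ([1, 0, 1, 0, 0, 1], [1, 1, 1, 0, 0, 0])
group_b = ([1, 0, 0, 1, 0, 0], [1, 0, 1, 1, 1, 0])

fpr_a, fnr_a = error_rates(*group_a)
fpr_b, fnr_b = error_rates(*group_b)

# One candidate definition ("equalized odds") demands both gaps be near zero;
# deciding which gaps matter, and how large is tolerable, is the civic question.
print(f"FPR gap: {abs(fpr_a - fpr_b):.2f}, FNR gap: {abs(fnr_a - fnr_b):.2f}")
```

Notice that even this toy model cannot close both gaps at once without retraining under constraints, which is exactly the tradeoff the passage describes.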
BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page
Scholars like Kahneman, Thaler, and folks who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work has this important window into where these glitches come from. We find that capuchin monkeys have the same glitches we've seen in humans. We've seen the standard classic economic biases that Kahneman and Tversky found in humans in capuchin monkeys, things like loss aversion and reference dependence. They have those biases in spades.
LAURIE R. SANTOS is a professor of psychology at Yale University and the director of its Comparative Cognition Laboratory. Laurie Santos's Edge Bio Page
Why is it that we care about other people? Why do we have those feelings? Also, at a cognitive level, how is that implemented? Another way of asking this is: Are we predisposed to be selfish? Do we only get ourselves to cooperate and work for the greater good by exerting self-control and rational deliberation, overriding those selfish impulses? Or are we predisposed toward cooperating, so that in situations where cooperation doesn't actually pay, stopping to deliberate rationally leads us to be selfish, overriding the impulse to be a good person and help other people?
DAVID RAND is an associate professor of psychology, economics, and management at Yale University, and the director of Yale University’s Human Cooperation Laboratory. David Rand's Edge Bio Page
“Imagine a painter who could, like Vermeer, capture the quality of light that a camera can, but with the color of paints.” — Kevin Kelly
Phaidon has just published Plant: Exploring the Botanical World, a visually stunning survey celebrating “the most beautiful and pioneering botanical images ever” from around the world across all media—from murals in ancient Greece to a Napoleonic-era rose print and cutting-edge scans. Included are botanical works by Carl Linnaeus, Leonardo da Vinci, Pierre-Joseph Redouté, Charles Darwin, Emily Dickinson, van Gogh, Georgia O’Keeffe, Ellsworth Kelly, Robert Mapplethorpe, and Edge co-founder and resident artist, Katinka Matson.
“This huge canvas by New York-based artist Katinka Matson uses magnification to emphasize the spider-like forms of petals of the spider chrysanthemum (Chrysanthemum morifolium). At the start of the 21st century Matson developed a new way of portraying flowers by using a flatbed scanner, Adobe Photoshop and an ink-jet printer. Slowly scanning the flowers captures their exact appearance, without the distortion created by a single-lens photograph.” —The Guardian

Her work has been featured on Edge since 2002.
A new way of thinking came about in the early '80s, when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.
The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed and compute for you the revised probabilities warranted by the new evidence.
It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.
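The "shuffling" of belief on new evidence can be illustrated at its smallest scale: a single cause-effect link updated by Bayes' rule. This is a toy two-node sketch, not the belief-propagation machinery of a full Bayesian network, and every probability below is a hypothetical number chosen for illustration.

```python
# Toy illustration: a two-node network Disease -> Symptom.
# The expert supplies local chunks of probabilistic knowledge;
# observing evidence revises the belief in the proposition.

# Local chunks, as an expert might state them (hypothetical values).
p_disease = 0.01                   # prior P(disease)
p_symptom_if_disease = 0.90        # P(symptom | disease)
p_symptom_if_healthy = 0.05        # P(symptom | no disease)

def posterior(prior, likelihood_pos, likelihood_neg):
    """Revised P(disease | symptom observed), via Bayes' rule."""
    evidence = likelihood_pos * prior + likelihood_neg * (1 - prior)
    return likelihood_pos * prior / evidence

# New evidence arrives: the symptom is observed.
belief = posterior(p_disease, p_symptom_if_disease, p_symptom_if_healthy)
print(f"P(disease | symptom) = {belief:.3f}")  # belief rises from 0.01 to ~0.15
```

In a real network the same revision propagates through many such local links at once; the point here is only that the expert specifies local conditional probabilities, and the engine computes the revised global beliefs.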
JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.
Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized.
He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference. He is a winner of the ACM A.M. Turing Award. Judea Pearl's Edge Bio Page
One of the things that has been of particular interest to me recently is how you get the connectivity amongst all of these different constituents in a city. We know that we have high-ranking elites, leaders who promote and organize the development of monumental architecture. We also know that we have vast numbers of ordinary immigrants who are coming in to take advantage of all the employment, education, and marketing and entrepreneurial opportunities of urban life.
Then you have that physical space that becomes the city. What is it that links all of these physical places together? It’s infrastructure. Infrastructure is one of the hottest topics in anthropology right now, in addition to being a hot topic with urban planners. We realize that infrastructure is not just a physical thing; it’s a social thing. You didn’t have infrastructure before cities because you don’t need a superhighway in a village. You don’t need a giant water pipe in a village because everybody just uses a bucket to get their own water. You don’t need to make a road because everyone just walks on whatever pathway they make for themselves. You don’t need a sewer system because everyone just throws their garbage out the door.
MONICA SMITH is a professor of anthropology at the University of California, Los Angeles. She holds the Navin and Pratima Doshi Chair in Indian Studies and serves as the director of the South Asian Archaeology Laboratory in the Cotsen Institute of Archaeology. Monica Smith's Edge Bio Page