This is another example where AI, in this case machine-learning methods, intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they're deliberately vague. That deliberate flexibility and ambiguity are what allow these principles to function as a living document that stays relevant. But here we are in a world where we have to ask of some machine-learning model: is this racially fair? We have to define these terms computationally, or numerically.
It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
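The tradeoffs Christian describes can be made concrete. The sketch below (not from the text; all data and the `error_rates` helper are illustrative) computes false positive and false negative rates for a hypothetical classifier across two groups, one possible way to render "fairness" numerically explicit:

```python
# A minimal sketch of one contested fairness criterion: comparing false
# positive and false negative rates across two groups. All labels and
# predictions here are made-up toy data.

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Toy ground-truth labels and model predictions for two groups.
group_a_true = [1, 0, 1, 0, 0, 1]
group_a_pred = [1, 0, 0, 1, 0, 1]
group_b_true = [1, 1, 0, 0, 1, 0]
group_b_pred = [1, 0, 0, 1, 1, 0]

fpr_a, fnr_a = error_rates(group_a_true, group_a_pred)
fpr_b, fnr_b = error_rates(group_b_true, group_b_pred)

# "Equal false positive rate" fairness would demand these gaps be zero.
print(f"FPR gap: {abs(fpr_a - fpr_b):.2f}, FNR gap: {abs(fnr_a - fnr_b):.2f}")
```

The civic question is which gap, if any, society decides must be zero; it is known that several such criteria cannot all be satisfied at once, which is exactly why the definitional conversation matters.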
BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page
A new way of thinking came about in the early '80s, when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.
The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed, and compute for you the revised probabilities warranted by the new evidence.
It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.
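Pearl's "engine for evidence" can be illustrated with the smallest possible network: a two-node chain, Disease → Symptom. The numbers below are invented for illustration; observing the symptom "shuffles things around" via Bayes' rule to yield the revised belief:

```python
# A minimal two-node Bayesian network sketch (Disease -> Symptom) with
# illustrative, made-up probabilities. Observing the symptom updates the
# prior belief in the disease to a posterior, per Bayes' rule.

p_disease = 0.01                 # prior: P(disease)
p_symptom_given_disease = 0.9    # local chunk: P(symptom | disease)
p_symptom_given_healthy = 0.05   # local chunk: P(symptom | no disease)

# Evidence arrives: the symptom is observed.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")
```

Even with a symptom that is ninety percent likely under the disease, the posterior stays modest here because the prior is low; real Bayesian networks scale this same local-chunk computation to many interconnected variables.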
JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.
Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized.
He is the author of Heuristics, Probabilistic Reasoning in Intelligent Systems, and Causality: Models, Reasoning, and Inference, and a winner of the Alan Turing Award. Judea Pearl's Edge Bio Page
In artificial intelligence we need to turn back to psychology. Brute force is great. We're using it in a lot of ways like in speech recognition, license plate recognition, and for categorization, but there are still some things that people do a lot better. We should be studying human beings to understand how they do it better.
People are still much better at understanding sentences, paragraphs, books, and discourse, where there's connected prose. It's one thing to do a keyword search. You can find any sentence you want that's out there on the web by just having the right keywords, but if you want a system that could summarize an article for you in a way that you trust, we're nowhere near that. The closest thing we have to that might be Google Translate, which can translate your news story into another language, but not at a level that you trust. Again, trust is a big part of it. You would never put a legal document into Google Translate and think that the answer is correct.
GARY MARCUS, CEO and founder, Geometric Intelligence; professor of psychology, New York University; author, Guitar Zero: The New Musician and the Science of Learning. Gary Marcus's Edge Bio Page
The question is, what makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.
The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute, that's what our civilization contributes—execution of those goals; that's what we can increasingly automate. We've been automating it for thousands of years. We will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.
There are many questions that come from this. For example, we've got these great AIs and they're able to execute goals, how do we tell them what to do?...
STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram's Edge Bio Page
This can't be the end of human evolution. We have to go someplace else.
It's quite remarkable. It's moved people off of personal computers. Microsoft's business, while it's a huge monopoly, has stopped growing. There was this platform change. I'm fascinated to see what the next platform is going to be. It's totally up in the air, and I think that some form of augmented reality is possible and real. Is it going to be a science-fiction utopia or a science-fiction nightmare? It's going to be a little bit of both.
JOHN MARKOFF is a Pulitzer Prize-winning journalist who covers science and technology for The New York Times. His most recent book is the forthcoming Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. John Markoff's Edge Bio Page
The reason I'm engaged in trying to lower existential risks has to do with the fact that I'm a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about—in the palette of actions that you have—what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn't make a significant difference in these areas.
JAAN TALLINN is a co-founder of The Centre for the Study of Existential Risk at University of Cambridge, UK as well as The Future of Life Institute in Cambridge, MA. He is also a founding engineer of Kazaa and Skype. Jaan Tallinn's Edge Bio Page
...Today, you can send a design to a fab lab and you need ten different machines to turn the data into something. Twenty years from now, all of that will be in one machine that fits in your pocket. This is the sense in which it doesn't matter. You can do it today. How it works today isn't how it's going to work in the future but you don't need to wait twenty years for it. Anybody can make almost anything almost anywhere.
...Finally, when I could own all these machines I got that the Renaissance was when the liberal arts emerged—liberal for liberation, humanism, the trivium and the quadrivium—and those were a path to liberation, they were the means of expression. That's the moment when art diverged from artisans. And there were the illiberal arts that were for commercial gain. ... We've been living with this notion that making stuff is an illiberal art for commercial gain and it's not part of means of expression. But, in fact, today, 3D printing, micromachining, and microcontroller programming are as expressive as painting paintings or writing sonnets but they're not means of expression from the Renaissance. We can finally fix that boundary between art and artisans.
...I'm happy to take credit for saying computer science is one of the worst things to happen to computers or to science because, unlike physics, it has arbitrarily segregated the notion that computing happens in an alien world.
NEIL GERSHENFELD is a Physicist and the Director of MIT's Center for Bits and Atoms. He is the author of FAB. Neil Gershenfeld's Edge Bio Page
The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even before. There's always been a question about whether a program is something alive or not, since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—the most wealthy, prolific, and influential subculture in the technical world—that for a long time has promoted not only the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us. ...That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."
In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. ... That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allows the data schemes to operate, contributing to the fortunes of whoever runs the computers. You're saying, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," and, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.
JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future? Jaron Lanier's Edge Bio Page
KEVIN KELLY is Senior Maverick at Wired magazine. He helped launch Wired in 1993, and served as its Executive Editor until January 1999. He is currently editor and publisher of the popular Cool Tools, True Film, and Street Use websites. His most recent books are Cool Tools, and What Technology Wants. Kevin Kelly's Edge Bio Page
Thinking beyond me, beyond our individual silos: achieving prosperity and development in a place like Sierra Leone does not involve giving free devices to victims, which leads to low self-efficacy and dependence on external actors. We need to make new minds. That involves giving young people the platform to innovate, to learn from making, and to solve very tangible problems within their communities.
DAVID MOININA SENGEH is a doctoral student at the MIT Media Lab, and a researcher in the Lab’s Biomechatronics group. David Moinina Sengeh's Edge Bio Page