TECHNOLOGY

How Should a Society Be?

https://vimeo.com/190617534

[12.1.16]

This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they're deliberately vague. That deliberate flexibility and ambiguity is what allows them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model: is this racially fair? We have to define these terms computationally, or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
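The competing definitions mentioned above—equal false positive rates versus equal false negative rates—can be made numerically explicit in a few lines. This is only an illustrative sketch with invented predictions for two hypothetical groups, not any specific proposed standard:

```python
# Sketch: making "fairness" explicit as error rates per protected group.
# All data below is invented purely for illustration.

def rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for one group."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical model outcomes for two groups (1 = positive outcome).
group_a = ([1, 0, 1, 0, 0, 1], [1, 0, 0, 1, 0, 1])
group_b = ([1, 1, 0, 0, 1, 0], [0, 1, 1, 1, 1, 0])

fpr_a, fnr_a = rates(*group_a)
fpr_b, fnr_b = rates(*group_b)

# An "equalized error rate" criterion would demand these match across groups;
# here the false positive rates differ, so that criterion is violated.
print(fpr_a, fnr_a)
print(fpr_b, fnr_b)
```

In this toy data the two groups have the same false negative rate but different false positive rates, which is exactly the kind of tradeoff the text says a polis would have to deliberate about: satisfying one equality constraint does not automatically satisfy the other.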

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page

Engines of Evidence

https://vimeo.com/181937931

[10.24.16]

A new way of thinking came about in the early '80s when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.

The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed and compute for you the revised probabilities warranted by the new evidence.

It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.         
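The evidence-updating described here can be sketched for the smallest possible "network": a single disease node with one observed symptom. The local probabilistic chunks are the prior and the two conditional probabilities; all numbers are hypothetical, chosen only to show the mechanics of revising belief by Bayes' rule:

```python
# Minimal illustration of Bayesian evidence updating, in the spirit of a
# two-node network (Disease -> Symptom). Probabilities are hypothetical.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Revised belief P(H | E), computed from local 'chunks' via Bayes' rule."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# Local chunks: P(disease) = 0.01, P(symptom | disease) = 0.9,
# P(symptom | no disease) = 0.05. Observe the symptom:
belief = posterior(0.01, 0.9, 0.05)
print(round(belief, 4))  # -> 0.1538, the revised probability of disease
```

Even this tiny example shows the engine's character: a rare disease stays fairly unlikely after one positive symptom, because the revised belief weighs the evidence against the prior rather than replacing it. A full Bayesian network chains many such local updates across a graph of variables.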

JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.

Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized. 

He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference. He is the winner of the ACM A.M. Turing Award. Judea Pearl's Edge Bio Page

Quantum Hanky-Panky

https://vimeo.com/155828770

Thinking about the future of quantum computing, I have no idea if we're going to have a quantum computer in every smartphone, or if we're going to have quantum apps, or "quapps," that would allow us to communicate securely and find funky stuff using our quantum computers; that's a tall order. It's very likely that we're going to have quantum microprocessors in our computers and smartphones that are performing specific tasks.

Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?

[5.4.16]

What we need to do in artificial intelligence is turn back to psychology. Brute force is great; we're using it in a lot of ways, like speech recognition, license plate recognition, and for categorization, but there are still some things that people do a lot better. We should be studying human beings to understand how they do it better.

People are still much better at understanding sentences, paragraphs, books, and discourse where there's connected prose. It's one thing to do a keyword search. You can find any sentence you want that's out there on the web by just having the right keywords, but if you want a system that could summarize an article for you in a way that you trust, we're nowhere near that. The closest thing we have to that might be Google Translate, which can translate your news story into another language, but not at a level that you trust. Again, trust is a big part of it. You would never put a legal document into Google Translate and think that the answer is correct.

GARY MARCUS is CEO and founder, Geometric Intelligence; professor of psychology, New York University; author, Guitar Zero: The New Musician and the Science of Learning. Gary Marcus's Edge Bio Page

Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?

https://vimeo.com/156849301

AI & The Future Of Civilization

https://vimeo.com/153702764

[3.1.16]

What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute; that's what our civilization contributes. The execution of those goals is what we can increasingly automate, and we've been automating it for thousands of years. We will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to go, essentially, from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, we've got these great AIs and they're able to execute goals, how do we tell them what to do?...

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram's Edge Bio Page

THE REALITY CLUB: Nicholas Carr, Ed Regis

ED. NOTE: From an unsolicited email: "For me, watching the video in small bites gave me the same thrill as reading James Joyce's Ulysses. I looked at the screen and clapped aloud."
