Closing the Loop

[3.7.17]

Closing the loop is a phrase used in robotics. Open-loop systems are when you take an action and you can't measure the results—there's no feedback. Closed-loop systems are when you take an action, you measure the results, and you change your action accordingly. Systems with closed loops have feedback loops; they self-adjust and quickly stabilize in optimal conditions. Systems with open loops overshoot; they miss the target entirely.
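The contrast Anderson describes can be shown in a toy simulation. The numbers and the simple proportional correction below are illustrative assumptions, not taken from any real control system:

```python
# Toy comparison of open-loop vs. closed-loop control.
# Goal: drive a value to a target of 100.

TARGET = 100.0

def open_loop():
    # Open loop: fire a fixed action once, with no measurement and no
    # correction. If the pre-computed action is miscalibrated, the
    # error persists forever.
    value = 0.0
    value += 120.0  # miscalibrated command overshoots the target
    return value

def closed_loop(steps=50, gain=0.5):
    # Closed loop: measure the error at each step and act in proportion
    # to it, so the system self-adjusts toward the target.
    value = 0.0
    for _ in range(steps):
        error = TARGET - value   # feedback: measure the result
        value += gain * error    # change the next action accordingly
    return value

print(abs(open_loop() - TARGET))    # stays far off target
print(abs(closed_loop() - TARGET))  # shrinks toward zero
```

The open-loop run keeps its initial 20-unit overshoot; the closed-loop run halves its error every step and stabilizes at the target.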

CHRIS ANDERSON is the CEO of 3D Robotics and founder of DIY Drones. He is the former editor-in-chief of Wired magazine. Chris Anderson's Edge Bio Page

 


 

The Function of Reason

[2.22.17]

Contrary to the standard view of reason as a capacity that enhances the individual in his or her cognitive capacities—the standard image is of Rodin’s "Thinker," thinking on his own and discovering new ideas—what we say now is that the basic functions of reason are social. They have to do with the fact that we interact with each other’s bodies and with each other’s minds. And to interact with others’ minds is to be able to represent a representation that others have, and to have them represent our representations, and also to act on the representations of others and, in some cases, let others act on our own representations.

The kind of achievements that are often cited as proof that reason is so superior, like scientific achievements, are not achievements of individual minds, not achievements of individual reason; they are collective achievements—typically a product of social interaction over generations. They are social, cultural products, where many minds had to interact in complex ways and progressively explore many directions, hitting on some not because those who hit on them were more reasonable than others, but because they were luckier in what they hit. And then they used their reason to defend what they hit on by luck. Reason is a remarkable cognitive capacity, as are so many cognitive capacities in humans and animals, but it’s not a superpower.

DAN SPERBER is a Paris-based social and cognitive scientist. He holds an emeritus research professorship at the French Centre National de la Recherche Scientifique (CNRS), Paris, and he is currently at Central European University, Budapest. He is the creator, with Deirdre Wilson, of "Relevance Theory." Dan Sperber's Edge Bio Page


 

Defining Intelligence

[2.7.17]

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite—it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set, one program, or some small equivalence class of programs, does better than all the others; that’s the program that we should aim for.

That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes." 
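Russell's idea can be caricatured in a few lines: enumerate a finite set of candidate programs for a time-limited machine and pick the best one that fits the budget. The candidate names, step counts, and scoring rule below are hypothetical illustrations, not part of his formal definition:

```python
# Hypothetical sketch of bounded optimality: among a finite set of
# candidate programs for a machine with a fixed computation budget,
# select the one with the best score. All numbers are made up.

# Each candidate: (name, compute_steps_needed, answer_quality)
CANDIDATES = [
    ("exhaustive_search", 10_000, 1.00),  # perfect answer, too slow
    ("depth_limited",        500, 0.90),
    ("greedy_heuristic",      50, 0.70),
]

BUDGET = 1_000  # computation steps available before the world changes

def score(steps_needed, quality, budget):
    # A program that can't finish before the deadline is worthless here:
    # the machine "can only do a certain amount of computation before
    # the world changes."
    return quality if steps_needed <= budget else 0.0

def bounded_optimal(candidates, budget):
    # The bounded optimal program for this machine and this
    # environment: the best scorer in the finite set.
    return max(candidates, key=lambda c: score(c[1], c[2], budget))

print(bounded_optimal(CANDIDATES, BUDGET)[0])  # depth_limited
```

The exhaustive searcher would win on an infinitely fast machine, but under the budget the bounded optimal choice is the depth-limited program.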

STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach. Stuart Russell's Edge Bio Page

 


 

The Mind Bleeds Into the World

[1.24.17]

Coming very soon is going to be augmented reality technology, where you see the physical world, but also virtual objects and entities that you perceive in the middle of them. We’ll put on augmented reality glasses and we’ll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that’s John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.                 

At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?

DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University, and also Distinguished Professor of Philosophy at the Australian National University. David Chalmers's Edge Bio Page


 

How Should a Society Be?

[12.1.16]

This is another example where AI, in this case machine-learning methods, intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow these values to be a living document that stays relevant. But here we are in this world where we have to ask of some machine-learning model: Is this racially fair? We have to define these terms computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
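The two candidate definitions Christian names, equal false positive rates versus equal false negative rates, can be computed directly. The groups and outcomes below are fabricated for illustration; they show how a single model can fail both criteria at once:

```python
# Illustrative check of two group-fairness criteria: equal false
# positive rates vs. equal false negative rates. Data is made up.

def rates(y_true, y_pred):
    # False positive rate: fraction of true negatives flagged positive.
    # False negative rate: fraction of true positives missed.
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Two hypothetical protected groups scored by the same model.
group_a_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_a_pred = [0, 0, 0, 1, 1, 1, 1, 0]
group_b_true = [0, 0, 0, 0, 1, 1, 1, 1]
group_b_pred = [0, 1, 1, 0, 1, 1, 1, 1]

fpr_a, fnr_a = rates(group_a_true, group_a_pred)
fpr_b, fnr_b = rates(group_b_true, group_b_pred)
print(fpr_a, fpr_b)  # 0.25 vs 0.5: unequal false positive rates
print(fnr_a, fnr_b)  # 0.25 vs 0.0: unequal false negative rates
```

Deciding which of these gaps matters, and what tradeoff between them is acceptable, is exactly the civic question the excerpt poses; the arithmetic itself is the easy part.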

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page

 


 

Glitches

[11.21.16]

Scholars like Kahneman, Thaler, and folks who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work offers this important window into where these glitches come from. We find that capuchin monkeys have the same glitches we've seen in humans. We've seen in capuchin monkeys the standard classic economic biases that Kahneman and Tversky found in humans, things like loss aversion and reference dependence. They have those biases in spades.

… When folks hear that I'm a psychologist who studies animals, they sometimes get confused. They wonder why I'm not in a biology department or an ecology department. My answer is always, "I'm a cognitive psychologist. Full stop." My original undergrad training was studying mental imagery with Steve Kosslyn and memory with Dan Schacter. I grew up in the information processing age, and my goal was to figure out the flowchart of the mind. I just happen to think that animals are a good way to do that, in part because they let us figure out the kinds of ways that parts of the mind dissociate. I study animals in part because I'm interested in people, but I feel like people are a bad way to study people.

LAURIE R. SANTOS is a professor of psychology at Yale University and the director of its Comparative Cognition Laboratory. Laurie Santos's Edge Bio Page

 


 

The Cost of Cooperating

[11.9.16]

Why is it that we care about other people? Why do we have those feelings? And at a cognitive level, how is that implemented? Another way of asking this is: Are we predisposed to be selfish, getting ourselves to cooperate and work for the greater good only by exerting self-control and rational deliberation to override those selfish impulses? Or are we predisposed towards cooperating, so that in situations where cooperation doesn't actually pay, stopping to think about it, with rationality and deliberation, leads us to be selfish by overriding the impulse to be a good person and help other people?

DAVID RAND is an associate professor of psychology, economics, and management at Yale University, and the director of Yale University’s Human Cooperation Laboratory. David Rand's Edge Bio Page


 

Engines of Evidence

[10.24.16]

A new thinking came about in the early '80s when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.

The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed, and compute for you the revised probabilities warranted by the new evidence.

It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.         
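The "engine for evidence" idea can be sketched with the smallest possible network, a single disease node with one test as evidence, updated by Bayes' rule. The probabilities below are invented for the example; a real Bayesian network chains many such local chunks together:

```python
# Minimal sketch of evidence-driven belief revision: a two-node
# network (Disease -> Test) with made-up local probabilities.

P_DISEASE = 0.01             # prior: P(disease)
P_POS_GIVEN_DISEASE = 0.95   # test sensitivity
P_POS_GIVEN_HEALTHY = 0.05   # test false-positive rate

def revised_belief(test_positive):
    # Combine the local chunks of probabilistic knowledge into the
    # revised probability warranted by the new evidence (Bayes' rule).
    if test_positive:
        joint_d = P_DISEASE * P_POS_GIVEN_DISEASE
        joint_h = (1 - P_DISEASE) * P_POS_GIVEN_HEALTHY
    else:
        joint_d = P_DISEASE * (1 - P_POS_GIVEN_DISEASE)
        joint_h = (1 - P_DISEASE) * (1 - P_POS_GIVEN_HEALTHY)
    return joint_d / (joint_d + joint_h)

print(round(revised_belief(True), 3))   # 0.161
print(round(revised_belief(False), 4))  # 0.0005
```

Note how the system "shuffles things around" in both directions: a positive test raises the 1% prior to about 16%, while a negative test drives it down to about 0.05%.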

JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.

Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized. 

He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference, and a winner of the ACM A.M. Turing Award. Judea Pearl's Edge Bio Page

 


 

Infrastructure As Dialogue

[9.20.16]

One of the things that has been of particular interest to me recently is how you get the connectivity amongst all of these different constituents in a city. We know that we have high-ranking elites, leaders who promote and organize the development of monumental architecture. We also know that we have vast numbers of ordinary immigrants who are coming in to take advantage of all the employment, education, and marketing and entrepreneurial opportunities of urban life. 

Then you have that physical space that becomes the city. What is it that links all of these physical places together? It’s infrastructure. Infrastructure is one of the hottest topics in anthropology right now, in addition to being a hot topic with urban planners. We realize that infrastructure is not just a physical thing; it’s a social thing. You didn’t have infrastructure before cities because you don’t need a superhighway in a village. You don’t need a giant water pipe in a village because everybody just uses a bucket to get their own water. You don’t need to make a road because everyone just walks on whatever pathway they make for themselves. You don’t need a sewer system because everyone just throws their garbage out the door.

MONICA SMITH is a professor of anthropology at the University of California, Los Angeles. She holds the Navin and Pratima Doshi Chair in Indian Studies and serves as the director of the South Asian Archaeology Laboratory in the Cotsen Institute of Archaeology. Monica Smith's Edge Bio Page


 

Quantum Hanky-Panky

[8.22.16]

Thinking about the future of quantum computing, I have no idea if we're going to have a quantum computer in every smart phone, or if we're going to have quantum apps, or "quapps," that would allow us to communicate securely and find funky stuff using our quantum computers; that's a tall order. It's very likely that we're going to have quantum microprocessors in our computers and smart phones that are performing specific tasks.

This is simply for the reason that this is where the actual technology inside our devices is heading anyway. If there are advantages to be had from quantum mechanics, then we'll take advantage of them, just in the same way that energy is moving around in a quantum mechanical kind of way in photosynthesis. If there are advantages to be had from some quantum hanky-panky, then quantum hanky‑panky it is. 

SETH LLOYD is a professor of quantum mechanical engineering at MIT, a principal investigator at the Research Laboratory of Electronics, and the author of Programming the Universe. Seth Lloyd's Edge Bio Page


 
