All Videos

Soul of a Molecular Machine

[5.1.17]

We're at the threshold of a new age of structural biology, where these things that everybody thought were too difficult and would take decades and decades are all cracking. Now we're coming to pieces of the cell. The real advance is that you're going to be able to look at all these machines and large molecular complexes inside the cell. It will tell you the detailed molecular organization of the cell. That's going to be a big leap, to go from molecules to cells and how cells work.

In almost every disease, there's a fundamental process that's causing the disease, either a breakdown of a process, or a hijacking of a process, or a deregulation of a process. Understanding these processes in the cell in molecular terms will give us all kinds of ways to treat disease. It will give us new targets for drugs. It will give us genetic understanding. The impact on medicine is going to be quite profound over the long term.

VENKATRAMAN "VENKI" RAMAKRISHNAN is an Indian-born American and British structural biologist. He shared the 2009 Nobel Prize in Chemistry with Ada Yonath and Tom Steitz and is the current President of the Royal Society. His many scientific contributions include his work on the atomic structure of the ribosome. Venki Ramakrishnan's Edge Bio Page



Urban Evolution

How Species Adapt, or Don't, to City Living
[3.31.17]

We realize evolution can occur very rapidly. Yet, despite this realization, very few people have taken the next logical step to consider what's happening around us, where we live. Think about the animals that live just around you. Look out your window in your backyard. . . . All the animals living around us are facing new environments, coping with new food, new structures, new places to hide, and in many cases new temperatures. These are radically different environments. If, as we now believe, natural selection causes populations to adapt to new conditions, why shouldn't it be happening to the species living around us in these very new conditions?

JONATHAN B. LOSOS is the Monique and Philip Lehner Professor for the Study of Latin America and Professor of Organismic and Evolutionary Biology at Harvard University, and Curator in Herpetology at the Museum of Comparative Zoology. He is the author of Improbable Destinies: Fate, Chance, and the Future of Evolution. Jonathan B. Losos's Edge Bio Page

 



Closing the Loop

[3.7.17]

Closing the loop is a phrase used in robotics. Open-loop systems are when you take an action and you can't measure the results—there's no feedback. Closed-loop systems are when you take an action, you measure the results, and you change your action accordingly. Systems with closed loops have feedback loops; they self-adjust and quickly stabilize in optimal conditions. Systems with open loops overshoot; they miss the target entirely.
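
The difference shows up in a few lines of code. Here is a minimal Python sketch, with an invented setpoint, gain, and step count: the open-loop controller commits to an action computed once and overshoots, while the closed-loop controller measures the error at every step and settles at the target.

    SETPOINT = 100.0

    def open_loop(steps=20):
        """No feedback: apply a fixed, pre-computed action and hope the
        model was right. Any error accumulates, so the system overshoots."""
        value = 0.0
        action = 6.0              # guessed once, never revised
        for _ in range(steps):
            value += action
        return value              # 120.0: misses the setpoint and stays there

    def closed_loop(steps=20, gain=0.5):
        """Feedback: measure the result each step and scale the next action
        by the remaining error, so the system self-adjusts and stabilizes."""
        value = 0.0
        for _ in range(steps):
            error = SETPOINT - value   # the measurement
            value += gain * error      # the corrective action
        return value                   # converges to ~100.0

    print(open_loop())    # 120.0
    print(closed_loop())  # ~100.0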

CHRIS ANDERSON is the CEO of 3D Robotics and founder of DIY Drones. He is the former editor-in-chief of Wired magazine. Chris Anderson's Edge Bio Page

 



The Function of Reason

[2.22.17]

Contrary to the standard view of reason as a capacity that enhances the individual in his or her cognitive capacities—the standard image is of Rodin’s "Thinker," thinking on his own and discovering new ideas—what we say now is that the basic functions of reason are social. They have to do with the fact that we interact with each other’s bodies and with each other’s minds. And to interact with others’ minds is to be able to represent a representation that others have, and to have them represent our representations, and also to act on the representations of others and, in some cases, let others act on our own representations.

The kind of achievements that are often cited as proof that reason is so superior, like scientific achievements, are not achievements of individual minds, not achievements of individual reason; they are collective achievements—typically a product of social interaction over generations. They are social, cultural products, where many minds had to interact in complex ways and progressively explore a lot of directions, hitting on some not because they were more reasonable than others, but because some people were luckier than others in what they hit. And then they used their reason to defend what they hit on by luck. Reason is a remarkable cognitive capacity, as are so many cognitive capacities in humans and animals, but it’s not a superpower.

DAN SPERBER is a Paris-based social and cognitive scientist. He holds an emeritus research professorship at the French Centre National de la Recherche Scientifique (CNRS), Paris, and he is currently at Central European University, Budapest. He is the creator, with Deirdre Wilson, of "Relevance Theory." Dan Sperber's Edge Bio Page



Defining Intelligence

[2.7.17]

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite—it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set one or some small equivalent class of programs does better than all the others; that’s the program that we should aim for.                                 

That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes." 
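
As a rough illustration of the idea (a toy sketch, not Russell's formalism): if the machine's instruction set and maximum program length are both finite, the set of runnable programs is finite, and the bounded optimal program for a given class of environments can in principle be found by exhaustive search. The instruction set, environments, and scoring function below are all invented for the example.

    import itertools

    INSTRUCTIONS = ['+1', '-1', '*2']   # the machine's tiny instruction set
    MAX_LEN = 3                         # finite machine => finite program space,
                                        # and a cap on how much computation fits
    ENVIRONMENTS = [2, 3, 5]            # the class of environments: start states
    TARGET = 10                         # each environment rewards ending near 10

    def run(program, state):
        for op in program:
            if op == '+1':
                state += 1
            elif op == '-1':
                state -= 1
            elif op == '*2':
                state *= 2
        return state

    def score(program):
        # Average closeness to the target over the environment class.
        return -sum(abs(run(program, s) - TARGET)
                    for s in ENVIRONMENTS) / len(ENVIRONMENTS)

    # Enumerate every program the machine can run; one of them is best.
    programs = [p for n in range(1, MAX_LEN + 1)
                for p in itertools.product(INSTRUCTIONS, repeat=n)]
    best = max(programs, key=score)
    print(best, score(best))   # the bounded optimal program for this toy machine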

STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach. Stuart Russell's Edge Bio Page

 



The Mind Bleeds Into the World

[1.24.17]

Coming very soon is going to be augmented reality technology, where you see the physical world, but also virtual objects and entities that you perceive in the middle of them. We’ll put on augmented reality glasses and we’ll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that’s John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.                 

At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?

DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University, and also Distinguished Professor of Philosophy at the Australian National University. David Chalmers's Edge Bio Page



How Should a Society Be?

[12.1.16]

This is another example where AI, in this case machine-learning methods, intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model, is this racially fair? We have to define these terms computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
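
The two candidate criteria in the quote are easy to state precisely. A minimal Python sketch, with made-up outcomes and model decisions: equalizing the false negative rate across two groups, as happens below, does not automatically equalize the false positive rate, which is exactly the kind of tradeoff that has to be argued about.

    def rates(y_true, y_pred):
        """False positive rate and false negative rate for binary labels."""
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        negatives = sum(1 for t in y_true if t == 0)
        positives = sum(1 for t in y_true if t == 1)
        return fp / negatives, fn / positives

    # Hypothetical true outcomes and model decisions, split by a protected group.
    groups = {
        'A': ([0, 0, 1, 1, 0, 1], [0, 1, 1, 0, 0, 1]),
        'B': ([0, 1, 1, 0, 0, 1], [1, 1, 0, 1, 0, 1]),
    }

    for name, (y_true, y_pred) in groups.items():
        fpr, fnr = rates(y_true, y_pred)
        print(f"group {name}: FPR={fpr:.2f}  FNR={fnr:.2f}")
    # group A: FPR=0.33  FNR=0.33
    # group B: FPR=0.67  FNR=0.33  -> equal FNR, unequal FPR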

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page

 



Glitches

[11.21.16]

Scholars like Kahneman, Thaler, and folks who think about the glitches of the human mind have been interested in the kind of animal work that we do, in part because the animal work offers an important window into where these glitches come from. We find that capuchin monkeys have the same glitches we've seen in humans. We've seen the standard classic economic biases that Kahneman and Tversky found in humans in capuchin monkeys, things like loss aversion and reference dependence. They have those biases in spades.

… When folks hear that I'm a psychologist who studies animals, they sometimes get confused. They wonder why I'm not in a biology department or an ecology department. My answer is always, "I'm a cognitive psychologist. Full stop." My original undergrad training was studying mental imagery with Steve Kosslyn and memory with Dan Schacter. I grew up in the information processing age, and my goal was to figure out the flowchart of the mind. I just happen to think that animals are a good way to do that, in part because they let us figure out the kinds of ways that parts of the mind dissociate. I study animals in part because I'm interested in people, but I feel like people are a bad way to study people.
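
For readers new to the terms, loss aversion and reference dependence have a standard formalization in Kahneman and Tversky's prospect theory. Here is a small Python sketch using the commonly cited parameter estimates; these numbers come from the human literature, not from the capuchin studies.

    LAMBDA = 2.25   # losses are weighted roughly 2.25x as heavily as gains
    ALPHA = 0.88    # diminishing sensitivity to larger gains and losses

    def value(outcome, reference=0.0):
        """Subjective value of an outcome relative to a reference point."""
        x = outcome - reference    # reference dependence: only the change matters
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * ((-x) ** ALPHA)

    print(value(100))        # gain of 100:  ~57.5
    print(value(-100))       # loss of 100: ~-129.3, the loss looms larger
    print(value(100, 150))   # 100 against a reference of 150 feels like a loss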

LAURIE R. SANTOS is a professor of psychology at Yale University and the director of its Comparative Cognition Laboratory. Laurie Santos's Edge Bio Page

 



The Cost of Cooperating

[11.9.16]

Why is it that we care about other people? Why do we have those feelings? Also, at a cognitive level, how is that implemented? Another way of asking this is, are we predisposed to be selfish? Do we only get ourselves to be cooperative and work for the greater good by exerting self-control and rational deliberation, overriding those selfish impulses? Or are we predisposed towards cooperating, so that in situations where cooperation doesn't actually pay, stopping to think about it, with rationality and deliberation, leads us to be selfish by overriding the impulse to be a good person and help other people?
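
One standard way to make "situations where it doesn't actually pay" concrete is a one-shot public goods game; the numbers below are invented. Contributions are doubled and shared equally, so the group is best off when everyone cooperates, yet each individual is best off contributing nothing:

    ENDOWMENT = 10     # what each player starts with
    MULTIPLIER = 2.0   # the pot is doubled, then split evenly

    def payoffs(contributions):
        pot = sum(contributions) * MULTIPLIER
        share = pot / len(contributions)
        return [ENDOWMENT - c + share for c in contributions]

    print(payoffs([10, 10, 10, 10]))  # all cooperate: everyone ends with 20
    print(payoffs([0, 10, 10, 10]))   # one defector gets 25; the rest get 15
    # Deliberating about payoffs points to defection; the open question in
    # the quote is whether the intuitive, impulsive response is to cooperate.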

DAVID RAND is an associate professor of psychology, economics, and management at Yale University, and the director of Yale University’s Human Cooperation Laboratory. David Rand's Edge Bio Page



Engines of Evidence

[10.24.16]

New thinking came about in the early '80s, when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.

The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed, and compute for you the revised probabilities warranted by the new evidence.

It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.         
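
The smallest possible version of such an engine already shows the pattern. Below is a Python sketch of a two-node network, Disease -> Symptom, with invented probabilities: the local chunks are stored once, and each new piece of evidence just triggers a recomputation of the belief by Bayes' rule.

    p_disease = 0.01              # prior: P(D)
    p_sym_given_d = 0.90          # local chunk: P(S | D)
    p_sym_given_not_d = 0.05      # local chunk: P(S | not D)

    def revised_belief(symptom_observed):
        """P(Disease | evidence), recomputed from the stored local chunks."""
        if symptom_observed:
            with_d = p_sym_given_d * p_disease
            without_d = p_sym_given_not_d * (1 - p_disease)
        else:
            with_d = (1 - p_sym_given_d) * p_disease
            without_d = (1 - p_sym_given_not_d) * (1 - p_disease)
        return with_d / (with_d + without_d)   # normalize over both explanations

    print(revised_belief(True))    # ~0.154: the symptom raises the 1% prior
    print(revised_belief(False))   # ~0.001: its absence lowers the prior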

JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.

Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized. 

He is the author of Heuristics, Probabilistic Reasoning in Intelligent Systems, and Causality: Models, Reasoning, and Inference, and a winner of the Turing Award. Judea Pearl's Edge Bio Page

 


