Videos in: 2019

Humans: Doing More With Less

[7.16.19]

Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans surrounding it—whom it thinks of as cute little pets—are trying to achieve, so that it can act in a way that is consistent with what those human beings might want. That system needs to be able to simulate what an agent with greater constraints on its cognitive resources should be doing, and it should be able to make inferences like the fact that we’re not able to calculate the zeros of the Riemann zeta function or discover a cure for cancer. That doesn’t mean we’re not interested in those things; it’s just a consequence of the cognitive limitations that we have.

As a parent of two small children, this is a problem that I face all the time, which is trying to figure out what my kids want, kids who are operating in an entirely different mode of computation, and having to build a kind of internal model of how a toddler’s mind works such that it’s possible to unravel that and work out that there’s a particular motivation for the very strange pattern of actions that they’re taking.

Both from the perspective of understanding human cognition and from the perspective of being able to build AI systems that can understand human cognition, it’s desirable for us to have a better model of how rational agents should act if those rational agents have limited cognitive resources. That’s something that I’ve been working on for the last few years. We have an approach to thinking about this that we call resource rationality. And this is closely related to similar ideas that are being proposed in the artificial intelligence literature. One of these ideas is the notion of bounded optimality, proposed by Stuart Russell.
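
To make the idea concrete, here is a minimal sketch of resource-rational metareasoning—an illustration only, not code from Griffiths's papers. An agent deliberating between two options pays a cost for each round of mental simulation and stops thinking once another round no longer seems worth that cost; the payoff values and the crude value-of-computation rule are hypothetical.

```python
import random

# Illustrative sketch of resource-rational metareasoning (not code from
# Griffiths's papers). An agent deliberating between two options pays a
# cost for every simulated sample and stops thinking once another round
# of deliberation no longer seems worth that cost. All values below are
# hypothetical.

TRUE_MEANS = {"a": 1.0, "b": 1.2}   # hidden; the agent only samples

def simulate(option):
    """One noisy mental simulation of an option's payoff."""
    return random.gauss(TRUE_MEANS[option], 1.0)

def deliberate(cost_per_round=0.01, max_rounds=1000):
    totals = {"a": 0.0, "b": 0.0}
    counts = {"a": 0, "b": 0}
    prev_gap = 0.0
    for n in range(1, max_rounds + 1):
        for opt in totals:
            totals[opt] += simulate(opt)
            counts[opt] += 1
        means = {o: totals[o] / counts[o] for o in totals}
        gap = abs(means["a"] - means["b"])
        # Crude value-of-computation proxy: if the last round barely
        # changed our belief about which option is better, further
        # thought is not worth its cost.
        if n > 1 and abs(gap - prev_gap) < cost_per_round:
            break
        prev_gap = gap
    return max(means, key=means.get), n

choice, rounds = deliberate()
print(f"chose {choice!r} after {rounds} rounds of deliberation")
```

A richer agent would also choose *which* computation to run next, not just whether to keep thinking, but the stopping rule captures the core trade-off between decision quality and cognitive cost.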

TOM GRIFFITHS is Henry R. Luce Professor of Information Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio Page



A Separate Kind of Intelligence

[7.10.19]

Back in 1950, Turing argued that for a genuine AI we might do better by simulating a child’s mind than an adult’s. This insight has particular resonance given recent work on "life history" theory in evolutionary biology—the developmental trajectory of a species, particularly the length of its childhood, is highly correlated with adult intelligence and flexibility across a wide range of species. This trajectory is also reflected in brain development, with its distinctive transition from early proliferation to later pruning. I’ve argued that this developmental pattern reflects a distinctive evolutionary way of resolving explore-exploit tensions that bedevil artificial intelligence. Childhood allows for a protected period of broad, high-temperature search through the space of solutions and hypotheses, before the requirements of focused, goal-directed planning set in. This distinctive exploratory childhood intelligence, with its characteristic playfulness, imagination and variability, may be the key to the human ability to innovate creatively yet intelligently, an ability that is still far beyond the purview of AI. More generally, a genuine understanding of intelligence requires a developmental perspective.
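
Gopnik's explore-exploit framing maps naturally onto a standard device from reinforcement learning: a softmax choice rule whose temperature anneals over time, so early behavior is broad and variable and later behavior is focused. The sketch below is an illustration under that assumption, not a model from Gopnik's work; the arm payoffs and annealing schedule are hypothetical.

```python
import math
import random

# Illustrative sketch, not a model from Gopnik's work: a softmax bandit
# whose exploration temperature anneals over a "lifetime," so early
# choices are broad and playful (childhood) and later choices exploit
# what was learned (adulthood). Payoffs and schedule are hypothetical.

TRUE_PAYOFFS = [0.2, 0.5, 0.9]   # hidden reward probability of each arm

def softmax_pick(values, temperature):
    """Sample an arm with probability proportional to exp(value / T)."""
    weights = [math.exp(v / temperature) for v in values]
    r = random.uniform(0, sum(weights))
    for arm, w in enumerate(weights):
        r -= w
        if r <= 0:
            return arm
    return len(weights) - 1

def lifetime(steps=500, t_start=5.0, t_end=0.05):
    estimates = [0.0] * len(TRUE_PAYOFFS)
    counts = [0] * len(TRUE_PAYOFFS)
    for step in range(steps):
        # Geometric annealing from hot (explore) to cold (exploit).
        frac = step / (steps - 1)
        temperature = t_start * (t_end / t_start) ** frac
        arm = softmax_pick(estimates, temperature)
        reward = 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = lifetime()
print("estimated payoffs:", [round(e, 2) for e in estimates])
print("pulls per arm:    ", counts)
```

The hot early phase wastes pulls on bad arms but builds accurate estimates of all of them; the cold late phase cashes in on that knowledge—a rough computational analogue of a long, protected childhood.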

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. Alison Gopnik's Edge Bio Page



Collaboration and the Evolution of Disciplines

[7.1.19]

Cooperation achieves its beneficial effects by improving communication, promoting gains from specialization, enhancing organizational effectiveness, and reducing the risks of harmful conflict. Members of an institutionalized academic discipline jointly benefit in all these ways. Unfortunately, members of different disciplines typically do not. The boundaries of most disciplines were largely set 100 (plus or minus 50) years ago, and efforts to redraw them (e.g., at Irvine and Carnegie Mellon) have met with little success. I would like us to consider how the more or less fragmented research community can best respond to new opportunities (AI), new problems (climate change), new modes of education and governance, and new understandings of human behavior and values.

ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Complexity of Cooperation and The Evolution of Cooperation. Robert Axelrod's Edge Bio Page


 

Questioning the Cranial Paradigm

[6.19.19]

Part of the definition of intelligence is always this representation model. . . . I’m pushing this idea of distribution—homeostatic surfing on worldly engagements that the body is always not only a part of but enabled by and symbiotic on. Also, the idea of adaptation as not necessarily defined by the consciousness that we like to fetishize. Are there other forms of consciousness? Here’s where the gut-brain axis comes in. Are there forms that we describe as visceral gut feelings that are a form of human consciousness that we’re getting through this immune brain?

CAROLINE A. JONES is a professor of art history in the Department of Architecture at MIT and author, most recently, of The Global Work of Art. Caroline Jones's Edge Bio Page



The Brain Is Full of Maps

[6.11.19]

I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old-fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translation of pictures into digital form.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page

 



Perception As Controlled Hallucination

Predictive Processing and the Nature of Conscious Experience
[6.6.19]

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 
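
One common formal gloss on this picture is a precision-weighted prediction-error update, in which the percept is a running expectation and sensory input merely nudges it. The sketch below is illustrative, not Clark's own model; the hidden signal, noise level, and trust parameter are all hypothetical. Setting the sensory trust near zero shows the expectation doing all the heavy lifting—the limiting case where controlled hallucination shades into the uncontrolled kind.

```python
import random

# Illustrative sketch, not Clark's own model: perception as a running
# expectation corrected by precision-weighted prediction errors. The
# hidden signal, noise level, and trust parameter are hypothetical.

HIDDEN_SIGNAL = 3.0    # true state of the world
SENSORY_NOISE = 1.5    # standard deviation of noisy observations

def perceive(steps=50, prior=0.0, sensory_trust=0.2):
    """sensory_trust near 1: senses dominate; near 0: the prior does."""
    expectation = prior
    for _ in range(steps):
        observation = random.gauss(HIDDEN_SIGNAL, SENSORY_NOISE)
        prediction_error = observation - expectation  # sensory feedback
        expectation += sensory_trust * prediction_error
    return expectation

print("percept with trusted senses:   ", round(perceive(sensory_trust=0.5), 2))
print("percept dominated by the prior:", round(perceive(sensory_trust=0.02), 2))
```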

ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page


 

Mining the Computational Universe

[5.30.19]

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page



The Cul-de-Sac of the Computational Metaphor

[5.13.19]

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page


 

How to Create an Institution That Lasts 10,000 Years

[4.24.19]

We’re also looking at the oldest living companies in the world, most of which are service-based. There are some family-run hotels and things like that, but also a huge number in the food and beverage industry. Probably a third of the organizations or companies over 500 or 1,000 years old are in some way involved in wine, beer, or sake production. I was intrigued by that crossover.

What’s interesting is that humanity figured out how to ferment things about 10,000 years ago, which is exactly the time frame when people started creating cities and agriculture. It’s unclear if civilization started because we could ferment things, or we started fermenting things and therefore civilization started, but there’s clearly this intertwined link with fermenting beer, wine, and then much later spirits, and how that fits in with hospitality and places that people gather.

All of these things are right now just nascent bits and pieces of trying to figure out some of the ways in which organizations live for a very long time. While some of them, like being a family-run hotel, may not be very portable as ideas, with others, like some of the natural strategies, we're just starting to understand how they can be of service to humanity. If we broaden the idea of the service industry so that our customer is civilization, how can you make an institution whose customer is civilization and that can last for a very long time?

ALEXANDER ROSE is the executive director of The Long Now Foundation, manager of the 10,000 Year Clock Project, and curator of the speaking series at The Interval and The Battery SF. Alexander Rose's Edge Bio Page


 

Machines Like Me

[4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge. A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), and of the National Book Critics Circle Fiction Award and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page

