All Videos

The Brain Is Full of Maps

[6.11.19]

 I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page

 


 

Perception As Controlled Hallucination

Predictive Processing and the Nature of Conscious Experience
[6.6.19]

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 

ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page


 

Mining the Computational Universe

[5.30.19]

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page


 

The Cul-de-Sac of the Computational Metaphor

[5.13.19]

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page


 

How to Create an Institution That Lasts 10,000 Years

[4.24.19]

We’re also looking at the oldest living companies in the world, most of which are service-based. There are some family-run hotels and things like that, but also a huge amount in the food and beverage industry. Probably a third of the organizations or the companies over 500 or 1,000 years old are all in some way in wine, beer, or sake production. I was intrigued by that crossover.

What’s interesting is that humanity figured out how to ferment things about 10,000 years ago, which is exactly the time frame where people started creating cities and agriculture. It’s unclear if civilization started because we could ferment things, or we started fermenting things and therefore civilization started, but there’s clearly this intertwined link with fermenting beer, wine, and then much later spirits, and how that fits in with hospitality and places that people gather.

All of these things are right now just nascent bits and pieces of trying to figure out some of the ways in which organizations live for a very long time. While some of them, like being a family-run hotel, may not be very portable as an idea, some of them, like some of the natural strategies, we're just starting to understand how they can be of service to humanity. If we broaden the idea of service industry to our customer civilization, how can you make an institution whose customer is civilization and can last for a very long time?

ALEXANDER ROSE is the executive director of The Long Now Foundation, manager of the 10,000 Year Clock Project, and curator of the speaking series at The Interval and The Battery SF. Alexander Rose's Edge Bio Page


 

Machines Like Me

[4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge. A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page


 

Is Superintelligence Impossible?

On Possible Minds: Philosophy and AI
[4.10.19]

[ED. NOTE: On Saturday, March 9th, more than 1,200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett, who ask each other, "Is Superintelligence Impossible?" As part of Edge's ongoing "Possible Minds Project," we are pleased to present the video, audio, and transcript of the event, which was orchestrated by the noted physicist, artist, author (and fellow Edgie) Janna Levin, Director of Sciences at Pioneer Works, with the support of Science Sandbox, a Simons Foundation initiative dedicated to engaging everyone with the process of science. —JB]

Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett

One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be, starting with the actual minds. There are a lot of actual minds. I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes, a lot of amazing minds. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers


David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the “hard problem” of consciousness. Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds. John Brockman, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture, and the editor of the Edge Annual Question book series and Possible Minds: 25 Ways of Looking at AI.


 

Cultural Intelligence

[3.12.19]

Getting back to culture being invisible and omnipresent, we think about intelligence or emotional intelligence, but we rarely think about cultivating cultural intelligence. In this ever-increasing global world, we need to understand culture. All of this research has been trying to elucidate not just how we understand other people who are different from us, but how we understand ourselves.

MICHELE GELFAND is a Distinguished University Professor at the University of Maryland, College Park. She is the author of Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World. Michele Gelfand's Edge Bio Page

 


 

Alzheimer's Prevention

[2.11.19]

Right now, we don’t have therapies that regrow neurons. Alzheimer’s is a disease that kills your neurons over time, so once they’re gone they’re pretty much gone. There are things that one can do pharmaceutically to ameliorate the symptoms. For example, there are FDA-approved drugs such as acetylcholinesterase inhibitors or memantine, which do lessen or stabilize symptoms for a few years, but they can’t stop disease progression. What we’re interested in is disease modification, stopping it before it’s too severe or too advanced.

At the Alzheimer’s Prevention Clinic, we try to tell people what to do in a preventative way. There are a lot of other people and clinicians that are actively engaging in prevention as well. It’s new in my field, especially in the field of neurology. Until four years ago nobody would dare use the word “prevention” out loud because so many doctors and clinicians would just label you as a quack right away and you would lose credibility overnight. I find scientists are much more open to this now.

LISA MOSCONI is the director of the Women's Brain Initiative and the associate director of the Alzheimer's Prevention Clinic at Weill Cornell Medical College. She is the author of Brain Food: The Surprising Science of Eating for Cognitive Power. Lisa Mosconi's Edge Bio Page

 


 

The Future of the Mind

How AI Technology Could Reshape the Human Mind and Create Alternate Synthetic Minds
[1.28.19]

I see many misunderstandings in current discussions about the nature of mind, such as the assumption that if we create sophisticated AI, it will inevitably be conscious. There is also this idea that we should “merge with AI”—that in order for humans to keep up with developments in AI and not succumb to hostile superintelligent AIs or AI-based technological unemployment, we need to enhance our own brains with AI technology.

One thing that worries me about all this is that I don't think AI companies should be settling issues involving the shape of the mind. The future of the mind should be a cultural decision and an individual decision. Many of the issues at stake here involve classic philosophical problems that have no easy solutions. I’m thinking, for example, of theories of the nature of the person in the field of metaphysics. Suppose that you add a microchip to enhance your working memory, and then years later you add another microchip to integrate yourself with the Internet, and you just keep adding enhancement after enhancement. At what point will you even be you? When you think about enhancing the brain, the idea is to improve your life—to make you smarter, or happier, maybe even to live longer, or have a sharper brain as you grow older—but what if those enhancements change us in such drastic ways that we’re no longer the same person?

SUSAN SCHNEIDER holds the Distinguished Scholar chair at the Library of Congress and is the director of the AI, Mind and Society (“AIMS”) Group at the University of Connecticut. Susan Schneider's Edge Bio Page


 
