Videos in: 2017

"A Difference That Makes a Difference"

Daniel C. Dennett
[11.22.17]

Having turned my back on propositions, I thought, what am I going to do about this? The area where it really comes up is when you start looking at the contents of consciousness, which is my number one topic. I like to quote Maynard Keynes on this. He was once asked, “Do you think in words or pictures?” to which he responded, “I think in thoughts.” It was a wonderful answer, but also wonderfully uninformative. What the hell’s a thought then? How does it carry information? Is it like a picture? Is it iconic in some way? Does it resemble what it’s about, or is it like a word that refers to what it’s about without resembling it? Are there third, fourth, fifth alternatives? Looking at information in the brain and then trying to trace it back to information in the genes that must be responsible for providing the design of the brain that can then carry information in other senses, you gradually begin to realize that this does tie in with Shannon-Weaver information theory. There’s a way of seeing information as "a difference that makes a difference," to quote Donald MacKay and Gregory Bateson.
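One way to cash out "a difference that makes a difference" in Shannon's terms: a signal carries information only insofar as it reduces the receiver's uncertainty. A minimal sketch, with distributions invented purely for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average surprise of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin holds 1 bit of uncertainty.
prior = [0.5, 0.5]
# After a (hypothetical) noisy observation, the receiver's belief sharpens:
posterior = [0.9, 0.1]

# Information gained = uncertainty removed, measured in bits.
info_gained = entropy(prior) - entropy(posterior)
```

On these made-up numbers the observation conveys about half a bit; a difference in the signal that left the receiver's beliefs unchanged would, on this account, carry no information at all.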

Ever since then, I’ve been trying to articulate, with the help of Harvard evolutionary biologist David Haig, just what meaning is, what content is, and ultimately, in terms of biological information and physical information, the information of Shannon and Weaver. There’s a chapter in my latest book called “What is Information?” I stand by it, but it’s under revision. I’m already moving beyond it and realizing there’s a better way of tackling some of these issues.

DANIEL C. DENNETT is the Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University. He is the author, most recently, of From Bacteria to Bach and Back: The Evolution of Minds. Daniel C. Dennett's Edge Bio page

 



The Human Strategy

Alex "Sandy" Pentland
[10.30.17]

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?

That begins to sound like a society or a company. We all live in a human social network. We're reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what's the right way to do that? Is it a safe idea? Is it completely crazy?
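Pentland's "credit assignment function" can be sketched as a simple weight update: units (neurons, or people) that contributed to a good outcome are reinforced, and those that hurt it are discouraged. A toy illustration, with all names and numbers invented:

```python
def reinforce(weights, contributions, reward, lr=0.1):
    """Credit assignment: nudge each unit's weight in proportion to
    how much it contributed to the rewarded outcome."""
    return [w + lr * reward * c for w, c in zip(weights, contributions)]

# Three "units" -- they could just as well be people in a network.
weights = [0.5, 0.5, 0.5]
contributions = [1.0, 0.2, -0.5]  # unit 0 helped a lot; unit 2 hurt
weights = reinforce(weights, contributions, reward=1.0)
# Helpful units end up with more weight; unhelpful ones with less.
```

The open question in the passage is what this update should look like when the units are people: what counts as the reward signal, and who gets to define it.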

ALEX "SANDY" PENTLAND is a professor at MIT, and director of the MIT Connection Science and Human Dynamics labs. He is a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General. He is the author of Social Physics and Honest Signals. Sandy Pentland's Edge Bio page



Shut Up and Measure

Brian G. Keating
[10.20.17]

What is fascinating to me is that we are now hoping, with modern measurements, to probe the early Universe. In doing so, we’re encountering deep questions about the scientific method and questions about what is fundamental to physics. When we look out on the Universe, we’re looking through this dirty window, literally a dusty window. We look out through dust in our galaxy. And what is that dust? I like to call it nano planets, tiny grains of iron and carbon and silicon—all these things that are the matter of our solar system. They’re the very matter that Galileo was looking through when he glimpsed the Pleiades and the stars beyond the solar system for the first time.

When we look out our telescopes, we never see just what we're looking for. We have to contend with everything in the foreground. And thank goodness for that dust in the foreground, for without it, we would not be here.

BRIAN KEATING is a professor of physics at the Center for Astrophysics & Space Sciences at the University of California, San Diego. Brian Keating's Edge Bio page



Reality is an Activity of the Most August Imagination

Tim O'Reilly
[10.2.17]

Wallace Stevens had an immense insight into the way that we write the world. We don't just read it, we don't just see it, we don't just take it in. In "An Ordinary Evening in New Haven," he talks about the dialogue between what he calls the Naked Alpha and the hierophant Omega, the beginning, the raw stuff of reality, and what we make of it. He also said reality is an activity of the most august imagination.

Our job is to imagine a better future, because if we can imagine it, we can create it. But it starts with that imagination. The future that we can imagine shouldn't be a dystopian vision of robots that are wiping us out, of climate change that is going to destroy our society. It should be a vision of how we will rise to the challenges we face in the next century, how we will build an enduring civilization, and how we will build a world that is better for our children and grandchildren and great-grandchildren; that we will become one of those long-lasting species rather than a flash in the pan that wipes itself out because of its lack of foresight.

We are at a critical moment in human history. In the small, we are at a critical moment in our economy, where we have to make it work better for everyone, not just for a select few. But in the large, we have to make it better in the way that we deal with long-term challenges and long-term problems.

TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., and the author of WTF?: What’s the Future and Why It’s Up to Us. Tim O'Reilly's Edge Bio page


 

Aerodynamics For Cognition

Tom Griffiths
[8.21.17]

It's very clear that in order to make progress in understanding some of the most challenging and important things about intelligence, studying the best example we have of an intelligent system is a way to do that. Often, people who argue against that make the analogy that if we were trying to understand how to build jet airplanes, then starting with birds is not necessarily a good way to do that.                                 

That analogy is pretty telling. The thing that's critical to both making jet airplanes work and making birds fly is the structure of the underlying problem that they're solving. That problem is keeping an object airborne, and the structure of that problem is constrained by aerodynamics. By studying how birds fly and the structure of their wings, you can learn something important about aerodynamics. And what you learn about aerodynamics is equally relevant to then being able to make jet airplanes.

The kind of work that I do is focused on trying to identify the equivalent of aerodynamics for cognition. What are the real abstract mathematical principles that constrain intelligence? What can we learn about those principles by studying human beings? 

TOM GRIFFITHS is a professor of psychology and cognitive science and director of the Computational Cognitive Science Lab and the Institute of Cognitive and Brain Sciences at the University of California, Berkeley. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio page



Learning By Thinking

Tania Lombrozo
[7.28.17]

Sometimes you think you understand something, and when you try to explain it to somebody else, you realize that maybe you gained some new insight that you didn't have before. Maybe you realize you didn't understand it as well as you thought you did. What I think is interesting about this process is that it’s a process of learning by thinking. When you're explaining to yourself or to somebody else without them providing feedback, insofar as you gain new insight or understanding, it isn't driven by that new information that they've provided. In some way, you've rearranged what was already in your head in order to get new insight.

The process of trying to explain to yourself is a lot like a thought experiment in science. For the most part, the way that science progresses is by going out, conducting experiments, getting new empirical data, and so on. But occasionally in the history of science, there've been these important episodes—Galileo, Einstein, and so on—where somebody will get some genuinely new insight from engaging in a thought experiment. 

TANIA LOMBROZO is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. She is a contributor to Psychology Today and the NPR blog 13.7: Cosmos & Culture. Tania Lombrozo's Edge Bio page

 



Things to Hang on Your Mental Mug Tree

Rory Sutherland
[7.10.17]

I don't think there's any huge amount of intelligence required to look at the world through different lenses. The difficulty is that you have to abandon four or five assumptions about the world simultaneously. That's probably what makes it difficult.

RORY SUTHERLAND is Executive Creative Director and Vice-Chairman, OgilvyOne London; Vice-Chairman, Ogilvy & Mather UK; and Columnist, The Spectator. Rory Sutherland's Edge Bio page



Compassionate Systems

Daniel Goleman
[6.22.17]

One way a systems perspective could help with the environmental crisis is through understanding that we have a very narrow range of affordances, the choices presented to us. For example, I have this jacket, you have this table or the chair I’m sitting on, and they are manufactured with industrial platforms that have more or less been the same for a century. Yet in the last ten or fifteen years we’ve seen the emergence of industrial ecology, a science that offers a metric for understanding the impacts of the life cycle of any of these objects from beginning to end in terms of how they impact the global systems that support life on our planet – the carbon cycle being the best-known. Now that we have that data and a metric for it, we can better manage the processes that are entailed in the use and manufacture of every object we own. We have a metric for reinventing everything in the material world to be supportive of those life-support systems.
 
DANIEL GOLEMAN is the New York Times bestselling author of Emotional Intelligence. A psychologist and science journalist, he reported on brain and behavioral research for The New York Times for many years. He is the author of more than a dozen books, including three accounts of meetings he has moderated between the Dalai Lama and scientists, psychotherapists, and social activists. Daniel Goleman's Edge Bio Page


Curtains For Us All?

Martin Rees
[5.31.17]

Here on Earth, I suspect that we are going to want to regulate the application of genetic modification and cyborg techniques on grounds of ethics and prudence. This links with another topic I want to come to later, which is the risks of new technology. If we imagine these people living as pioneers on Mars, they are out of range of any terrestrial regulation. Moreover, they've got a far higher incentive to modify themselves or their descendants to adapt to this very alien and hostile environment.                                 

They will use all the techniques of genetic modification, cyborg techniques, maybe even linking or downloading themselves into machines, which, fifty years from now, will be far more powerful than they are today. The post-human era is probably not going to start here on Earth; it will be spearheaded by these communities on Mars. That's the vision I would have of Mars. It's people out there who will perhaps lead to these developments, which will then eventually lead to posthumans, maybe electronic rather than organic, spreading far beyond our solar system. If that's happened elsewhere, that's the sort of thing we might detect. 

LORD MARTIN REES is a Fellow of Trinity College and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He is the UK's Astronomer Royal and a Past President of the Royal Society. Martin Rees's Edge Bio Page

 



The Threat

Ross Anderson
[5.8.17]

Although a security failure may be due to someone using the wrong type of access control mechanism or weak cypher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms, it’s simple and straightforward, but it’s often much more complicated when we start looking at how things actually fail in real life.
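The Alice-and-Bob point can be made concrete with a toy expected-cost model (functional forms and numbers are invented for illustration): if Alice pays for guarding but Bob bears the breach losses, the effort level that is privately optimal for Alice falls well short of the socially optimal one.

```python
def alice_cost(effort, effort_price=1.0):
    """Alice pays only for her own guarding effort."""
    return effort_price * effort

def breach_probability(effort):
    """More guarding effort, fewer breaches (illustrative form)."""
    return 1.0 / (1.0 + effort)

def social_cost(effort, breach_loss=10.0):
    """Total cost: Alice's effort plus Bob's expected breach loss."""
    return alice_cost(effort) + breach_probability(effort) * breach_loss

efforts = [e / 10 for e in range(51)]  # candidate effort levels 0.0 .. 5.0
alice_best = min(efforts, key=alice_cost)    # Alice's choice: zero effort
social_best = min(efforts, key=social_cost)  # the social optimum is higher
```

Since Alice's own cost only rises with effort, she guards as little as possible, and the gap between `alice_best` and `social_best` is the misaligned incentive the passage describes.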

ROSS ANDERSON is a professor of security engineering at Cambridge University, and one of the founders of the field of information security economics. He chairs the Foundation for Information Policy Research, and is a fellow of the Royal Society and the Royal Academy of Engineering. Ross Anderson's Edge Bio Page


