Videos in: 2019

The Paradox of Self-Consciousness

[11.11.19]

I have been trying, under the banner of "new realism," to reconcile various philosophical and scientific traditions. I'm looking for a third way between various tensions. There's more to a human being than the fact that we are a bunch of cells that hang together in a certain way. Humans are not identical to any material energetic system, even though I also think that humans cannot exist without being, in part, grounded in a material energetic system. So, I am rejecting both brutal materialism, according to which we are nothing but an arrangement of cells, and brutal idealism, according to which our minds are transcendent affairs that peep into the universe in one way or another. Both are false, so there has to be a third way.

Similarly, between postmodernism, which denies objectivity, and various trends in cognitive science, which also threaten objectivity without fully undermining it, there has to be something in between. Similarly for continental philosophy—European traditions, broadly construed—and analytic philosophy, philosophy at its best as practiced in the Anglophone context; there has to be something in between. That space in between is what I call new realism.

MARKUS GABRIEL holds the Chair for Epistemology, Modern and Contemporary Philosophy at the University of Bonn, where he is also Director of the International Center for Philosophy. Markus Gabriel's Edge Bio Page

 



Communal Intelligence

[10.28.19]

We haven't talked about the socialization of intelligence very much. We talked a lot about intelligence as an individual human thing, yet the thing that distinguishes humans from other animals is our possession of human language, which allows us both to think and to communicate in ways that other animals don't appear to be able to. This gives us a cooperative power as a global organism, which is causing lots of trouble. If I were another species, I'd be pretty damn pissed off right now. What makes human beings effective is not their individual intelligences, though there are many very intelligent people in this room, but their communal intelligence.

SETH LLOYD is a theoretical physicist at MIT; Nam P. Suh Professor in the Department of Mechanical Engineering; external professor at the Santa Fe Institute; and author of Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Seth Lloyd's Edge Bio Page



The Nature of Moral Motivation

[10.16.19]


Rethinking Our Vision of Success

[10.10.19]

How do we come to understand that our 100,000-fold excess of numbers on this planet, plus what we do to feed ourselves, makes us a tumor on the body of the planet? I don't want the future that involves some end to us, which is a kind of surgery of the planet. That's not anybody's wish. How do we revert ourselves to normal while we can? How do we re-enter the world of natural selection, not by punishing each other, but by volunteering to redefine success as the survival of the future, not as success in stuff now? How do we do that? We don't have a language for that.

ROBERT POLLACK is a professor of biological sciences, and also serves as director of the University Seminars at Columbia University. He is the author of The Course of Nature, a book of drawings by the artist Amy Pollack, accompanied by his short explanatory essays. Robert Pollack's Edge Bio Page



A Post-Galilean Paradigm

[9.24.19]

We're now going through a phase of history where people are so blown away by the success of physical science and the wonderful technology it's produced that they've forgotten its philosophical underpinnings. They've forgotten its inherent limitations. If we want a science of consciousness, we need to move beyond Galileo. We need to move to what I call a post-Galilean paradigm. We need to rethink what science is. That doesn't mean we stop doing physical science or do physical science differently—I'm not here to tell physical scientists how to do their jobs. It does, however, mean that physical science is not the full story. We need a more expansive conception of the scientific method, a worldview that can accommodate both the quantitative data of physical science and the qualitative reality of consciousness. That's essentially the problem.

Fortunately, there is a way forward, a framework that could allow us to make progress on this. It's inspired by certain writings from the 1920s by the philosopher Bertrand Russell and the scientist Arthur Eddington, who was, incidentally, the first scientist to confirm general relativity after the First World War. I'm inclined to think that these two did in the 1920s for the science of consciousness what Darwin did in the 19th century for the science of life. It's a tragedy that this was completely forgotten for a long time, for various historical reasons we could talk about. But it has been rediscovered in the last five or ten years in academic philosophy, and it's causing a lot of excitement and interest.

PHILIP GOFF is a philosopher and consciousness researcher at Durham University, UK, and author of Galileo's Error: Foundations for a New Science of Consciousness (forthcoming, 2019). Philip Goff's Edge Bio Page



The Universe Is Not in a Box

[9.11.19]

One of the great books in science was published in 1824 by a young Frenchman called Sadi Carnot. This is one of the most wonderful books, the title of which is Reflections on the Motive Power of Fire. In about six pages, he explains how you would make a steam engine that would work with the absolute maximum efficiency possible. It was almost entirely ignored, and he died before anything much could come out of it. It was rediscovered in 1849 when William Thomson, who later became Lord Kelvin, wrote a paper which publicized this work. Within a couple of years, thermodynamics had been created as a science.

It caused a tremendous amount of excitement from the 1850s onwards. The key thing about this work of Carnot's is that if you have a steam engine, the steam has to remain in a cylinder, in a box. You want the steam engine to work continuously, so you keep having to bring the steam and the cylinder back to the condition they were in before. It's remarkable that the development of what's called statistical mechanics—understanding how steam behaves—led to the discovery of entropy, one of the great discoveries in the history of science, and it all followed from this work of Carnot on how steam engines work. Moreover, it was very anthropocentric thinking, about how human beings could exploit coal to drive steam engines and do work for them. At that stage, nobody was thinking about the universe as a whole; they were just thinking about how they could make steam engines work better.

This way of thinking, I believe, has survived more or less unchanged to this day. People who work on the problem of the arrow of time are still assuming conditions that are appropriate for a steam engine. But in the 1920s and early 1930s, Hubble showed that the universe is expanding, that we live in an expanding universe. Is that going to be well modeled by steam in a box? My belief is that people haven't realized that we have to think out of the box. We have to think in different ways. We keep finding ways in which the mathematics that was developed to understand systems confined in a box has to be modified, with quite surprising consequences—above all, possibly explaining why we have this incredibly powerful sense of the passage of time, why the past is so different from the future.

JULIAN BARBOUR is a theoretical physicist specializing in the study of time and motion; visiting professor of physics at the University of Oxford; and author of The Janus Point (forthcoming) and The End of Time. Julian Barbour's Edge Bio Page



Emergences

[9.4.19]

My perspective is closest to George Dyson's. I liked his introducing himself as being interested in intelligence in the wild. I will copy George in that. That is what I’m interested in, too, but it’s with a perspective that makes it all in the wild. My interest in AI comes from a broader interest in a much more interesting question to which I have no answers (and can barely articulate the question): How do lots of simple things interacting emerge into something more complicated? Then how does that create the next system out of which that happens, and so on?

Consider the phenomenon, for instance, of chemicals organizing themselves into life, or single-cell organisms organizing themselves into multi-cellular organisms, or individual people organizing themselves into a society with language and things like that—I suspect that there’s more of that organization to happen. The AI that I’m interested in is a higher level of that and, like George, I suspect that not only will it happen, but it probably already is happening, and we’re going to have a lot of trouble perceiving it as it happens. We have trouble perceiving it because of this notion, which Ian McEwan so beautifully described, of the Golem being such a compelling idea that we get distracted by it, and we imagine it to be like that. That blinds us to being able to see it as it really is emerging. Not that I think such things are impossible, but I don’t think those are going to be the first to emerge.

There's a pattern in all of those emergences: they start out as analog systems of interaction. Chemicals, for instance, form chains of circular pathways that metabolize material from the outside world. What always happens going up to the next level is that those analog systems invent a digital system, like DNA, where they start to abstract out the information processing. They put the information processing in a separate system of its own, and from then on, the interesting story becomes the story of the information processing; the complexity happens more in the information processing system. That certainly happens again with multi-cellular organisms. The information processing system is neurons, and organisms eventually go from being just a bunch of cells to having this special information processing system, and that's where the action is, in brains and behavior. It drags the bodies along, making them much more complicated and much more interesting once you have behavior.

W. DANIEL HILLIS is an inventor, entrepreneur, and computer scientist, Judge Widney Professor of Engineering and Medicine at USC, and author of The Pattern on the Stone: The Simple Ideas That Make Computers Work. W. Daniel Hillis's Edge Bio Page



Epistemic Virtues

[8.21.19]

I’m interested in the question of epistemic virtues, their diversity, and the epistemic fears that they’re designed to address. By epistemic I mean having to do with how we gain and secure knowledge. What I’d like to do here is talk about what we might be afraid of, where our knowledge might go astray, and what aspects of those fears can be addressed by particular strategies, and then to see how that’s changed quite radically over time.

~~

James Clerk Maxwell, just by way of background, had done these very mechanical representations of electromagnetism—gears and ball bearings, and strings and rubber bands. He loved doing that. He’s also the author of the most abstract treatise on electricity and magnetism, which used the least action principle and doesn’t go by the pictorial, sensorial path at all. In a very short essay, he wrote, "Some people gain their understanding of the world by symbols and mathematics. Others gain their understanding by pure geometry and space. There are some others that find an acceleration in the muscular effort that is brought to them in understanding, in feeling the force of objects moving through the world. What they want are words of power that stir their souls like the memory of childhood. For the sake of persons of these different types, whether they want the paleness and tenuity of mathematical symbolism, or they want the robust aspects of this muscular engagement, we should present all of these ways. It’s the combination of them that gives us our best access to truth."

PETER GALISON is a science historian; Joseph Pellegrino University Professor and co-founder of the Black Hole Initiative at Harvard University; and author of Einstein's Clocks and Poincaré’s Maps: Empires of Time. Peter Galison's Edge Bio Page

 



AI That Evolves in the Wild

[8.14.19]

I’m interested not in domesticated AI—the stuff that people are trying to sell—but in wild AI, AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this at which Stanislaw Ulam said to everybody in the room—they were all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" Logic is a higher-level symptom; it’s not how the brain works. All those guys knew full well that the brain was not fundamentally logical.

We’re in a transition similar to the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943, at a time of transition when the world was full of analog electronics. At the end of World War II we had built all these vacuum tubes, and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history: we have all this digital equipment, all these machines, and most of the time they’re doing nothing except waiting for the next single instruction. The funny thing is, now it’s happening without people doing it intentionally. Then, there was a very deliberate group of people who said, "Let’s build digital machines." Now, I believe, we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page



The Language of Mind

[8.8.19]

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.

Say you do something that negatively affects an AI, something that in an ordinary human would correspond to damage and pain. Your AI is going to say, "Please don’t do that. That’s very bad." Introspectively, it has a model that recognizes someone has caused one of those states it calls pain. Is it going to be an inevitable consequence of introspective self-models in AI that they start to model themselves as having something like consciousness? My own suspicion is that there's something about the mechanisms of self-modeling and introspection that will naturally lead to these intuitions, such that an AI will model itself as being conscious. The next step is whether an AI of this kind is going to naturally experience consciousness as somehow puzzling, as something that is potentially hard to square with basic underlying mechanisms and hard to explain.

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. David Chalmers's Edge Bio Page


