Special Events

AI That Evolves in the Wild

https://vimeo.com/304896827

I’m interested not in domesticated AI—the stuff that people are trying to sell. I'm interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they’re all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" It’s a higher-level symptom. It’s not how the brain works. All those guys knew full well that the brain was not fundamentally logical.

The Language of Mind

https://vimeo.com/304892724

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective.

Morphogenesis for the Design of Design

https://vimeo.com/304836541

As we work on the self-reproducing assembler and on writing software that looks like hardware and respects geometry, the two meet in morphogenesis. This is the thing I’m most excited about right now: the design of design. Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers. It’s one of the oldest parts of the genome. Hox genes are an example. It’s essentially the only part of the genome where the spatial order matters.

Ecology of Intelligence

https://vimeo.com/299924925

I don't think a singularity is imminent, although there has been quite a bit of talk about it. I don't think the prospect of artificial intelligence outstripping human intelligence is imminent because the engineering substrate just isn’t there, and I don't see the immediate prospects of getting there. I haven’t said much about quantum computing; other people will. But if you’re waiting for quantum computing to create a singularity, you’re misguided. That crossover, fortunately, will take decades, if not centuries.

Humans: Doing More With Less

https://vimeo.com/300785363

Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans who are surrounding it—which it thinks of as cute little pets—are trying to achieve so that it is then able to act in a way that is consistent with what those human beings might want.

A Separate Kind of Intelligence

https://vimeo.com/300779422

Back in 1950, Turing argued that for a genuine AI we might do better by simulating a child’s mind than an adult’s. This insight has particular resonance given recent work on "life history" theory in evolutionary biology—the developmental trajectory of a species, particularly the length of its childhood, is highly correlated with adult intelligence and flexibility across a wide range of species. This trajectory is also reflected in brain development, with its distinctive transition from early proliferation to later pruning.

Collaboration and the Evolution of Disciplines

https://vimeo.com/300773799

Cooperation achieves its beneficial effects by improving communication, promoting gains from specialization, enhancing organizational effectiveness, and reducing the risks of harmful conflict. Members of an institutionalized academic discipline jointly benefit in all these ways. Unfortunately, members of different disciplines typically do not. The boundaries of most disciplines were largely set 100 (plus or minus 50) years ago, and efforts to redraw the boundaries (e.g., at Irvine and Carnegie Mellon) have not met with much success.

The Cul-de-Sac of the Computational Metaphor

https://vimeo.com/299532050

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

Is Superintelligence Impossible?

On Possible Minds: Philosophy and AI with Daniel C. Dennett and David Chalmers
[4.10.19]

[ED. NOTE: On Saturday, March 9th, more than 1200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett: "Is Superintelligence Impossible?", the next event in Edge's ongoing "Possible Minds Project." Watch the video, listen to the EdgeCast, read the transcript. Thanks to physicist, artist, author, and Edgie Janna Levin, Director of Sciences at Pioneer Works, who presented the event with the support of Science Sandbox, a Simons Foundation initiative. —JB]


Reality Club Discussion: Andy Clark


Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett

One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be. Starting with the actual minds, I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been in there. Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the “hard problem” of consciousness.

Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds.

John Brockman, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture, and editor of the Edge Annual Question book series and Possible Minds: 25 Ways of Looking at AI.

 
