Special Events

The Cul-de-Sac of the Computational Metaphor

https://vimeo.com/299532050

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

The Possible Minds Conference

I am puzzled by the number of references to what AI “is” and what it “cannot do” when in fact the new AI is less than ten years old and is moving so fast that references to it in the present tense are dated almost before they are uttered. The statements that AI doesn’t know what it’s talking about or is not enjoying itself are trivial if they refer to the present and undefended if they refer to the medium-range future—say 30 years. —Daniel Kahneman

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. In absentia: Andy Clark, George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan


INTRODUCTION
by Venki Ramakrishnan

The field of machine learning and AI is changing at such a rapid pace that we cannot foresee what new technical breakthroughs lie ahead, where the technology will lead us or the ways in which it will completely transform society. So it is appropriate to take a regular look at the landscape to see where we are, what lies ahead, where we should be going and, just as importantly, what we should be avoiding as a society. We want to bring a mix of people with deep expertise in the technology as well as broad thinkers from a variety of disciplines to make regular critical assessments of the state and future of AI. 

Venki Ramakrishnan, President of the Royal Society and Nobel Laureate in Chemistry, 2009, is Group Leader & Former Deputy Director, MRC Laboratory of Molecular Biology; Author, Gene Machine: The Race to Decipher the Secrets of the Ribosome.  


[ED. NOTE: In recent months, Edge has published the fifteen individual talks and discussions from its two-and-a-half-day Possible Minds Conference held in Morris, CT, an update from the field following the publication of the group-authored book Possible Minds: Twenty-Five Ways of Looking at AI. As a special event for the long Thanksgiving weekend, we are pleased to publish the complete conference—more than ten hours of audio and video, as well as a downloadable PDF of the 77,500-word manuscript. Enjoy.]

—John Brockman, Editor, Edge

Is Superintelligence Impossible?

On Possible Minds: Philosophy and AI with Daniel C. Dennett and David Chalmers
John Brockman
[4.10.19]

[ED. NOTE: On Saturday, March 9th, more than 1200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett: "Is Superintelligence Impossible?", the next event in Edge's ongoing "Possible Minds Project." Watch the video, listen to the EdgeCast, read the transcript. Thanks to physicist, artist, author, and Edgie Janna Levin, Director of Sciences at Pioneer Works, who presented the event with the support of Science Sandbox, a Simons Foundation initiative. —JB]


Reality Club Discussion: Andy Clark


Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett

One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be. Starting with the actual minds, I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been in there. Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the “hard problem” of consciousness;  Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained, and, most recently, From Bacteria to Bach and Back: The Evolution of Minds;  John Brockman, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture; editor of the Edge Annual Question book series, and Possible Minds: 25 Ways of Looking at AI.

 

The Overdue Debate


Front Page

DEFGH No. 63, Friday, March 15, 2019.

Collage: Stefan Dimitrov

The Ghost in the Machine
Artificial intelligence inspires wild fantasies, but remains hard to imagine. An SZ series brings clarity.

__________________________________________________
__________________________________________________

Artificial intelligence:
A new series brings science and culture together to fathom the inexplicable

_________________________________________________

Possible Minds

25 Ways of Looking at AI
John Brockman
[1.30.19]

 


Brockman and Minsky, 1985 (Photo: Jean Pigozzi)

"A fascinating map of AI's likely future and an overview of the difficult choices that will shape it. . . . A sense of respect for the human mind and humility about its limitations runs through the essays in Possible Minds." —Foreign Affairs

"Intelligences born and intelligences made have a lot to offer each other. For that beneficial blend to occur, the contextual framing that the voices in this book spell out will be crucial." 
—Stewart Brand

"While the [Possible Minds] authors disagree on the answers, they agree on the major question: what dangers might AI present to humankind? Within that framework, the essays offer a host of novel ideas. . . . Enlightening, entertaining, and exciting reading."—Publishers Weekly

"Pithy essays on artificial intelligence. . . . Readers . . . will not find a better introduction than this book."—Kirkus 

"Brockman, founder of the online salon Edge.org, corrals 25 big brains—ranging from Nobel Prize-winning physicist Frank Wilczek to roboticist extraordinaire Rodney Brooks—to opine on this exhilarating, terrifying future."
Inc. ("10 Business Books You Need to Read in 2019")



Emergences

W. Daniel Hillis
[9.4.19]

My perspective is closest to George Dyson's. I liked his introducing himself as being interested in intelligence in the wild. I will copy George in that. That is what I’m interested in, too, but it’s with a perspective that makes it all in the wild. My interest in AI comes from a broader interest in a much more interesting question to which I have no answers (and can barely articulate the question): How do lots of simple things interacting emerge into something more complicated? Then how does that create the next system out of which that happens, and so on?

Consider the phenomenon, for instance, of chemicals organizing themselves into life, or single-cell organisms organizing themselves into multi-cellular organisms, or individual people organizing themselves into a society with language and things like that—I suspect that there’s more of that organization to happen. The AI that I’m interested in is a higher level of that and, like George, I suspect that not only will it happen, but it probably already is happening, and we’re going to have a lot of trouble perceiving it as it happens. We have trouble perceiving it because of this notion, which Ian McEwan so beautifully described, of the Golem being such a compelling idea that we get distracted by it, and we imagine it to be like that. That blinds us to being able to see it as it really is emerging. Not that I think such things are impossible, but I don’t think those are going to be the first to emerge.

There's a pattern in all of those emergences: they start out as analog systems of interaction—chemicals, for instance, form chains of circular pathways that metabolize material from the outside world—and what always happens going up to the next level is that those analog systems invent a digital system, like DNA, where they start to abstract out the information processing. So, they put the information processing in a separate system of its own. From then on, the interesting story becomes the story of the information processing; the complexity happens more in the information processing system. That certainly happens again with multi-cellular organisms. The information processing system is neurons, and they eventually go from just a bunch of cells to having this special information processing system, and that's where the action is—in the brains and behavior. It drags along and makes much more complicated bodies much more interesting once you have behavior.

W. DANIEL HILLIS is an inventor, entrepreneur, and computer scientist, Judge Widney Professor of Engineering and Medicine at USC, and author of The Pattern on the Stone: The Simple Ideas That Make Computers Work. W. Daniel Hillis's Edge Bio Page

Communal Intelligence

Seth Lloyd
[10.28.19]

We haven't talked about the socialization of intelligence very much. We talked a lot about intelligence as being individual human things, yet the thing that distinguishes humans from other animals is our possession of human language, which allows us both to think and communicate in ways that other animals don’t appear to be able to. This gives us a cooperative power as a global organism, which is causing lots of trouble. If I were another species, I’d be pretty damn pissed off right now. What makes human beings effective is not their individual intelligences, though there are many very intelligent people in this room, but their communal intelligence.

SETH LLOYD is a theoretical physicist at MIT; Nam P. Suh Professor in the Department of Mechanical Engineering; external professor at the Santa Fe Institute; and author of Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Seth Lloyd's Edge Bio Page



COMMUNAL INTELLIGENCE

SETH LLOYD: I’m a bit embarrassed because I’ve benefited so much by going close to last in this meeting. I’ve heard so many wonderful things and so many great ideas, which I will shamelessly parrot while trying to ascribe them to the people who mentioned them. This has been a fantastic meeting.

When John first talked about doing something like the Macy Conferences, I didn’t know what they were, so I went back and started to look at that. It was remarkable how prescient the ideas seemed to be. I couldn’t understand that, because why was it that all of a sudden we’re now extremely worried and interested in AI and devices that mimic neural networks? People were worried about it back then, and yet for decades it didn’t seem like people were that worried about this.

Epistemic Virtues

Peter Galison
[8.21.19]

I’m interested in the question of epistemic virtues, their diversity, and the epistemic fears that they’re designed to address. By epistemic I mean how we gain and secure knowledge. What I’d like to do here is talk about what we might be afraid of, where our knowledge might go astray, what aspects of our fears about what might misfire can be addressed by particular strategies, and then to see how that’s changed quite radically over time.

~~

James Clerk Maxwell, just by way of background, had done these very mechanical representations of electromagnetism—gears and ball bearings, and strings and rubber bands. He loved doing that. He’s also the author of the most abstract treatise on electricity and magnetism, which used the least action principle and doesn’t go by the pictorial, sensorial path at all. In this very short essay, he wrote, "Some people gain their understanding of the world by symbols and mathematics. Others gain their understanding by pure geometry and space. There are some others that find an acceleration in the muscular effort that is brought to them in understanding, in feeling the force of objects moving through the world. What they want are words of power that stir their souls like the memory of childhood. For the sake of persons of these different types, whether they want the paleness and tenuity of mathematical symbolism, or they want the robust aspects of this muscular engagement, we should present all of these ways. It’s the combination of them that gives us our best access to truth."

PETER GALISON is a science historian; Joseph Pellegrino University Professor and co-founder of the Black Hole Initiative at Harvard University; and author of Einstein's Clocks and Poincaré’s Maps: Empires of Time. Peter Galison's Edge Bio Page

AI That Evolves in the Wild

George Dyson
[8.14.19]

I’m interested not in domesticated AI—the stuff that people are trying to sell. I'm interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they’re all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" It’s a higher-level symptom. It’s not how the brain works. All those guys knew fully well that the brain was not fundamentally logical.

We’re in a transition similar to the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943 at a time of transition, when the world was full of analog electronics at the end of World War II. We had built all these vacuum tubes and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history where we have all this digital equipment, all these machines. Most of the time they’re doing nothing except waiting for the next single instruction. The funny thing is, now it’s happening without people doing it intentionally. Back then we had a very deliberate group of people who said, "Let’s build digital machines." Now, I believe we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page
