Special Events

The Language of Mind


Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.

Say you do something that negatively affects an AI, something that in an ordinary human would correspond to damage and pain. Your AI is going to say, "Please don’t do that. That’s very bad." Introspectively, it’s a model that recognizes someone has caused one of those states it calls pain. Is it going to be an inevitable consequence of introspective self-models in AI that they start to model themselves as having something like consciousness? My own suspicion is that there's something about the mechanisms of self-modeling and introspection that are going to naturally lead to these intuitions, where an AI will model itself as being conscious. The next step is whether an AI of this kind is going to naturally experience consciousness as somehow puzzling, as something that potentially is hard to square with basic underlying mechanisms and hard to explain.

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. David Chalmers's Edge Bio Page

Morphogenesis for the Design of Design


As we work on self-reproducing assemblers and on writing software that looks like hardware and respects geometry, the two threads meet in morphogenesis. This is the thing I’m most excited about right now: the design of design. Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers. It’s one of the oldest parts of the genome. Hox genes are an example. It’s essentially the only part of the genome where the spatial order matters. It gets read off as a program, and the program never represents the physical thing it’s constructing. The morphogenes are a program that specifies morphogens that do things like climb gradients and break symmetry; the program never represents the thing it’s constructing, but the morphogens, following the morphogenes, give rise to you.

What’s going on in morphogenesis, in part, is compression. A billion bases can specify a trillion cells, but the more interesting thing that’s going on is that almost anything you perturb in the genome is either inconsequential or fatal. The morphogenes are a curated search space where rearranging them is interesting—you go from gills to wings to flippers. The heart of success in machine learning, however you represent it, is function representation. The real progress in machine learning is in learning representation. How you search hasn’t changed all that much, but how you represent the search has. These morphogenes are a beautiful way to represent design. Technology today doesn’t do this; it generally doesn’t distinguish genotype and phenotype, in the sense that you explicitly represent the thing you’re designing. In morphogenesis, you never represent the thing you’re designing; it’s done in a beautifully abstract way. For these self-reproducing assemblers, what we’re building is morphogenesis for the design of design. Rather than a combinatorial search over billions of degrees of freedom, you search over these developmental programs. This is one of the core research questions we’re looking at.
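The genotype/phenotype split can be sketched in a few lines. This is a toy illustration with made-up rules, not Gershenfeld's actual system: the "genome" here is just two numbers parameterizing a gradient-reading rule, the grown body is never stored in the genome, and the search runs over the tiny program space rather than over cells.

```python
# A "genome" is a tiny developmental program: (period, threshold).
# The phenotype, a row of differentiated cells, is produced by running
# the program; it is never represented in the genome itself.
N_CELLS = 60

def develop(genome):
    """Each cell reads a local morphogen gradient and differentiates
    when the level is below the genome's threshold."""
    period, threshold = genome
    return [1 if (i % period) / period < threshold else 0
            for i in range(N_CELLS)]

def fitness(phenotype):
    """Count the 'digits' (runs of differentiated cells); target is five."""
    runs = sum(1 for i, c in enumerate(phenotype)
               if c == 1 and (i == 0 or phenotype[i - 1] == 0))
    return -abs(5 - runs)

# Search over the 2-parameter program space, not the 60-cell body.
candidates = [(p, t) for p in range(2, 30) for t in (0.2, 0.4, 0.6)]
best = max(candidates, key=lambda g: fitness(develop(g)))
```

Perturbing the program reshapes the whole body plan at once (change `period` and you change the number of digits), which is the sense in which rearranging morphogenes is interesting while searching over the cells directly would mean a combinatorial space where most perturbations do nothing useful.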

NEIL GERSHENFELD is the director of MIT’s Center for Bits and Atoms; founder of the global fab lab network; the author of FAB; and co-author (with Alan Gershenfeld & Joel Cutcher-Gershenfeld) of Designing Reality. Neil Gershenfeld's Edge Bio Page

Ecology of Intelligence


I don't think a singularity is imminent, although there has been quite a bit of talk about it. The prospect of artificial intelligence outstripping human intelligence isn't imminent because the engineering substrate just isn't there, and I don't see the immediate prospects of getting there. I haven't said much about quantum computing; other people will. But if you're waiting for quantum computing to create a singularity, you're misguided. That crossover, fortunately, will take decades, if not centuries.

There’s this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced, at first in relatively trivial ways, with smartphones and access to the Internet, but the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do.

FRANK WILCZEK is the Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and author of A Beautiful Question: Finding Nature’s Deep Design. Frank Wilczek's Edge Bio Page

Humans: Doing More With Less


Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans who are surrounding it—which it thinks of as cute little pets—are trying to achieve so that it is then able to act in a way that is consistent with what those human beings might want. That system needs to be able to simulate what an agent with greater constraints on its cognitive resources should be doing, and it should be able to make inferences, like the fact that we’re not able to calculate the zeros of the Riemann zeta function or discover a cure for cancer. It doesn’t mean we’re not interested in those things; it’s just a consequence of the cognitive limitations that we have.

As a parent of two small children, I face this problem all the time: trying to figure out what my kids want—kids who are operating in an entirely different mode of computation—and having to build a kind of internal model of how a toddler’s mind works, such that it’s possible to unravel that model and work out the particular motivation behind the very strange pattern of actions they’re taking.

Both from the perspective of understanding human cognition and from the perspective of being able to build AI systems that can understand human cognition, it’s desirable to have a better model of how rational agents should act when those agents have limited cognitive resources. That’s something I’ve been working on for the last few years. We have an approach to thinking about this that we call resource rationality, which is closely related to ideas being proposed in the artificial intelligence literature, among them the notion of bounded optimality, proposed by Stuart Russell.
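A toy illustration of the core trade-off (my own construction, not the actual resource-rationality or bounded-optimality formalisms): suppose each extra unit of deliberation shrinks expected decision error like variance/n, but thinking costs something per unit. A resource-rational agent picks the deliberation budget that balances the two, rather than computing forever.

```python
def resource_rational_budget(variance, cost, max_n=10_000):
    """Pick the deliberation budget n that maximizes expected utility:
    residual error shrinks like variance / n, while thinking costs cost * n."""
    def utility(n):
        return -(variance / n) - cost * n
    return max(range(1, max_n + 1), key=utility)

# High stakes (large variance): think longer. Expensive thought: act sooner.
high_stakes = resource_rational_budget(variance=100.0, cost=0.01)
cheap_thought = resource_rational_budget(variance=100.0, cost=1.0)
```

The optimum lands near the square root of variance/cost, so an agent facing high-stakes decisions deliberates an order of magnitude longer than one for whom computation is dear; failing to cure cancer is then a consequence of the budget, not of the goals.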

TOM GRIFFITHS is the Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio Page

A Separate Kind of Intelligence


It looks as if there’s a general relationship between the very fact of childhood and the fact of intelligence. That might be informative if one of the things that we’re trying to do is create artificial intelligences or understand artificial intelligences. In neuroscience, you see this pattern of development where you start out with a very plastic system with lots of local connections, and then you have a tipping point where that turns into a system that has fewer but much stronger, more long-distance connections. It isn’t just a continuous process of development. So, you start out with a system that’s very plastic but not very efficient, and that turns into a system that’s very efficient but not very plastic and flexible.

It’s interesting that that isn’t an architecture that’s typically been used in AI. But it’s an architecture that biology seems to use over and over again to implement intelligent systems. One of the questions you could ask is, how come? Why would you see this relationship? Why would you see this characteristic neural architecture, especially for highly intelligent species?

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. Alison Gopnik's Edge Bio Page

The Cul-de-Sac of the Computational Metaphor


Have we gotten into a cul-de-sac in trying to understand animals as machines, from the combination of digital thinking and the crack cocaine of computation über alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page

Serpentine Galleries Extinction Marathon


This year's collaboration with the Serpentine Gallery in London was part of the "Extinction Marathon: Visions of the Future" event, which took place in the Serpentine Sackler Gallery's extension, designed by Zaha Hadid, on October 18th. The entire event, which was live-streamed, will be presented on Edge.

An EDGE Conversation: "DE-EXTINCTION": Stewart Brand & Richard Prum
with Hans Ulrich Obrist & John Brockman

Does the prospect of "de-extinction" change how we think about extinction? Conservation science is shifting from being species-centric to function-centric, focusing on the overall health of ecosystems. Does the extinction of a species leave a "gap in nature" that can only be filled by returning the species to life and to the wild? Or will a functionally close relative serve? Is a de-extincted species really nothing more than a functionally close relative anyway? If it is too difficult and expensive to revive every extinct species, what are the criteria for deciding which ones to work on? Humans are the ones deciding. What ethics and aesthetics should guide those decisions?


STEWART BRAND is the Founder of "The Whole Earth Catalog" and Co-founder of The Long Now Foundation and Revive and Restore; Author, Whole Earth Discipline.

Stewart Brand's Edge Bio Page

RICHARD PRUM is an Evolutionary Ornithologist at Yale University, where he is the Curator of Ornithology and Head Curator of Vertebrate Zoology in the Yale Peabody Museum of Natural History. He is working on a book about duck sex, aesthetic evolution, and the origin of beauty.

Richard Prum's Edge Bio Page

"EDGIES ON EXTINCTION": 10 Minute talks by Helena Cronin, Jennifer Jacquet, Steve Jones, and Chiara Marletto, and an EDGE discussion joined by Molly Crockett, Hans Ulrich Obrist, and John Brockman.

I dream about the sea cow and imagine what it would be like to see in the wild, but the case of the Pinta Island giant tortoise was a particularly strange one for me personally, because I had spent many afternoons in Lonesome George’s den with him in the Galapagos Islands when I was a volunteer with the Sea Shepherd Conservation Society. If any of you have visited the Galapagos, you know that you can even feed the giant tortoises at the Charles Darwin Research Station. This is Lonesome George here.
He lived to a ripe old age but failed, as they pointed out many times, to reproduce. Just recently, in 2012, he died, and with him went the last of his species. He was couriered to the American Museum of Natural History and taxidermied there. A couple of weeks ago his body was unveiled. This was the unveiling that I attended, and at this exact moment in time I was feeling a little like I am now: nervous and kind of nauseous, while everyone else seemed calm. I wasn’t prepared to see Lonesome George. Here he is taxidermied, looking out over Central Park, which was strange as well. At that moment, I realized that I knew the last individual of this species to go extinct. That presents this strange predicament for us in the 21st century—this idea of conspicuous extinction.

JENNIFER JACQUET is an Assistant Professor of Environmental Studies at NYU researching cooperation and the tragedy of the commons; Author, Is Shame Necessary? Jennifer Jacquet's Edge Bio Page

~ ~ ~ ~

There is a new fundamental theory of physics called constructor theory, proposed by David Deutsch, who pioneered the theory of the universal quantum computer. David and I are working on this theory together. The fundamental idea is that we formulate all laws of physics in terms of which tasks are possible, which are impossible, and why. In this theory we have an exact physical characterization of an object that has those properties, and we call that knowledge. Note that knowledge here means knowledge without a knowing subject, as in the theory of knowledge of the philosopher Karl Popper.

We’ve just come to the conclusion that the fact that extinction is possible means that knowledge can be instantiated in our physical world. In fact, extinction is the very process by which that knowledge is disabled in its ability to remain instantiated in physical systems because there are problems that it cannot solve. With any luck that bit of knowledge can be replaced with a better one. [Continue...]

CHIARA MARLETTO is a Junior Research Fellow at Wolfson College and Postdoctoral Research Assistant in the Materials Department at the University of Oxford. Chiara Marletto's Edge Bio Page

~ ~ ~ ~

What I wanted to talk about is somewhat of a parallel of that in human populations. If you were to go to a textbook on human biology from the time of Darwin or a bit later, you would certainly get an image that looked a bit like this. This is an image of the so-called races of humankind—racial types, as they called them. I’m not going to go into the question of whether there are real races of humankind because there aren’t. It’s interesting to note that until quite recently people assumed, and scientists assumed too, that the human species was divided into distinct groups that were biologically different from each other and had been isolated from each other for a long, long time.

Well, to some extent that was true. Until quite recently, human populations were isolated from each other. That’s changing quite quickly. [Continue...]

STEVE JONES is a Professor of Genetics at the Galton Laboratory of University College London; Author, The Language of the Genes. Steve Jones's Edge Bio Page

~ ~ ~ ~

... A strange thing happened on the way to a better world in pursuit of an admirable quest, that is, a world free of sex discrimination where you’re judged on your own qualities and not your sex. Truth and falsity went topsy-turvy. The truth—the science of sex differences—became dangerous and unmentionable, and in its place the conventional wisdom, which is a ragbag of ideas that have long been extinct but are kept ghoulishly alive by popularity, became the entrenched orthodoxy influencing public thinking, agendas, and policy-making, and completely crowding out science and sense.

My aim is to show you why the current orthodoxy should be abandoned and why, if you really care about a fairer world, the science does matter. It matters profoundly. I’m going to take two examples, both about the professions, because they very well epitomize the orthodox litany: how society systematically discriminates against women, and how at work they are victims of pervasive sexism. [Continue...]

HELENA CRONIN is the Co-Director of LSE's Centre for Philosophy of Natural and Social Science; Author, The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today. Helena Cronin's Edge Bio Page

~ ~ ~ ~

MOLLY CROCKETT is an Associate Professor in the Department of Experimental Psychology at the University of Oxford; Wellcome Trust Postdoctoral Fellow at the Wellcome Trust Centre for Neuroimaging. Molly Crockett's Edge Bio Page


HANS ULRICH OBRIST is the Co-director of the Serpentine Gallery in London; Author, Ways of Curating. Hans Ulrich Obrist's Edge Bio Page

JOHN BROCKMAN is the Editor and Publisher of Edge.org; Chairman of Brockman, Inc.; Author, By the Late John Brockman, The Third Culture. John Brockman's Edge Bio Page


Previous Edge-Serpentine collaborations have included:

"Formulae for the 21st Century" (2007)
"The Table-Top Experiment Marathon" (2007)
"Maps For The 21st Century" (2010)
"Information Gardens" (2011)  


Edge's own contribution to the conversation will be published in February:

