AI That Evolves in the Wild

George Dyson [8.14.19]

I’m interested not in domesticated AI—the stuff that people are trying to sell. I’m interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they’re all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" Logic is a higher-level symptom; it’s not how the brain works. All those guys knew full well that the brain was not fundamentally logical.

We’re in a transition similar to the one around the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943, at a time of transition: at the end of World War II the world was full of analog electronics. We had built all these vacuum tubes, and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history, where we have all this digital equipment, all these machines, and most of the time they’re doing nothing except waiting for the next single instruction. The funny thing is that now it’s happening without anyone intending it. Back then, a very deliberate group of people said, "Let’s build digital machines." Now, I believe, we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page

[ED. NOTE:] As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was at Winvian Farm in Morris, CT. Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, transcripts.

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. Project participants in absentia: George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan, Andy Clark.


AI THAT EVOLVES IN THE WILD

GEORGE DYSON: I’m not a scientist. I’ve never done science. I dropped out of high school. But I tell stories. Ian tells stories that can take us into the future wherever he wants to go, and I go into the past and find the stories that people forgot.

Alison Gopnik mentioned how nobody reads past that one sentence in Turing’s 1950 paper. They also never read past his 1936 paper to his 1939 "Systems of Logic Based on Ordinals," which is much more interesting. It’s about non-deterministic computers: not the universal Turing machine, but the oracle machine, the second machine, the one he wrote his thesis on at Princeton, a non-deterministic machine. By then he had already realized that the deterministic machines were not that interesting; it was the non-deterministic machines that were interesting. Similarly, we talk about the von Neumann architecture, but von Neumann holds only one patent, and that patent is for a non-von Neumann architecture. It’s for a neuromorphic computer that can do anything, and he explains what it can do, because to get a patent you have to show what the thing does. And nobody reads that patent.
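The oracle machine is easy to miss because it is such a small extension of the 1936 formalism. Here is a minimal sketch in Python (my construction, not anything in Turing’s papers or Dyson’s talk) of the idea: an ordinary deterministic program whose behavior depends on answers from an oracle it cannot compute itself.

```python
# Minimal sketch (illustrative only): a deterministic computation
# parameterized by an "oracle" it can query but cannot compute itself.
# Names and structure here are hypothetical, not Turing's formalism.

from typing import Callable, Iterable

def oracle_machine(inputs: Iterable[int], oracle: Callable[[int], bool]) -> list[int]:
    """Run an ordinary deterministic filter, except that one step defers
    to the oracle. The machine's behavior is not fixed by its own rules
    alone; it depends on answers supplied from outside."""
    return [x for x in inputs if oracle(x)]

# With a computable oracle this is just a normal program...
evens = oracle_machine(range(10), lambda n: n % 2 == 0)
print(evens)  # [0, 2, 4, 6, 8]

# ...but the same machine, handed an uncomputable oracle (say, one that
# answered the halting problem), would exceed any universal Turing machine.
```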

A Year of Conversations


[ED NOTE: Everybody’s busy these days. That’s why we have August. So, take the time to check out the EDGE conversations you may have missed that have taken place on these pages over the past 12 months. —JB]

(Conversations): Neil Gershenfeld · Frank Wilczek · Timothy Taylor · Tom Griffiths · Alison Gopnik · Robert Axelrod · Barbara Tversky · Caroline Jones · Freeman Dyson · Andy Clark · Stephen Wolfram · Daniel Kahneman · Rodney Brooks · Alexander Rose · Ian McEwan · David Chalmers & Daniel Dennett · Michele Gelfand · Freeman Dyson · Lisa Mosconi · Susan Schneider · Jonathan Rodden · George Dyson · Elaine Pagels · Peter Galison · Paul Allen/Eddie Currie · Karl Sigmund · J. Doyne Farmer

The Language of Mind

David Chalmers [8.8.19]

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.

Say you do something that negatively affects an AI, something that in an ordinary human would correspond to damage and pain. Your AI is going to say, "Please don’t do that. That’s very bad." Introspectively, its model registers that someone has caused one of those states it calls pain. Is it going to be an inevitable consequence of introspective self-models in AI that they start to model themselves as having something like consciousness? My own suspicion is that there’s something about the mechanisms of self-modeling and introspection that is going to naturally lead to these intuitions, where an AI will model itself as being conscious. The next step is whether an AI of this kind is going to naturally experience consciousness as somehow puzzling, as something that is potentially hard to square with basic underlying mechanisms and hard to explain.
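One way to make the introspective-self-model point concrete is a toy agent whose self-model can report a labeled state without any access to the mechanism producing it. The sketch below is entirely hypothetical and is not Chalmers’s proposal; every name in it is invented for illustration.

```python
# Toy illustration (not Chalmers's proposal): an agent whose introspective
# self-model reports coarse labeled states without access to the
# mechanisms that produce them.

class Agent:
    def __init__(self):
        self._damage_signal = 0.0   # underlying mechanism, hidden from the self-model

    def perturb(self, severity: float):
        self._damage_signal += severity

    def introspect(self) -> str:
        # The self-model sees only a label, never the signal itself, so
        # from the "first-person" view the state looks primitive and hard
        # to square with any underlying mechanism.
        return "pain" if self._damage_signal > 0.5 else "ok"

a = Agent()
a.perturb(0.7)
print(a.introspect())  # "pain" -- reported with no view of _damage_signal
```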

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. David Chalmers's Edge Bio Page

Morphogenesis for the Design of Design

Neil Gershenfeld [7.31.19]

As we work on the self-reproducing assembler, and on writing software that looks like hardware and respects geometry, the two meet in morphogenesis. This is the thing I’m most excited about right now: the design of design. Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers. It’s one of the oldest parts of the genome. Hox genes are an example. It’s essentially the only part of the genome where the spatial order matters. It gets read off as a program, and the program never represents the physical thing it’s constructing. The morphogenes are a program that specifies morphogens that do things like climb gradients and break symmetry; the program never represents the thing it’s constructing, but the morphogens, following the morphogenes, give rise to you.

What’s going on in morphogenesis, in part, is compression: a billion bases can specify a trillion cells. But the more interesting thing going on is that almost anything you perturb in the genome is either inconsequential or fatal. The morphogenes are a curated search space where rearranging them is interesting—you go from gills to wings to flippers. The heart of success in machine learning, however you represent it, is function representation. The real progress in machine learning is in learning representation. How you search hasn’t changed all that much, but how you represent the search has. These morphogenes are a beautiful way to represent design. Technology today generally doesn’t do this: it doesn’t distinguish genotype and phenotype, in that you explicitly represent the thing you’re designing. In morphogenesis, you never represent the thing you’re designing; it’s done in a beautifully abstract way. For these self-reproducing assemblers, what we’re building is morphogenesis for the design of design. Rather than a combinatorial search over billions of degrees of freedom, you search over these developmental programs. This is one of the core research questions we’re looking at.
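A developmental program in this sense can be illustrated with an L-system, a standard rewriting formalism borrowed here for the analogy (this is my sketch, not Gershenfeld’s code). The "genome" below stores only rewrite rules; the final form is never represented anywhere, it just grows when the program runs.

```python
# Illustrative sketch: an L-system-style developmental program. The
# "genome" stores rewrite rules; nowhere does it represent the final
# string. Running the program grows the phenotype.

genome = {"A": "AB", "B": "A"}   # developmental rules, not a description of the result

def develop(axiom: str, rules: dict[str, str], steps: int) -> str:
    form = axiom
    for _ in range(steps):
        form = "".join(rules.get(symbol, symbol) for symbol in form)
    return form

print(develop("A", genome, 5))  # "ABAABABAABAAB" -- Fibonacci growth from two rules

# Perturbing a rule ("rearranging the morphogenes") changes the whole
# developmental trajectory, and searching over rules is a far smaller
# space than searching over all possible final forms.
```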

NEIL GERSHENFELD is the director of MIT’s Center for Bits and Atoms; founder of the global fab lab network; the author of FAB; and co-author (with Alan Gershenfeld & Joel Cutcher-Gershenfeld) of Designing Reality. Neil Gershenfeld's Edge Bio Page

Ecology of Intelligence

Frank Wilczek [7.23.19]

I don't think a singularity is imminent, although there has been quite a bit of talk about it. I don't think the prospect of artificial intelligence outstripping human intelligence is imminent because the engineering substrate just isn’t there, and I don't see the immediate prospects of getting there. I haven’t said much about quantum computing; other people will. But if you’re waiting for quantum computing to create a singularity, you’re misguided. That crossover, fortunately, will take decades, if not centuries.

There’s this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced in different, and at first relatively trivial, ways with smartphones and access to the Internet, but the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do.

FRANK WILCZEK is the Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and author of A Beautiful Question: Finding Nature’s Deep Design. Frank Wilczek's Edge Bio Page

Polythetics and the Boeing 737 MAX

Timothy Taylor [7.16.19]

A 737-badged Boeing aircraft was first certified for flight by the US Federal Aviation Administration in 1967. The aircraft was 28.6 m long and carried up to 103 passengers; in 2019, the distant descendant of that aircraft model, the 737 MAX-10, was 43.8 m long and carried 230 passengers. In between, there have been all sorts of civilian and military variants, and the plane (‘the plane’) was immensely successful (so that in 2005 one quarter of all large commercial airliners worldwide carried the 737 badge). However, certain decisions, made at the very outset, constrained how aircraft of this kind could evolve. Now, I realize that by talking about descent (in a genealogical sense) and evolution (in the sense of gradual change over time), I am already potentially getting caught up in a biological metaphor—almost as if I thought 737s got together and had babies, each generation similar to but different from themselves. Manufacturing firms that make cars, or aircraft, or computers use the terms ‘generation’, ‘next generation’ and so on to describe salient step changes in parts of a design chain that has both continuities and discontinuities. But how do we measure these changes, and who decides (at Boeing or elsewhere) which changes are radically discontinuous? When does one artefact type become another?
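One hedged way to make "measuring the changes" concrete is a polythetic similarity score, in which membership of a type rests on overlapping features rather than any single defining one. The sketch below, with invented feature sets, is my illustration rather than anything from Taylor’s essay.

```python
# Toy polythetic measure (illustrative only; feature sets are invented):
# type membership by feature overlap, not by one defining feature.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

b737_100 = {"low wing", "two engines", "JT8D engines", "103 seats", "analog cockpit"}
b737_max = {"low wing", "two engines", "LEAP engines", "230 seats", "glass cockpit", "MCAS"}
a320     = {"low wing", "two engines", "glass cockpit", "fly-by-wire"}

# The 1967 aircraft and the MAX share only 2 of 9 distinct features here;
# whether 0.22 still counts as "the same plane" is a decision, not a measurement.
print(round(jaccard(b737_100, b737_max), 2))  # 0.22
print(round(jaccard(b737_max, a320), 2))      # 0.43
```

On these invented features the MAX scores closer to a rival type than to its own 1967 ancestor, which is exactly the kind of discontinuity the badge conceals.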

TIMOTHY TAYLOR is a professor of the prehistory of humanity at the University of Vienna, and author of The Artificial Ape. Timothy Taylor's Edge Bio Page

[ED NOTE: Tim Taylor's piece is the third offering in our 2019 initiative, "The Edge Original Essay," in which we are commissioning recognized authors to write a new and original piece exclusively for publication by Edge. The first two pieces were "Childhood's End: The Digital Revolution Isn't Over But Has Turned Into Something Else" by George Dyson and "Biological and Cultural Evolution: Six Characters in Search of an Author" by Freeman Dyson. —JB]

Humans: Doing More With Less

Tom Griffiths [7.16.19]

Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans surrounding it—which it thinks of as cute little pets—are trying to achieve, so that it is then able to act in a way that is consistent with what those human beings might want. That system needs to be able to simulate what an agent with greater constraints on its cognitive resources should be doing, and it should be able to draw inferences such as: the fact that we’re not able to calculate the zeros of the Riemann zeta function or discover a cure for cancer doesn’t mean we’re not interested in those things; it’s just a consequence of the cognitive limitations that we have.

As a parent of two small children, I face this problem all the time: trying to figure out what my kids want, kids who are operating in an entirely different mode of computation, and having to build a kind of internal model of how a toddler’s mind works such that it’s possible to unravel that and work out the particular motivation behind the very strange pattern of actions they’re taking.

Both from the perspective of understanding human cognition and from the perspective of being able to build AI systems that can understand human cognition, it’s desirable for us to have a better model of how rational agents should act if those rational agents have limited cognitive resources. That’s something I’ve been working on for the last few years. We have an approach to thinking about this that we call resource rationality. And this is closely related to similar ideas that are being proposed in the artificial intelligence literature. One of these ideas is the notion of bounded optimality, proposed by Stuart Russell.
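The flavor of resource rationality can be caught in a toy model: an agent chooses how much computation to buy when each unit of thinking improves the decision but also costs something. This is a sketch with invented numbers, not Griffiths’s or Russell’s formalism.

```python
# Sketch of the resource-rational tradeoff (toy model, invented numbers):
# pick the amount of computation that maximizes expected decision quality
# minus the cost of thinking.

import math

def expected_accuracy(samples: int) -> float:
    # Diminishing returns: more mental samples give better decisions, but slowly.
    return 1.0 - math.exp(-0.5 * samples)

def net_utility(samples: int, cost_per_sample: float = 0.05) -> float:
    return expected_accuracy(samples) - cost_per_sample * samples

best = max(range(1, 21), key=net_utility)
print(best, round(net_utility(best), 3))  # 5 0.668

# An unboundedly rational agent would keep sampling; a resource-rational
# one stops early -- which is what its "limitations" look like from outside.
```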

TOM GRIFFITHS is the Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio Page

A Separate Kind of Intelligence

Alison Gopnik [7.10.19]

It looks as if there’s a general relationship between the very fact of childhood and the fact of intelligence. That might be informative if one of the things we’re trying to do is create artificial intelligences or understand artificial intelligences. In neuroscience, you see this pattern of development where you start out with a very plastic system with lots of local connections, and then you have a tipping point where that turns into a system that has fewer connections but much stronger, more long-distance ones. It isn’t just a continuous process of development. So, you start out with a system that’s very plastic but not very efficient, and that turns into a system that’s very efficient but not very plastic and flexible.

It’s interesting that that isn’t an architecture that’s typically been used in AI. But it’s an architecture that biology seems to use over and over again to implement intelligent systems. One of the questions you could ask is, how come? Why would you see this relationship? Why would you see this characteristic neural architecture, especially for highly intelligent species?
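One loose analogy for the plastic-then-efficient pattern (mine, not a model from the talk) is a search procedure with a cooling schedule: early on it accepts almost any move, and after a tipping point it commits.

```python
# Analogy only (not Gopnik's model): a cooling schedule makes a search
# very flexible early and very committed late, like the plastic-then-
# efficient pattern of neural development.

import math, random

random.seed(0)

def landscape(x: float) -> float:
    # A bumpy objective with one global peak; invented for illustration.
    return math.sin(3 * x) - (x - 2) ** 2 / 10

x = 0.0
temperature = 2.0          # "childhood": wild moves accepted freely
for step in range(2000):
    candidate = x + random.gauss(0, 0.5)
    delta = landscape(candidate) - landscape(x)
    if delta > 0 or random.random() < math.exp(delta / temperature):
        x = candidate
    temperature = max(0.01, temperature * 0.995)   # "tipping point" into efficiency

print(round(x, 2), round(landscape(x), 2))  # typically near the global peak, x ~ 2.6
```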

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. Alison Gopnik's Edge Bio Page

Collaboration and the Evolution of Disciplines

Robert Axelrod [7.1.19]

The questions that I’ve been interested in more recently are about collaboration and what can make it succeed, and also about the evolution of disciplines themselves. The part of collaboration that is well understood is that if a team has a diversity of tools and backgrounds available to it—the members come from different cultures, they come from different knowledge sets—then that allows the team to search a space and come up with solutions more effectively. Diversity is very good for teamwork, but the problem is that there are clearly barriers to people from diverse backgrounds working together, and that part is not well understood. The way people usually talk about it is that they have to learn each other’s language and each other’s terminology. So, if you talk to somebody from a different field, they’re likely to use a different word for the same concept.
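A toy simulation in the spirit of Hong and Page’s diverse-heuristics results (my sketch, not Axelrod’s own work) shows the well-understood half of the story: agents whose move sets differ get stuck on fewer local optima when searching together.

```python
# Toy model of "diversity helps search" (illustrative, in the spirit of
# Hong & Page): team members with different step sizes hill-climb a
# rugged ring; the team stops only when no member can improve.

import random

random.seed(1)
N = 100
values = [random.random() for _ in range(N)]   # invented rugged landscape on a ring

def team_climb(start: int, heuristics: list[int]) -> float:
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristics:                # each member tries its own move
            candidate = (pos + step) % N
            if values[candidate] > values[pos]:
                pos, improved = candidate, True
    return values[pos]

starts = range(N)
homogeneous = sum(team_climb(s, [1, 1, 1]) for s in starts) / N
diverse     = sum(team_climb(s, [1, 5, 11]) for s in starts) / N
print(round(homogeneous, 3), round(diverse, 3))  # diverse team typically averages higher
```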

ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Evolution of Cooperation. Robert Axelrod's Edge Bio Page

The Geometry of Thought

Barbara Tversky [6.25.19]

Slowly, the significance of spatial thinking is being recognized: reasoning with the body acting in space, reasoning with the world as given, but even more with the things that we create in the world. Babies and other animals achieve amazing feats of thought without explicit language. So do we chatterers. Still, spatial thinking is often marginalized, treated as a special interest, like music or smell, not a central one. Yet change seems to be in the zeitgeist, not just in cognitive science but in philosophy and neuroscience and biology and computer science and mathematics and history and more, boosted by the 2014 Nobel Prize awarded to John O’Keefe and to Edvard and May-Britt Moser for the remarkable discoveries of place cells, single cells in the hippocampus that code places in the world, and grid cells next door, one synapse away in the entorhinal cortex, that map the place cells topographically on a neural grid. If it’s in the brain, it must be real. Even more remarkably, it turns out that place cells code events and ideas and that temporal and social and conceptual relations are mapped onto grid cells. Voilà: spatial thinking is the foundation of thought. Not the entire edifice, but the foundation.

The mind simplifies and abstracts. We move from place to place along paths just as our thoughts move from idea to idea along relations. We talk about actions on thoughts the way we talk about actions on objects: we place them on the table, turn them upside down, tear them apart, and pull them together. Our gestures convey those actions on thought directly. We build structures to organize ideas in our minds and things in the world, the categories and hierarchies and one-to-one correspondences and symmetries and recursions.

BARBARA TVERSKY is Professor Emerita of Psychology, Stanford University, and Professor of Psychology and Education, Columbia Teachers College. She is the author of Mind in Motion: How Action Shapes Thought. Barbara Tversky's Edge Bio Page
