A Year of Conversations


[ED NOTE: Everybody’s busy these days. That’s why we have August. So, take the time to check out the EDGE conversations you may have missed that have taken place on these pages over the past 12 months. —JB]

(Conversations): Neil Gershenfeld · Frank Wilczek · Timothy Taylor · Tom Griffiths · Alison Gopnik · Robert Axelrod · Barbara Tversky · Caroline Jones · Freeman Dyson · Andy Clark · Stephen Wolfram · Daniel Kahneman · Rodney Brooks · Alexander Rose · Ian McEwan · David Chalmers & Daniel Dennett · Michele Gelfand · Freeman Dyson · Lisa Mosconi · Susan Schneider · Jonathan Rodden · George Dyson · Elaine Pagels · Peter Galison · Paul Allen/Eddie Currie · Karl Sigmund · J. Doyne Farmer

Summer Reads

[ED NOTE: The late biologist Ernst Mayr once noted that "Edge is a conversation." And the "content" of Edge is the more than 1,000 people who have connected in this way over the last twenty-two years. There is a new set of metaphors to describe ourselves, our minds, the universe, and all of the things we know in it, and it is the intellectuals with these new ideas and images, those scientists and others in the empirical world doing things and writing their own books, who drive our times. We are pleased to present our summer reading edition, consisting of the published books by members of the Edge community in the past year or so. —JB]

Books: 

Polythetics and the Boeing 737 MAX

[7.16.19]

A 737-badged Boeing aircraft was first certified for flight by the US Federal Aviation Administration in 1967. The aircraft was 28.6 m long and carried up to 103 passengers; in 2019, the distant descendant of that aircraft model, the 737 MAX-10, was 43.8 m long and carried 230 passengers. In between, there have been all sorts of civilian and military variants, and the plane (‘the plane’) was immensely successful (so that in 2005 one quarter of all large commercial airliners worldwide carried the 737 badge). However, certain decisions, made at the very outset, constrained how aircraft of this kind could evolve. Now, I realize that by talking about descent (in a genealogical sense), and evolution (in the sense of gradual change over time), I am already potentially getting caught up in a biological metaphor—almost as if I thought 737s got together and had babies, each generation similar to but different from their parents. Manufacturing firms, which make cars, or aircraft, or computers, use the terms ‘generation’, ‘next generation’ and so on to describe salient step changes in parts of a design chain which has both continuities and discontinuities. But how do we measure these changes, and who decides (at Boeing or elsewhere) which changes are radically discontinuous? When does one artefact type become another?

TIMOTHY TAYLOR is a professor of the prehistory of humanity at the University of Vienna, and author of The Artificial Ape. Timothy Taylor's Edge Bio Page

[ED NOTE: Tim Taylor's piece is the third offering in our 2019 initiative, "The Edge Original Essay," in which we are commissioning recognized authors to write a new and original piece exclusively for publication by Edge. The first two pieces were "Childhood's End: The Digital Revolution Isn't Over But Has Turned Into Something Else" by George Dyson and "Biological and Cultural Evolution: Six Characters in Search of an Author" by Freeman Dyson. —JB]

The Geometry of Thought

[6.25.19]

Slowly, the significance of spatial thinking is being recognized, of reasoning with the body acting in space, of reasoning with the world as given, but even more with the things that we create in the world. Babies and other animals accomplish amazing feats of thought, without explicit language. So do we chatterers. Still, spatial thinking is often marginalized, a special interest, like music or smell, not a central one. Yet change seems to be in the zeitgeist, not just in cognitive science, but in philosophy and neuroscience and biology and computer science and mathematics and history and more, boosted by the 2014 Nobel prize awarded to John O’Keefe and Edvard and May-Britt Moser for the remarkable discoveries of place cells, single cells in the hippocampus that code places in the world, and grid cells next door, one synapse away in the entorhinal cortex, that map the place cells topographically on a neural grid. If it’s in the brain, it must be real. Even more remarkably, it turns out that place cells code events and ideas and that temporal and social and conceptual relations are mapped onto grid cells. Voilà: spatial thinking is the foundation of thought. Not the entire edifice, but the foundation.

The mind simplifies and abstracts. We move from place to place along paths just as our thoughts move from idea to idea along relations. We talk about actions on thoughts the way we talk about actions on objects: we place them on the table, turn them upside down, tear them apart, and pull them together. Our gestures convey those actions on thought directly. We build structures to organize ideas in our minds and things in the world, the categories and hierarchies and one-to-one correspondences and symmetries and recursions.

BARBARA TVERSKY is Professor Emerita of Psychology, Stanford University, and Professor of Psychology and Education, Teachers College, Columbia University. She is the author of Mind in Motion: How Action Shapes Thought. Barbara Tversky's Edge Bio Page

Perception As Controlled Hallucination

Predictive Processing and the Nature of Conscious Experience
[6.6.19]

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 
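Read as a computational sketch (my gloss on the predictive-processing idea, not Clark's own formalism), the claim is that a prior expectation does most of the work, while sensation enters only as feedback that corrects it. The toy loop below uses invented names and parameters throughout; it is an illustration under those assumptions, not an implementation of any published model.

```python
import numpy as np

# A toy illustration (not Clark's model): an agent tracks a hidden quantity
# by continually predicting it and letting noisy sensory samples act only as
# feedback that nudges the prediction. The "heavy lifting" is done by the
# expectation; the sensory signal merely corrects and refines it.

rng = np.random.default_rng(0)

true_value = 10.0      # the state of the world the agent never sees directly
expectation = 0.0      # the agent's prior belief (its "expectation")
sensory_noise = 4.0    # assumed noise level of the senses
learning_rate = 0.1    # how strongly prediction error revises the expectation

for step in range(50):
    sample = true_value + rng.normal(0.0, sensory_noise)  # noisy sensation
    prediction_error = sample - expectation               # mismatch acts as feedback
    expectation += learning_rate * prediction_error       # small correction

print(f"final expectation: {expectation:.2f} (true value {true_value})")
# With no sensory feedback, the loop would simply replay its prior:
# an "uncontrolled" prediction, loosely analogous to hallucination.
```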

ANDY CLARK is Professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page

On Edge

Foreword to "The Last Unknowns"
[5.22.19]

Introduction

On June 4th, HarperCollins is publishing the final book in the Edge Annual Question series, entitled The Last Unknowns: Deep, Elegant, Profound Unanswered Questions About the Universe, the Mind, the Future of Civilization, and the Meaning of Life. I am pleased to publish the foreword to the book by Nobel Laureate Daniel Kahneman, author of Thinking, Fast and Slow, and a frequent participant in Edge events (presenter of the first Edge Master Class on "Thinking About Thinking" in 2007; co-presenter, with colleagues Richard Thaler and Sendhil Mullainathan, of the second Master Class, "A Short Course in Behavioral Economics," in 2008). Below, please find Daniel Kahneman's foreword to The Last Unknowns and the table of contents listing the 282 contributors. Thanks to all for your support and attention in this interesting and continuing group endeavor.

John Brockman
Editor, Edge


ON EDGE
by Daniel Kahneman

It seems like yesterday, but Edge has been up and running for twenty-two years. Twenty-two years in which it has channeled a fast-flowing river of ideas from the academic world to the intellectually curious public. The range of topics runs from the cosmos to the mind and every piece allows the reader at least a glimpse and often a serious look at the intellectual world of a thought leader in a dynamic field of science. Presenting challenging thoughts and facts in jargon-free language has also globalized the trade of ideas across scientific disciplines. Edge is a site where anyone can learn, and no one can be bored.

The statistics are awesome: The Edge conversation is a "manuscript" of close to 10 million words, with nearly 1,000 contributors whose work and ideas are presented in more than 350 hours of video, 750 transcribed conversations, and thousands of brief essays. And these activities have resulted in the publication of 19 printed volumes of short essays and lectures in English and in foreign language editions throughout the world.

The public response has been equally impressive: Edge's influence is evident in its Google PageRank of "8", the same as The Atlantic, The Economist, The New Yorker, The Wall Street Journal, and The Washington Post; in the enthusiastic reviews in major general-interest outlets; and in the more than 700,000 books sold.

Of course, none of this would have been possible without the increasingly eager participation of scientists in the Edge enterprise. And a surprise: brilliant scientists can also write brilliantly! Answering the Edge question evidently became part of the annual schedule of many major figures in diverse fields of research, and the steadily growing number of responses is another measure of the growing influence of the Edge phenomenon. Is now the right time to stop? Many readers and writers will miss further installments of the annual Edge question—they should be on the lookout for the next form in which the Edge spirit will manifest itself.

Epistemic Virtues

[8.21.19]

I’m interested in the question of epistemic virtues, their diversity, and the epistemic fears that they’re designed to address. By epistemic I mean how we gain and secure knowledge. What I’d like to do here is talk about what we might be afraid of, where our knowledge might go astray, and what aspects of our fears about what might misfire can be addressed by particular strategies, and then to see how that’s changed quite radically over time.

~~

James Clerk Maxwell, just by way of background, had done these very mechanical representations of electromagnetism—gears and ball bearings, and strings and rubber bands. He loved doing that. He’s also the author of the most abstract treatise on electricity and magnetism, which used the least action principle and doesn’t go by the pictorial, sensorial path at all. In this very short essay, he wrote, "Some people gain their understanding of the world by symbols and mathematics. Others gain their understanding by pure geometry and space. There are some others that find an acceleration in the muscular effort that is brought to them in understanding, in feeling the force of objects moving through the world. What they want are words of power that stir their souls like the memory of childhood. For the sake of persons of these different types, whether they want the paleness and tenuity of mathematical symbolism, or they want the robust aspects of this muscular engagement, we should present all of these ways. It’s the combination of them that gives us our best access to truth."

PETER GALISON is a science historian; Joseph Pellegrino University Professor and co-founder of the Black Hole Initiative at Harvard University; and author of Einstein's Clocks and Poincaré’s Maps: Empires of Time. Peter Galison's Edge Bio Page

[ED NOTE: As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was held at Winvian Farm in Morris, CT. Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, transcripts.]

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. Project participants in absentia: George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan, Andy Clark.


EPISTEMIC VIRTUES

PETER GALISON: I’m interested in the question of epistemic virtues, their diversity, and the epistemic fears that they’re designed to address. By epistemic I mean how we gain and secure knowledge. What I’d like to do here is talk about what we might be afraid of, where our knowledge might go astray, and what aspects of our fears about what might misfire can be addressed by particular strategies, and then to see how that’s changed quite radically over time.

The place where Lorraine Daston and I focused in the study of objectivity, for example, was in these atlases, these compendia of scientific images that gave you the basic working objects of different domains—atlases of clouds, atlases of skulls, atlases of plants, atlases in the later period of elementary particles. These are volumes, literary objects, and eventually digital objects that were used to help classify and organize the ground objects of different scientific domains.

AI That Evolves in the Wild

[8.14.19]

I’m interested not in domesticated AI—the stuff that people are trying to sell. I'm interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they’re all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" It’s a higher-level symptom. It’s not how the brain works. All those guys knew fully well that the brain was not fundamentally logical.

We’re in a transition similar to the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943 at a time of transition, when the world was full of analog electronics at the end of World War II. We had built all these vacuum tubes and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history where we have all this digital equipment, all these machines. Most of the time they’re doing nothing except waiting for the next single instruction. The funny thing is, now it’s happening without anyone intending it. There we had a very deliberate group of people who said, "Let’s build digital machines." Now, I believe we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page

The Language of Mind

[8.8.19]

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.

Say you do something that negatively affects an AI, something that in an ordinary human would correspond to damage and pain. Your AI is going to say, "Please don’t do that. That’s very bad." Introspectively, it’s a model that recognizes someone has caused one of those states it calls pain. Is it going to be an inevitable consequence of introspective self-models in AI that they start to model themselves as having something like consciousness? My own suspicion is that there's something about the mechanisms of self-modeling and introspection that are going to naturally lead to these intuitions, where an AI will model itself as being conscious. The next step is whether an AI of this kind is going to naturally experience consciousness as somehow puzzling, as something that potentially is hard to square with basic underlying mechanisms and hard to explain.

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. David Chalmers's Edge Bio Page

Morphogenesis for the Design of Design

[7.31.19]

As we work on the self-reproducing assembler and on writing software that looks like hardware and respects geometry, the two meet in morphogenesis. This is the thing I’m most excited about right now: the design of design. Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers. It’s one of the oldest parts of the genome. Hox genes are an example. It’s essentially the only part of the genome where the spatial order matters. It gets read off as a program, and the program never represents the physical thing it’s constructing. The morphogenes are a program that specifies morphogens that do things like climb gradients and break symmetry; it never represents the thing it’s constructing, but the morphogens, following the morphogenes, give rise to you.

What’s going on in morphogenesis, in part, is compression. A billion bases can specify a trillion cells, but the more interesting thing that’s going on is almost anything you perturb in the genome is either inconsequential or fatal. The morphogenes are a curated search space where rearranging them is interesting—you go from gills to wings to flippers. The heart of success in machine learning, however you represent it, is function representation. The real progress in machine learning is learning representation. How you search hasn’t changed all that much, but how you represent search has. These morphogenes are a beautiful way to represent design. Technology today doesn’t do it. Technology today generally doesn’t distinguish genotype and phenotype in the sense that you explicitly represent what you’re designing. In morphogenesis, you never represent the thing you’re designing; it's done in a beautifully abstract way. For these self-reproducing assemblers, what we’re building is morphogenesis for the design of design. Rather than a combinatorial search over billions of degrees of freedom, you search over these developmental programs. This is one of the core research questions we’re looking at.
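One hypothetical way to make the genotype/phenotype point concrete is an L-system-style rewriting program: the "genome" is a few rewrite rules that never depict the structure they build, and only running the developmental program yields the grown form. The sketch below is my own illustrative assumption, not code from Gershenfeld's group, and every name and rule in it is invented.

```python
# A minimal sketch of a developmental program: a compact genotype (rewrite rules)
# is run to produce a much larger phenotype (the rewritten string), without the
# genotype ever representing the final structure explicitly.

GENOME = {                    # compact genotype: one rule, no picture of the result
    "F": "F[+F]F[-F]F",       # grow, branch left, grow, branch right, grow
}
AXIOM = "F"

def develop(axiom: str, rules: dict, generations: int) -> str:
    """Run the developmental program: repeatedly rewrite symbols by the rules."""
    state = axiom
    for _ in range(generations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

phenotype = develop(AXIOM, GENOME, generations=3)
print(f"genotype size:  {sum(len(k) + len(v) for k, v in GENOME.items())} symbols")
print(f"phenotype size: {len(phenotype)} symbols")
# Perturbing a rule regenerates a very different grown structure, so search
# happens over the small program rather than over every degree of freedom
# of the finished object.
```

The design choice the sketch is meant to highlight is the compression Gershenfeld describes: a dozen genotype symbols unfold into hundreds of phenotype symbols, and variation is explored by editing the rules, not the result.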

NEIL GERSHENFELD is the director of MIT’s Center for Bits and Atoms; founder of the global fab lab network; the author of FAB; and co-author (with Alan Gershenfeld & Joel Cutcher-Gershenfeld) of Designing Reality. Neil Gershenfeld's Edge Bio Page
