The Brain Is Full of Maps

Freeman Dyson [6.11.19]


FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page

[ED. NOTE:] As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was at Winvian Farm in Morris, CT. Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, transcripts.

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. Project participants in absentia: George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan, Andy Clark.


THE BRAIN IS FULL OF MAPS

FREEMAN DYSON: I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.

Introspection tells us our brains are spectacularly quick, performing two tasks essential to our survival: recognition of images in space, and recognition of patterns of sound in time. We recognize a human face or a snake in the grass in a fraction of a second. We recognize the sound of a voice or of a footstep equally fast. The process of recognition requires the comparison of a perceived image with an enormous database of remembered images. How this is done, in a quarter of a second without any conscious effort, we have no idea. It seems likely that scanning of images in associative memory is done by direct comparison of analog data rather than by digitization.
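Dyson's notion of recognition "by direct comparison of analog data" can be given a toy illustration. The sketch below is mine, not anything from neuroscience or from Dyson's text: a percept is matched against a small memory of stored patterns by plain normalized correlation, comparing the pictures value by value with no intermediate symbolic encoding. The function name and the 3x3 patterns are invented for illustration.

```python
import numpy as np

def best_match(percept, templates):
    """Return the index of the stored template most similar to the percept.

    Similarity is plain normalized correlation: the images are compared
    directly, value by value, with no symbolic encoding step.
    """
    def normalize(img):
        flat = img.astype(float).ravel()
        flat -= flat.mean()
        norm = np.linalg.norm(flat)
        return flat / norm if norm else flat

    p = normalize(percept)
    scores = [float(p @ normalize(t)) for t in templates]
    return int(np.argmax(scores))

# A tiny "memory" of three 3x3 patterns: vertical bar, horizontal bar, diagonal.
memory = [
    np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]]),
    np.eye(3, dtype=int),
]

# A noisy vertical bar still correlates most strongly with template 0.
noisy = np.array([[0, 1, 0], [0, 1, 0], [1, 1, 0]])
print(best_match(noisy, memory))  # → 0 (the vertical bar)
```

The point of the sketch is only that matching can be done on the raw picture itself; whether the brain does anything like this is exactly the open question Dyson raises.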

Perception As Controlled Hallucination

Predictive Processing and the Nature of Conscious Experience

Andy Clark [6.6.19]

Perception itself is a kind of controlled hallucination. . . . [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception. 

ANDY CLARK is professor of Cognitive Philosophy at the University of Sussex and author of Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page


PERCEPTION AS CONTROLLED HALLUCINATION: PREDICTIVE PROCESSING AND THE NATURE OF CONSCIOUS EXPERIENCE

The big question that I keep asking myself at the moment is whether it's possible that predictive processing, the vision of the predictive mind I've been working on lately, is as good as it seems to be. It keeps me awake a little bit at night wondering whether anything could touch so many bases as this story seems to. It looks to me as if it provides a way of moving towards a third generation of artificial intelligence. I'll come back to that in a minute. It also looks to me as if it shows how the stuff that I've been interested in for so long, in terms of the extended mind and embodied cognition, can be both true and scientifically tractable, and how we can get something like a quantifiable grip on how neural processing weaves together with bodily processing and with actions out there in the world. It also looks as if this might give us a grip on the nature of conscious experience. And if any theory were able to do all of those things, it would certainly be worth taking seriously. I lie awake wondering whether any theory could be so good as to be doing all these things at once, but that's what we'll be talking about.
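The core idea Clark describes — expectations do the heavy lifting, with sensory information acting only as corrective feedback — is often formalized as prediction-error minimization. Here is a deliberately minimal sketch; the function name, learning rate, and temperature scenario are my illustrative choices, not anything from Clark's account:

```python
def predictive_update(estimate, sensory_input, learning_rate=0.2):
    """One predictive-coding step: the internal expectation is nudged by
    the prediction error, the gap between what was predicted and what
    the senses report."""
    prediction_error = sensory_input - estimate
    return estimate + learning_rate * prediction_error

# An agent expecting a room temperature of 20 gradually revises its
# estimate toward a persistent sensory reading of 24.
estimate = 20.0
for _ in range(10):
    estimate = predictive_update(estimate, 24.0)
print(round(estimate, 2))  # → 23.57
```

Note the asymmetry built into the loop: the estimate is never replaced wholesale by the input; the input only refines it, which is the sense in which perception here is "controlled" by expectation.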

A place to start that was fun to read and watch was the debate between Dan Dennett and Dave Chalmers about "Possible Minds" ("Is Superintelligence Impossible?" Edge, 4.10.19). That debate was structured around questions about superintelligence, the future of artificial intelligence, whether or not some of our devices or machines are going to outrun human intelligence and perhaps in either good or bad ways become alien intelligences that cohabit the earth with us. That debate hit on all kinds of important aspects of that space, but it seemed to leave out what looks to be the thing that predictive processing is most able to shed light on, which is the role of action in all of these unfoldings.

Mining the Computational Universe

Stephen Wolfram [5.30.19]

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page


MINING THE COMPUTATIONAL UNIVERSE

STEPHEN WOLFRAM: I thought I would talk about my current thinking about computation and our interaction with it. The first question is, how common is computation? People have the general view that to make something do computation requires a lot of effort, and you have to build microprocessors and things like this. One of the things that I discovered a long time ago is that it’s very easy to get sophisticated computation.

I’ve studied cellular automata, studied Turing machines and other kinds of things—as soon as you have a system whose behavior is not obviously simple, you end up getting something that is as sophisticated computationally as it can be. This is something that is not an obvious fact. I call it the principle of computational equivalence. At some level, it’s a thing for which one can get progressive evidence. You just start looking at very simple systems, whether they’re cellular automata or Turing machines, and you say, "Does the system do sophisticated computation or not?" The surprising discovery is that as soon as what it’s doing is not something that you can obviously decode, then one can see, in particular cases at least, that it is capable of doing as sophisticated computation as anything. For example, it means it’s a universal computer.
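Wolfram's claim is easy to probe concretely: an elementary cellular automaton takes only a few lines of code, yet rules such as Rule 30 produce behavior with no obvious pattern, and Rule 110 has been proved to be a universal computer. A minimal sketch follows; the wrap-around boundary and the display details are my own choices:

```python
def step(cells, rule=30):
    """Advance a 1-D elementary cellular automaton by one generation.

    Each new cell is determined by the 3-cell neighborhood above it; the
    rule number's binary digits encode the outcome for each of the 8
    possible neighborhoods (Wolfram's rule-numbering scheme).
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        out.append((rule >> neighborhood) & 1)
    return out

# Start from a single live cell and print a few generations of Rule 30.
cells = [0] * 15
cells[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Running it shows the characteristic irregular triangle growing from one cell: the rule table fits in a single byte, but the column of cells it generates passes every simple test of randomness, which is the kind of evidence Wolfram cites for the principle of computational equivalence.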

 

REMEMBERING MURRAY

Murray Gell-Mann [5.28.19]
Introduction by Geoffrey West

MURRAY GELL-MANN
September 15, 1929 – May 24, 2019
  

[ED. NOTE: Upon learning of the death of long-time friend and colleague Murray Gell-Mann, I posed the question below to the Edgies who knew and/or worked with him. —JB]

Can you tell us a personal story about Murray and yourself (about physics, or not)?  


THE REALITY CLUB
Leonard Susskind, George Dyson, Stuart Kauffman, John Brockman, Julian Barbour, Freeman Dyson, Neil Gershenfeld, Paul Davies, Virginia Louise Trimble, Alan Guth, Gino Segre, Sara Lippincott, Emanuel Derman, Jeremy Bernstein, George Johnson, Seth Lloyd, W. Brian Arthur, W. Daniel Hillis, Frank Tipler, Karl Sabbagh, Daniel C. Dennett


[ED. NOTE: For starters, here's a story Murray told about himself when I spent time with him in Santa Fe over Christmas vacation in 2003, excerpted from "The Making of a Physicist," Edge, June 3, 2003. —JB]

Uncharacteristically, I discussed my application to Yale with my father, who asked, "What were you thinking of putting down?" I said, "Whatever would be appropriate for archaeology or linguistics, or both, because those are the things I'm most enthusiastic about. I'm also interested in natural history and exploration."

He said, "You'll starve!"

After all, this was 1944 and his experiences with the Depression were still quite fresh in his mind; we were still living in genteel poverty. He could have quit his job as the vault custodian in a bank and taken a position during the war that would have utilized his talents — his skill in mathematics, for example — but he didn't want to take the risk of changing jobs. He felt that after the war he would regret it, so he stayed where he was. This meant that we really didn't have any spare money at all.

I asked him, "What would you suggest?"

He mentioned engineering, to which I replied, "I'd rather starve. If I designed anything it would fall apart." And sure enough, when I took an aptitude test a year later, I was advised to take up nearly anything but engineering.

Then my father suggested, "Why don't we compromise — on physics?"


Introduction
By Geoffrey West

Murray Gell-Mann was one of the great scientists of the 20th century, one of its few renaissance people and a true polymath. He is best known for his seminal contributions to fundamental physics, for helping to bring order and symmetry to the apparently chaotic world of the elementary particles and the fundamental forces of nature. He dominated the field from the early ‘50s, when he was still in his twenties, up through the late ‘70s. Basically, he ran the show. By modern standards he didn’t publish a lot, but when he did we all hung on every word. It is an amazing litany of accomplishments: strangeness, the renormalization group, color and quantum chromodynamics, and of course, quarks and SU(3), for which he won the Nobel prize in 1969.

He was the Robert Andrews Millikan Professor Emeritus of Theoretical Physics at the California Institute of Technology, a cofounder of the Santa Fe Institute, where he was a Distinguished Fellow; a former director of the J.D. and C.T. MacArthur Foundation; one of the Global Five Hundred honored by the U.N. Environment Program; a former Citizen Regent of the Smithsonian Institution; a former member of the President's Committee of Advisors on Science and Technology; and the author of The Quark and the Jaguar: Adventures in the Simple and the Complex.

Despite his extraordinary contributions to high-energy physics, Murray maintained throughout his life an enduring passion for understanding how the messy world of culture, economies, ecologies and human interaction, and especially language, evolved from the beautifully ordered world of the fundamental laws of nature. How did complexity evolve from simplicity? Can we develop a generic science of complex adaptive systems? In the ‘80s he helped found the Santa Fe Institute as a hub on the academic landscape for addressing such vexing questions in a radically transdisciplinary environment.

Murray Gell-Mann knew, understood and was interested in everything, spoke every language on the planet, and probably those on other planets too, and was not shy in letting you know that he did. He was infamous not just for correcting your facts or your logic, but most annoyingly to some, for correcting how you should pronounce your name, your place of birth, or whatever. Luckily my name is West but that never stopped him from lecturing me many times on the Somerset dialect that I spoke as a young child.

Although he decidedly did not suffer fools and would harshly, sometimes almost cruelly, criticize sloppy thinking or incorrect factual statements, he would intensely engage with anyone regardless of their status or standing if he felt they had something to contribute. I rarely felt comfortable when discussing anything with him, whether a question of physics or lending him money, expecting to be clobbered at any moment because I had made some stupid comment or pronounced something wrong.

Murray could be a very difficult man…but what a mind! However, he loved to collaborate, to discuss ideas, and was amazingly open and inclusive even if he did dominate the proceedings. By the time we had become colleagues at SFI, I had become less and less sensitive to the master’s anticipated criticism or even to his occasional praise; the potential trepidation had pretty much disappeared and our relationship had evolved into friendship and collegiality, just in time for me to become his boss. Negotiating with Murray over a perplexing physics question is one thing, but try negotiating with him over salary and secretarial support, then you’ll really see him in action. To quote Hamlet: "He was a man. Take him for all in all. I shall not look upon his like again."

GEOFFREY WEST is a theoretical physicist; Shannan Distinguished Professor and Past President, Santa Fe Institute; and author of Scale. Geoffrey West's Edge Bio Page

On Edge

Foreword to "The Last Unknowns"

Daniel Kahneman [5.22.19]

Introduction

On June 4th, HarperCollins is publishing the final book in the Edge Annual Question series, entitled The Last Unknowns: Deep, Elegant, Profound Unanswered Questions About the Universe, the Mind, the Future of Civilization, and the Meaning of Life. I am pleased to publish the foreword to the book by Nobel Laureate Daniel Kahneman, author of Thinking, Fast and Slow, and a frequent participant in Edge events (presenter of the first Edge Master Class, "Thinking About Thinking," in 2007; co-presenter, with colleagues Richard Thaler and Sendhil Mullainathan, of the second Master Class, "A Short Course in Behavioral Economics," in 2008). Below, please find Daniel Kahneman's foreword to The Last Unknowns and the table of contents listing the 282 contributors. Thanks to all for your support and attention in this interesting and continuing group endeavor.

John Brockman
Editor, Edge


ON EDGE
by Daniel Kahneman

It seems like yesterday, but Edge has been up and running for twenty-two years. Twenty-two years in which it has channeled a fast-flowing river of ideas from the academic world to the intellectually curious public. The range of topics runs from the cosmos to the mind and every piece allows the reader at least a glimpse and often a serious look at the intellectual world of a thought leader in a dynamic field of science. Presenting challenging thoughts and facts in jargon-free language has also globalized the trade of ideas across scientific disciplines. Edge is a site where anyone can learn, and no one can be bored.

The statistics are awesome: The Edge conversation is a "manuscript" of close to 10 million words, with nearly 1,000 contributors whose work and ideas are presented in more than 350 hours of video, 750 transcribed conversations, and thousands of brief essays. And these activities have resulted in the publication of 19 printed volumes of short essays and lectures in English and in foreign language editions throughout the world.

The public response has been equally impressive: Edge's influence is evident in its Google Page Rank of  "8", the same as The Atlantic, The Economist, The New Yorker, The Wall Street Journal, and the Washington Post, in the enthusiastic reviews in major general-interest outlets, and in the more than 700,000 books sold. 

Of course, none of this would have been possible without the increasingly eager participation of scientists in the Edge enterprise. And a surprise: brilliant scientists can also write brilliantly! Answering the Edge question evidently became part of the annual schedule of many major figures in diverse fields of research, and the steadily growing number of responses is another measure of the growing influence of the Edge phenomenon. Is now the right time to stop? Many readers and writers will miss further installments of the annual Edge question—they should be on the lookout for the next form in which the Edge spirit will manifest itself.

The Cul-de-Sac of the Computational Metaphor

Rodney A. Brooks [5.13.19]

Have we gotten into a cul-de-sac in trying to understand animals as machines from the combination of digital thinking and the crack cocaine of computation uber alles that Moore's law has provided us? What revised models of brains might we be looking at to provide new ways of thinking and studying the brain and human behavior? Did the Macy Conferences get it right? Is it time for a reboot?

RODNEY BROOKS is Panasonic Professor of Robotics, emeritus, MIT; former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); founder, chairman, and CTO of Rethink Robotics; and author of Flesh and Machines. Rodney Brooks's Edge Bio Page


THE CUL-DE-SAC OF THE COMPUTATIONAL METAPHOR

RODNEY BROOKS: I’m going to go over a wide range of things that everyone will likely find something to disagree with. I want to start out by saying that I’m a materialist reductionist. As I talk, some people might get a little worried that I’m going off like Chalmers or something, but I’m not. I’m a materialist reductionist.

I’m worried that the crack cocaine of Moore’s law, which has given us more and more computation, has lulled us into thinking that that’s all there is. When you look at Claus Pias’s introduction to the Macy Conferences book, he writes, "The common precondition of the three foundational concepts of cybernetics—switching (Boolean) algebra, information theory and feedback—is digitality." They go straight into digitality in this conference. He says, "We considered Turing’s universal machine as a 'model' for brains, employing Pitts' and McCulloch’s calculus for activity in neural nets." Anyone who has looked at the Pitts and McCulloch papers knows it's a very primitive view of what is happening in neurons. But they adopted Turing’s universal machine.

Machines Like Me

Ian McEwan [4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge.

A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics' Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page


MACHINES LIKE ME

IAN MCEWAN: I feel something like an imposter here amongst so much technical expertise. I’m the breakfast equivalent of an after-dinner mint.

What’s been preoccupying me the last two or three years is what it would be like to live with a fully embodied artificial consciousness, which means leaping over every difficulty that we’ve heard described this morning by Rod Brooks. The building of such a thing is probably scientifically useless, much like putting a man on the moon when you could put a machine there, but it has an ancient history.

Is Superintelligence Impossible?

On Possible Minds: Philosophy and AI

David Chalmers, Daniel C. Dennett [4.10.19]

[ED. NOTE: On Saturday, March 9th, more than 1200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett: "Is Superintelligence Impossible?", the next event in Edge's ongoing "Possible Minds Project." Watch the video, listen to the EdgeCast, read the transcript. Thanks to physicist, artist, author, and Edgie Janna Levin, Director of Sciences at Pioneer Works, who presented the event with the support of Science Sandbox, a Simons Foundation initiative. —JB]


Reality Club Discussion: Andy Clark


Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett

One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be. Starting with the actual minds, I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been in there. Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained, and, most recently, From Bacteria to Bach and Back: The Evolution of Minds. John Brockman, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture, and editor of the Edge Annual Question book series and Possible Minds: 25 Ways of Looking at AI.

 

Cultural Intelligence

Michele Gelfand [3.12.19]

Getting back to culture being invisible and omnipresent, we think about intelligence or emotional intelligence, but we rarely think about cultivating cultural intelligence. In this ever-increasing global world, we need to understand culture. All of this research has been trying to elucidate not just how we understand other people who are different from us, but how we understand ourselves.

MICHELE GELFAND is a Distinguished University Professor at the University of Maryland, College Park. She is the author of Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World. Michele Gelfand's Edge Bio Page

Possible Minds

25 Ways of Looking at AI

John Brockman [3.4.19]
