On Chalmers and Dennett's "Is Superintelligence Impossible?"

Andy Clark [4.18.19]

Andy Clark responds to "Is Superintelligence Impossible?":

I think we can divide the space of possible AI minds into two reasonably distinct categories. One category comprises the “passive AI minds” that seemed to be the main focus of the Chalmers-Dennett exchange. These are driven by large data sets and optimize their performance relative to some externally imposed choice of “objective function” that specifies what we want them to do—win at Go, or improve paperclip manufacture. And Dennett and Chalmers are right—we do indeed need to be very careful about what we ask them to do, and about how much power they have to implement their own solutions to these pre-set puzzles.
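
As a rough illustration of this first category (a sketch of my own, not anything in Clark's text or the Chalmers-Dennett exchange): a generic optimizer climbs whatever externally imposed objective function it is handed, with no view about whether that objective is worth pursuing. The names hill_climb, paperclip_output, and tweak below are hypothetical placeholders.

```python
import random

def hill_climb(objective, start, neighbors, steps=1000):
    """Generic optimizer: improves whatever `objective` it is handed."""
    best, best_score = start, objective(start)
    for _ in range(steps):
        candidate = random.choice(neighbors(best))
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Hypothetical, externally imposed objective: "maximize paperclip output."
# The optimizer pursues it exactly as readily as it would "win at Go";
# it has no stake in whether the objective is a good one.
def paperclip_output(settings):
    speed, quality = settings
    return speed * quality  # toy stand-in for a real production metric

def tweak(settings):
    speed, quality = settings
    return [(speed + random.uniform(-1, 1), quality + random.uniform(-1, 1))]

best = hill_climb(paperclip_output, (1.0, 1.0), tweak)
print("best settings:", best, "output:", paperclip_output(best))
```

The point of the sketch is that all of the "caring" lives in the choice of objective and in how much power the system has to act on it; the optimizer itself is indifferent.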

The other category comprises active AIs with broad brush-stroke imperatives. These include Karl Friston’s Active Inference machines. AIs like these spawn their own goals and sub-goals through environmental immersion and selective action. Such artificial agents will pursue epistemic agendas and have an Umwelt of their own. These are the only kind of AIs that may, I believe, end up being conscious of themselves and their worlds—at least in any way remotely recognizable as such to us humans. They are the AIs who could be our friends, or who could (if that blunt general imperative were played out within certain kinds of environment) become genuine enemies. It is these radically embodied AIs I would worry about most. At the same time (and for the same reasons) I’d greatly like to see powerful AIs from that second category emerge. For they would be real explorations within the vast space of possible minds.
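
For the second category, the closest standard formalism is the expected free energy that active-inference agents minimize. The schematic decomposition below is from the active-inference literature, not from Clark's comment, and the notation is illustrative; its first term is what gives such agents an "epistemic agenda" of their own.

```latex
% Requires \usepackage{amsmath} and \usepackage{amssymb}.
% Schematic expected free energy of a policy \pi (illustrative notation):
\[
G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
  \big[ \ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau) \big]
\approx
  -\underbrace{\mathbb{E}_{q}\big[ D_{\mathrm{KL}}\big[ q(s_\tau \mid o_\tau, \pi) \,\|\, q(s_\tau \mid \pi) \big] \big]}_{\text{epistemic value (information gain)}}
  \; - \;
  \underbrace{\mathbb{E}_{q}\big[ \ln p(o_\tau) \big]}_{\text{pragmatic value (preferred outcomes)}}
\]
```

On this reading, minimizing G selects actions that both resolve uncertainty (the epistemic term) and realize preferred outcomes (the pragmatic term); goals and sub-goals emerge from the agent's own generative model and preferences rather than from a single hand-coded objective.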

ANDY CLARK is professor of philosophy and informatics at the University of Sussex; author, Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Andy Clark's Edge Bio Page

Machines Like Me

Ian McEwan [4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge.

A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page

[ED. NOTE: As a follow-up to the completion of the book Possible Minds: 25 Ways of Looking at AI, we are continuing the conversation as the “Possible Minds Project.” The first meeting was at Winvian Farm in Morris, CT (more on this later). Over the next few months we are rolling out the fifteen talks—videos, EdgeCasts, and transcripts—beginning with Ian McEwan, whose novel, Machines Like Me, has just been published.]

From left: W. Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, John Brockman, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan. Project participants in absentia: George M. Church, Daniel Kahneman, Alex "Sandy" Pentland, Venki Ramakrishnan.


MACHINES LIKE ME

IAN MCEWAN: I feel something like an imposter here amongst so much technical expertise. I’m the breakfast equivalent of an after-dinner mint.

What’s been preoccupying me the last two or three years is what it would be like to live with a fully embodied artificial consciousness, which means leaping over every difficulty that we’ve heard described this morning by Rod Brooks. The building of such a thing is probably scientifically useless, much like putting a man on the moon when you could put a machine there, but it has an ancient history.

Is Superintelligence Impossible?

On Possible Minds: Philosophy and AI David Chalmers, Daniel C. Dennett [4.10.19]

[ED. NOTE: On Saturday, March 9th, more than 1200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett: "Is Superintelligence Impossible?", the next event in Edge's ongoing "Possible Minds Project." Watch the video, listen to the EdgeCast, read the transcript. Thanks to physicist, artist, author, and Edgie Janna Levin, Director of Sciences at Pioneer Works, who presented the event with the support of Science Sandbox, a Simons Foundation initiative. —JB]

Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett

One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be. Starting with the actual minds, I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been in there. Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers


David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the “hard problem” of consciousness. Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds. John Brockman, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture, and the editor of the Edge Annual Question book series and Possible Minds: 25 Ways of Looking at AI.

The Overdue Debate


Front Page

DEFGH No. 63, Friday, March 15, 2019.

Collage: Stefan Dimitrov

The Ghost in the Machine
Artificial intelligence inspires wild fantasies, but remains hard to imagine. An SZ series brings clarity.

Artificial intelligence:
A new series brings science and culture together to fathom the inexplicable

Cultural Intelligence

Michele Gelfand [3.12.19]

Getting back to culture being invisible and omnipresent, we think about intelligence or emotional intelligence, but we rarely think about cultivating cultural intelligence. In this increasingly global world, we need to understand culture. All of this research has been trying to elucidate not just how we understand other people who are different from us, but how we understand ourselves.

MICHELE GELFAND is a Distinguished University Professor at the University of Maryland, College Park. She is the author of Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World. Michele Gelfand's Edge Bio Page


Biological and Cultural Evolution

Six Characters in Search of an Author Freeman Dyson [2.19.19]

 

[ED. NOTE: With the following essay by Freeman Dyson, we're kicking off a regular subscription-based audio feature, EdgeCast. Listen & Subscribe. —JB]

In the near future, we will be in possession of genetic engineering technology which allows us to move genes precisely and massively from one species to another. Careless or commercially driven use of this technology could make the concept of species meaningless, mixing up populations and mating systems so that much of the individuality of species would be lost. Cultural evolution gave us the power to do this. To preserve our wildlife as nature evolved it, the machinery of biological evolution must be protected from the homogenizing effects of cultural evolution.

Unfortunately, the first of our two tasks, the nurture of a brotherhood of man, has been made possible only by the dominant role of cultural evolution in recent centuries. The cultural evolution that damages and endangers natural diversity is the same force that drives human brotherhood through the mutual understanding of diverse societies. Wells's vision of human history as an accumulation of cultures, Dawkins's vision of memes bringing us together by sharing our arts and sciences, Pääbo's vision of our cousins in the cave sharing our language and our genes, show us how cultural evolution has made us what we are. Cultural evolution will be the main force driving our future.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, in addition to fundamental contributions ranging from number theory to quantum electrodynamics, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, Maker of Patterns, and Origins of Life. Freeman Dyson's Edge Bio Page


BIOLOGICAL AND CULTURAL EVOLUTION: SIX CHARACTERS IN SEARCH OF AN AUTHOR

In the Pirandello play, "Six Characters in Search of an Author", the six characters come on stage, one after another, each of them pushing the story in a different unexpected direction. I use Pirandello's title as a metaphor for the pioneers in our understanding of the concept of evolution over the last two centuries. Here are my six characters with their six themes.

1. Charles Darwin (1809-1882): The Diversity Paradox.
2. Motoo Kimura (1924-1994): Smaller Populations Evolve Faster.
3. Ursula Goodenough (1943- ): Nature Plays a High-Risk Game.
4. Herbert Wells (1866-1946): Varieties of Human Experience.
5. Richard Dawkins (1941- ): Genes and Memes.
6. Svante Pääbo (1955- ): Cousins in the Cave.

The story that they are telling is of a grand transition that occurred about fifty thousand years ago, when the driving force of evolution changed from biology to culture, and the direction changed from diversification to unification of species. The understanding of this story can perhaps help us to deal more wisely with our responsibilities as stewards of our planet.

Alzheimer's Prevention

Lisa Mosconi [2.11.19]

Right now, we don’t have therapies that regrow neurons. Alzheimer’s is a disease that kills your neurons over time, so once they’re gone they’re pretty much gone. There are things that one can do pharmaceutically to ameliorate the symptoms. For example, there are FDA-approved drugs such as acetylcholinesterase inhibitors or memantine, which do lessen or stabilize symptoms for a few years, but they can’t stop disease progression. What we’re interested in is disease modification, stopping it before it’s too severe or too advanced.

At the Alzheimer’s Prevention Clinic, we try to tell people what to do in a preventative way. There are a lot of other people and clinicians that are actively engaging in prevention as well. It’s new in my field, especially in the field of neurology. Until four years ago nobody would dare use the word “prevention” out loud because so many doctors and clinicians would just label you as a quack right away and you would lose credibility overnight. I find scientists are much more open to this now.

LISA MOSCONI is the director of the Women's Brain Initiative and the associate director of the Alzheimer's Prevention Clinic at Weill Cornell Medical College. She is the author of Brain Food: The Surprising Science of Eating for Cognitive Power. Lisa Mosconi's Edge Bio Page

The Future of the Mind

How AI Technology Could Reshape the Human Mind and Create Alternate Synthetic Minds Susan Schneider [1.28.19]

I see many misunderstandings in current discussions about the nature of the mind, such as the assumption that if we create sophisticated AI, it will inevitably be conscious. There is also this idea that we should “merge with AI”—that in order for humans to keep up with developments in AI and not succumb to hostile superintelligent AIs or AI-based technological unemployment, we need to enhance our own brains with AI technology.

One thing that worries me about all this is that I don't think AI companies should be settling issues involving the shape of the mind. The future of the mind should be a cultural decision and an individual decision. Many of the issues at stake here involve classic philosophical problems that have no easy solutions. I’m thinking, for example, of theories of the nature of the person in the field of metaphysics. Suppose that you add a microchip to enhance your working memory, and then years later you add another microchip to integrate yourself with the Internet, and you just keep adding enhancement after enhancement. At what point will you even be you? When you think about enhancing the brain, the idea is to improve your life—to make you smarter, or happier, maybe even to live longer, or have a sharper brain as you grow older—but what if those enhancements change us in such drastic ways that we’re no longer the same person?

SUSAN SCHNEIDER holds the Distinguished Scholar chair at the Library of Congress and is the director of the AI, Mind and Society (“AIMS”) Group at the University of Connecticut. Susan Schneider's Edge Bio Page

The Urban-Rural Divide

Why Geography Matters Jonathan Rodden [1.16.19]

In the past, it was dispersed rural interest groups who favored free trade, and concentrated urban producers who wanted protection for their new industries. Now, in the age of the knowledge economy, the relationship has reversed. Much of manufacturing now takes place outside of city centers. Ever since the New Deal and the rise of labor unions, manufacturing has been moving away from city centers and spreading out to exurban and rural areas along interstates, especially in the South. In an era of intense global competition, these have now become the places where voters can be most easily mobilized in favor of trade protection.

Moreover, much like manufacturing in an earlier era, the knowledge economy has grown up in a very geographically concentrated way in certain city centers. These are the places that now benefit most from globalization and free trade. We’re back to debates about trade and protection that occupied Alexander Hamilton and Thomas Jefferson, although the geographic location of the interests has changed over time. Changing economic geography has shaped our political geography in important ways, and contributed to an increase in urban-rural polarization.

JONATHAN RODDEN is a professor in the Political Science Department at Stanford and a Senior Fellow at the Hoover Institution. Jonathan Rodden's Edge Bio Page

Childhood's End

The digital revolution isn’t over but has turned into something else George Dyson [1.1.19]

Nations, alliances of nations, and national institutions are in decline, while a state perhaps best described as Oligarchia is on the ascent. George Dyson explains in this, the first Edge New Year's Essay.

GEORGE DYSON is the author of Turing’s Cathedral and Darwin Among the Machines. George Dyson's Edge Bio Page

"To ring in the New Year in the most depressing and hope-crushing way possible, Dyson sat down with Edge.org” — Brett Tingley, Mysterious Universe



Childhood's End

 
All revolutions come to an end, whether they succeed or fail.

The digital revolution began when stored-program computers broke the distinction between numbers that mean things and numbers that do things. Numbers that do things now rule the world. But who rules over the machines?

Once it was simple: programmers wrote the instructions that were supplied to the machines. Since the machines were controlled by these instructions, those who wrote the instructions controlled the machines.

Two things then happened. As computers proliferated, the humans providing instructions could no longer keep up with the insatiable appetite of the machines. Codes became self-replicating, and machines began supplying instructions to other machines. Vast fortunes were made by those who had a hand in this: a small number of people and companies who helped spawn self-replicating codes became some of the richest and most powerful individuals and organizations in the world.

Then something changed. There is now more code than ever, but it is increasingly difficult to find anyone who has their hands on the wheel. Individual agency is on the wane. Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun. None dare speak its name.

Childhood’s End was Arthur C. Clarke’s masterpiece, published in 1953, chronicling the arrival of benevolent Overlords who bring to Earth many of the same conveniences now delivered by the Keepers of the Internet. It does not end well.
