Edge Video Library

How to Create an Institution That Lasts 10,000 Years

[4.24.19]

We’re also looking at the oldest living companies in the world, most of which are service-based. There are some family-run hotels and things like that, but also a huge number in the food and beverage industry. Probably a third of the companies over 500 or 1,000 years old are in some way involved in wine, beer, or sake production. I was intrigued by that crossover.

What’s interesting is that humanity figured out how to ferment things about 10,000 years ago, which is exactly the time frame when people started creating cities and agriculture. It’s unclear whether civilization started because we could ferment things, or we started fermenting things and therefore civilization started, but there’s clearly an intertwined link between fermenting beer, wine, and then much later spirits, and how that fits in with hospitality and places where people gather.

All of these things are right now just nascent bits and pieces of trying to figure out some of the ways in which organizations live for a very long time. While some of them, like being a family-run hotel, may not be very portable as ideas, others, like some of the natural strategies, are ones we’re just starting to understand and see how they can be of service to humanity. If we broaden the idea of the service industry so that the customer is civilization, how can you make an institution whose customer is civilization and that can last for a very long time?

ALEXANDER ROSE is the executive director of The Long Now Foundation, manager of the 10,000 Year Clock Project, and curator of the speaking series at The Interval and The Battery SF. Alexander Rose's Edge Bio Page


 

Machines Like Me

[4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge. A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page


 

Is Superintelligence Impossible?

On Possible Minds: Philosophy and AI with Daniel C. Dennett and David Chalmers
[4.10.19]

[ED. NOTE: On Saturday, March 9th, more than 1,200 people jammed into Pioneer Works in Red Hook, Brooklyn, for a conversation between two of our greatest philosophers, David Chalmers and Daniel C. Dennett, who ask each other, "Is Superintelligence Impossible?" As part of the ongoing Edge "Possible Minds Project," we are pleased to present the video, audio, and transcript of the event, which was orchestrated by the noted physicist, artist, author (and fellow Edgie) Janna Levin, Director of Sciences at Pioneer Works, with the support of Science Sandbox, a Simons Foundation initiative dedicated to engaging everyone with the process of science. —JB]

Somebody said that the philosopher is the one who says, "We know it’s possible in practice, we’re trying to figure out if it’s possible in principle." Unfortunately, philosophers sometimes spend too much time worrying about logical possibilities that are importantly negligible in every other regard. So, let me go on the record as saying, yes, I think that conscious AI is possible because, after all, what are we? We’re conscious. We’re robots made of robots made of robots. We’re actual. In principle, you could make us out of other materials. Some of your best friends in the future could be robots. Possible in principle, absolutely no secret ingredients, but we’re not going to see it. We’re not going to see it for various reasons. One is, if you want a conscious agent, we’ve got plenty of them around and they’re quite wonderful, whereas the ones that we would make would be not so wonderful. —Daniel C. Dennett

One of our questions here is, is superintelligence possible or impossible? I’m on the side of possible. I like the possible, which is one reason I like John’s theme, "Possible Minds." That’s a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. … The space of possible minds is absolutely vast—all the minds there ever have been, will be, or could be, starting with the actual minds. There are a lot of actual minds. I guess there have been a hundred billion or so humans with minds of their own. Some pretty amazing minds have been Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King, on it goes, a lot of amazing minds. But still, those hundred billion minds put together are just the tiniest corner of this space of possible minds. —David Chalmers

__

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the “hard problem” of consciousness. DANIEL C. DENNETT is University Professor and Austin B. Fletcher Professor of Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the author of a dozen books, including Consciousness Explained and, most recently, From Bacteria to Bach and Back: The Evolution of Minds. JOHN BROCKMAN, moderator, is a cultural impresario whose career has encompassed the avant-garde art world, science, books, software, and the Internet. He is the author of By The Late John Brockman and The Third Culture, editor of the Edge Annual Question book series, and editor of Possible Minds: 25 Ways of Looking at AI.


 

Cultural Intelligence

[3.12.19]

Getting back to culture being invisible and omnipresent, we think about intelligence or emotional intelligence, but we rarely think about cultivating cultural intelligence. In this increasingly global world, we need to understand culture. All of this research has been trying to elucidate not just how we understand other people who are different from us, but how we understand ourselves.

MICHELE GELFAND is a Distinguished University Professor at the University of Maryland, College Park. She is the author of Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World. Michele Gelfand's Edge Bio Page

 


 

Alzheimer's Prevention

[2.11.19]

Right now, we don’t have therapies that regrow neurons. Alzheimer’s is a disease that kills your neurons over time, so once they’re gone they’re pretty much gone. There are things that one can do pharmaceutically to ameliorate the symptoms. For example, there are FDA-approved drugs such as acetylcholinesterase inhibitors or memantine, which do lessen or stabilize symptoms for a few years, but they can’t stop disease progression. What we’re interested in is disease modification, stopping it before it’s too severe or too advanced.

At the Alzheimer’s Prevention Clinic, we try to tell people what to do in a preventative way. There are a lot of other people and clinicians who are actively engaging in prevention as well. It’s new in my field, especially in neurology. Until four years ago nobody would dare use the word “prevention” out loud, because so many doctors and clinicians would label you as a quack right away and you would lose credibility overnight. I find scientists are much more open to this now.

LISA MOSCONI is the director of the Women's Brain Initiative and the associate director of the Alzheimer's Prevention Clinic at Weill Cornell Medical College. She is the author of Brain Food: The Surprising Science of Eating for Cognitive Power. Lisa Mosconi's Edge Bio Page

 


 

The Future of the Mind

How AI Technology Could Reshape the Human Mind and Create Alternate Synthetic Minds
[1.28.19]

I see many misunderstandings in current discussions about the nature of mind, such as the assumption that if we create sophisticated AI, it will inevitably be conscious. There is also this idea that we should “merge with AI”—that in order for humans to keep up with developments in AI and not succumb to hostile superintelligent AIs or AI-based technological unemployment, we need to enhance our own brains with AI technology.

One thing that worries me about all this is that I don't think AI companies should be settling issues involving the shape of the mind. The future of the mind should be a cultural decision and an individual decision. Many of the issues at stake here involve classic philosophical problems that have no easy solutions. I’m thinking, for example, of theories of the nature of the person in the field of metaphysics. Suppose that you add a microchip to enhance your working memory, and then years later you add another microchip to integrate yourself with the Internet, and you just keep adding enhancement after enhancement. At what point will you even be you? When you think about enhancing the brain, the idea is to improve your life—to make you smarter, or happier, maybe even to live longer, or have a sharper brain as you grow older—but what if those enhancements change us in such drastic ways that we’re no longer the same person?

SUSAN SCHNEIDER holds the Distinguished Scholar chair at the Library of Congress and is the director of the AI, Mind and Society (“AIMS”) Group at the University of Connecticut. Susan Schneider's Edge Bio Page


 

The Urban-Rural Divide

Why Geography Matters
[1.16.19]

When I describe an increasing correlation between density and Democratic voting that took off after the 1980s, this is the rise not only of globalization and the knowledge economy in that period, but also the rise of politics related to religion, gender, and the social transformations that came about in the ‘60s and ‘70s and then were politicized in the ‘80s. Before the 1980s, if one was a social conservative and anti-abortion, it was not clear whether one should be a Democrat or a Republican. That became much clearer in the 1980s, when the parties took very sharply different positions on those issues. One’s preferences on those issues are also highly correlated with population density.

Once we add this additional set of issues, it all starts to bunch together. The parties become increasingly separated in their geographies. The Democrats go from being a party of urban workers to also being a party of urban social progressives, which leads to further sorting of individuals into the parties. Knowing someone’s preferences and whether they call themselves a liberal or a conservative becomes much more predictive of whether they vote for Democrats or Republicans.

There's a real geographic story to that as well. The people who are sorting into the parties in this period are geographically located in ways that are quite clear. It all leads to an increase in this correlation between population density and Democratic voting. All that comes together, and we end up with two parties offering sets of policies that it might not even make much sense anymore to call left and right. It makes more sense to refer to them as urban and rural because of the way they’re packaged together.

JONATHAN RODDEN is a professor in the Political Science Department at Stanford and a Senior Fellow at the Hoover Institution. Jonathan Rodden's Edge Bio Page


 

The Social History of Religion

[12.12.18]

It’s been twenty-five years since I started working on this, and we now understand something quite different about the Gospel of Thomas. What it looks like more than anything else, when you put it in context with other historical material, is Jewish mystical thought, or Kabbalah. Kabbalah, we thought, was first known from written texts of the 10th to the 15th centuries from Spanish-Jewish communities. Before that, there was a prohibition on writing about secret teaching. It was mystical teaching that you were not supposed to write about, because you don't know what fool could get ahold of it if you did. So there was a prohibition on teaching mystical Judaism to anyone before he was thirty-five, and certainly never to women. People were old by thirty-five, so you had to be a mature Jewish man to have access to that kind of teaching.

I, and others who study Jewish mystical thought at the Hebrew University in Jerusalem, suspect that this tradition goes back 2,000 years. This text says it’s Jesus’ secret teaching. Could it be? It could be. I don't know if it is or not, but it’s fascinating to see that what rabbis called “mystical thought” was labeled by Christian bishops in the 4th century as heresy. That’s when I realized how religious imagination and politics coincide, because of the politics of the 4th century, when Christian bishops were beginning to ask who this Jesus of Nazareth was. Jesus was God in human form, and he’s the only one who is the Son of God in human form. So you can create a monopoly on divine energy and power with a religion that has the only access to the only person in the universe who ever channeled God directly, or was God and became human. That works very well for Orthodox Christianity. . . .

These discoveries are changing the way we understand how cultural traditions were shaped and how they became part of the culture in very different forms than they had begun. I find that enormously exciting. They involve everything from attitudes about gender and sexuality to attitudes about power and politics, about race, and gender, and ethnicity. That’s why I began to write about Adam and Eve. I mean, who cares about Adam and Eve? You realize that those traditions still play out in the culture—in the laws of the United States, or the laws of Britain, or the laws in Africa, the laws against homosexuality, and the ones that claim that the only true marriage can be a marriage between a man and a woman for the purpose of procreation. The Defense of Marriage Act was written by Professor Robert George at Princeton for G.W. Bush. These things still resonate, often very unconsciously, in the culture.

ELAINE PAGELS is the Harrington Spear Paine Professor of Religion at Princeton University. She is the author, most recently, of Why Religion?: A Personal Story. Elaine Pagels' Edge Bio Page

 


 

How Technology Changes Our Concept of the Self

[11.20.18]

The general project that I’m working on is about the self and technology—what we understand by the self and how it’s changed over time. My sense is that the self is not a universal and purely abstract thing that you’re going to get at through a philosophy of principles. Here’s an example: Sigmund Freud considered his notion of psychic censorship (of painful or forbidden thoughts) to be one of his greatest contributions to his account of who we are. His thoughts about these ideas came early, using as a model the specific techniques that Czarist border guards used to censor the importation of potentially dangerous texts into Russia. Later, Freud began to think of the censoring system in Vienna during World War I—techniques applied to every letter, postcard, telegram and newspaper—as a way of getting at what the mind does. Another example: Cyberneticians came to a different notion of self, accessible from the outside, identified with feedback systems—an account of the self that emerged from Norbert Wiener’s engineering work on weapons systems during World War II. Now I see a new notion of the self emerging; we start by modeling artificial intelligence on a conception of who we are, and then begin seeing ourselves ever more in our encounter with AI.

PETER GALISON is the Joseph Pellegrino University Professor of the History of Science and of Physics at Harvard University and Director of the Collection of Historical Scientific Instruments. Peter Galison's Edge Bio Page

 


 

When the Rule of Law Is Not Working

[10.11.18]

Corruption in general has a deleterious effect on the readiness of economic agents to invest. In the long run, it leads to a paralysis of economic life. But very often it is not that economic agents themselves have had the bad experience of being cheated and ruined; they just know that in this country, or in this part of the economy, or this building scene, there is a high likelihood that you will get cheated and that free riders can get away with it. Here again, reputation is absolutely essential, which is why transparency is so important. Trust can only be engendered by transparency. It's no coincidence that the name of the most influential non-governmental organization dealing with corruption is Transparency International.

KARL SIGMUND is professor of mathematics at the University of Vienna and one of the pioneers of evolutionary game theory. He is the author of Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science. Karl Sigmund's Edge Bio Page


 
