Edge Video Library

Sexual Double Standards

The Bias Against Understanding the Biological Foundations of Women's Behavior
Martie Haselton
[5.24.18]

We don’t know enough about important issues that impact women. We don’t know enough about potential side effects of using hormonal contraception. There’s a lot of speculation about it, but most of that speculation is problematic. If you eliminate women’s hormone cycles, what are the implications? That’s an important question. We still don’t know enough about hormone supplements for women later in life. We don’t even know enough about fertility. The data are also problematic. The data on fertility in women’s third, fourth, fifth decades of life are based on ancient records, 200 years old. The statistics that doctors will cite when they are telling women whether they need to see a fertility specialist or not are from a period before modern medicine was really in place, which is outrageous. More recognition of the biological influences on women’s behavior is going to awaken these areas of research, and that will have a positive impact.

MARTIE HASELTON is a professor of psychology and communication studies at the Institute for Society and Genetics at UCLA. She is the author of Hormonal: The Hidden Intelligence of Hormones—How They Drive Desire, Shape Relationships, Influence Our Choices, and Make Us Wiser. Martie Haselton's Edge Bio

 


 

The Space of Possible Minds

Murray Shanahan
[5.18.18]

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level intelligence or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans, but can occur independently, or some subset of these can occur on its own in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world. We very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind. Murray Shanahan's Edge Bio Page


 

Looking in the Wrong Places

Sabine Hossenfelder
[4.30.18]

We should be very careful in thinking about whether we’re working on the right problems. If we don’t, that ties into the problem that we don’t have experimental evidence that could move us forward. We're trying to develop theories that we use to find out which are good experiments to make, and these are the experiments that we build.  

We build particle detectors and try to find dark matter; we build larger colliders in the hope of producing new particles; we shoot satellites into orbit and try to look back into the early universe, and we do that because we hope there’s something new to find there. We think there is because we have some idea from the theories that we’ve been working on that this would be something good to probe.

If we are working with the wrong theories, we are making the wrong extrapolations, we have the wrong expectations, we make the wrong experiments, and then we don’t get any new data. We have no guidance to develop these theories. So, it’s a chicken and egg problem. We have to break the cycle. I don’t have a miracle cure to these problems. These are hard problems. It’s not clear what a good theory is to develop. I’m not any wiser than all the other 20,000 people in the field.

SABINE HOSSENFELDER is a research fellow at the Frankfurt Institute for Advanced Studies, an independent, multidisciplinary think tank dedicated to theoretical physics and adjacent fields. She is also a singer-songwriter whose music videos appear on her website sabinehossenfelder.com (see video below). Sabine Hossenfelder's Edge Bio Page


 

HOW TO BE A SYSTEMS THINKER

Mary Catherine Bateson
[4.17.18]

Until fairly recently, artificial intelligence didn’t learn. To create a machine that learns to think more efficiently was a big challenge. In the same sense, one of the things I wonder is how we'll be able to teach a machine to know what it doesn’t know and what it might need to know in order to address a particular issue productively and insightfully. This is a huge problem for human beings. It takes a while for us to learn to solve problems. And then it takes even longer for us to realize what we don’t know that we would need to know to solve a particular problem, which obviously involves a lot of complexity.

How do you deal with ignorance? I don’t mean how do you shut ignorance out. Rather, how do you deal with an awareness of what you don’t know, and you don’t know how to know, in dealing with a particular problem? When Gregory Bateson was arguing about human purposes, that was where he got involved in environmentalism. We were doing all sorts of things to the planet we live on without recognizing what the side effects would be and the interactions. Although, at that point we were thinking more about side effects than about interactions between multiple processes. Once you begin to understand the nature of side effects, you ask a different set of questions before you make decisions and projections and analyze what’s going to happen.

MARY CATHERINE BATESON is a writer and cultural anthropologist. In 2004 she retired from her position as Clarence J. Robinson Professor in Anthropology and English at George Mason University, and is now Professor Emerita. Mary Catherine Bateson's Edge Bio


 

We Are Here To Create

Kai-Fu Lee
[3.26.18]

My original dream of finding who we are and why we exist ended in failure. Even though we invented all these wonderful tools that will be great for our future, for our kids, for our society, we have not figured out why humans exist. What is interesting for me is that in understanding that these AI tools are doing repetitive tasks, it certainly comes back to tell us that doing repetitive tasks can’t be what makes us humans. The arrival of AI will at least remove what cannot be our reason for existence on this earth. If that’s half of our job tasks, then that’s half of our time back to thinking about why we exist. One very valid reason for existing is that we are here to create. What AI cannot do is perhaps a potential reason for why we exist. One such direction is that we create. We invent things. We celebrate creation. We’re very creative about scientific process, about curing diseases, about writing books, writing movies, creative about telling stories, doing a brilliant job in marketing. This is our creativity that we should celebrate, and that’s perhaps what makes us human.

KAI-FU LEE, the founder of the Beijing-based Sinovation Ventures, is ranked #1 in technology in China by Forbes. Educated as a computer scientist at Columbia and Carnegie Mellon, his distinguished career includes working as a research scientist at Apple; Vice President of the Web Products Division at Silicon Graphics; Corporate Vice President at Microsoft and founder of Microsoft Research Asia in Beijing, one of the world’s top research labs; and then Google Corporate President and President of Google Greater China. As an internet celebrity, he has fifty million+ followers on the Chinese micro-blogging website Weibo. As an author, among his seven bestsellers in the Chinese language, two have sold more than one million copies each. His first book in English is AI Superpowers: China, Silicon Valley, and the New World Order (forthcoming, September). Kai-Fu Lee's Edge Bio page


 

A Common Sense

Caroline A. Jones
[3.15.18]

We need to acknowledge our profound ignorance and begin to craft a culture that will be based on some notion of communalism and interspecies symbiosis rather than survival of the fittest. These concepts are available and fully elaborated by, say, a biologist like Lynn Margulis, but they're still not the central paradigm. They’re still not organizing our research or driving our culture and our cultural evolution. That’s what I’m frustrated with. There’s so much good intellectual work, so much good philosophy, so much good biology—how can we make that more central to what we do? 

CAROLINE A. JONES is professor of art history in the History, Theory, Criticism section of the Department of Architecture at MIT. Caroline A. Jones's Edge Bio page 


 

Church Speaks

George Church
[2.14.18]

The biggest energy creators in the world, the ones that take solar energy and turn it into a form that’s useful to humans, are these photosynthetic organisms. The cyanobacteria fix [carbon via] light as well or better than land plants. Under ideal circumstances, they can be maybe seven to ten times more productive per photon. . . .

Cyanobacteria turn carbon dioxide, a global warming gas, into carbohydrates and other carbon-containing polymers, which sequester the carbon so that they're no longer global warming gases. They turn it into their own bodies. They do this on such a big scale that about 15 percent of the carbon dioxide in the atmosphere is fixed every year by these cyanobacteria, which is roughly the amount that we’re off from the pre-industrial era. If all of the material that they fix didn’t turn back into carbon dioxide, we’d have solved the global warming problem in a year or two. The reality, however, is that almost as soon as they divide and make baby bacteria, phages break them open, spilling their guts, and they start turning into carbon dioxide. Then all the other things around them start chomping on the bits left over from the phages.

GEORGE CHURCH is professor of genetics at Harvard Medical School, director of the Personal Genome Project, and co-author (with Ed Regis) of Regenesis. George Church's Edge Bio page 


 

The State of Informed Bewilderment

John Naughton
[1.3.18]

In relation to the Internet and the changes it has already brought in our society, my feeling is that although we don’t know really where it’s heading because it’s too early in the change, we’ve had one stroke of luck. The stroke of luck was that, as a species, we’ve conducted this experiment once before. We’re living through a transformation of our information environment. This happened once before, and we know quite a lot about it. It was kicked off in 1455 by Johannes Gutenberg and his invention of printing by movable type.

In the centuries that followed, that invention not only transformed humanity’s information environment, it also led to colossal changes in society and the world. You could say that what Gutenberg kicked off was a world in which we were all born. Even now, it’s the world in which most of us were shaped. That’s changing for younger generations, but that’s the case for people like me.

JOHN NAUGHTON is a senior research fellow at Cambridge University's Centre for Research in the Arts, Social Sciences and Humanities. He is an Internet columnist for the London Observer, and author of From Gutenberg to Zuckerberg. John Naughton's Edge Bio page 


 

"A Difference That Makes a Difference"

Daniel C. Dennett
[11.22.17]

Having turned my back on propositions, I thought, what am I going to do about this? The area where it really comes up is when you start looking at the contents of consciousness, which is my number one topic. I like to quote Maynard Keynes on this. He was once asked, “Do you think in words or pictures?” to which he responded, “I think in thoughts.” It was a wonderful answer, but also wonderfully uninformative. What the hell’s a thought then? How does it carry information? Is it like a picture? Is it iconic in some way? Does it resemble what it’s about, or is it like a word that refers to what it’s about without resembling it? Are there third, fourth, fifth alternatives? Looking at information in the brain and then trying to trace it back to information in the genes that must be responsible for providing the design of the brain that can then carry information in other senses, you gradually begin to realize that this does tie in with Shannon-Weaver information theory. There’s a way of seeing information as "a difference that makes a difference," to quote Donald MacKay and Bateson.

Ever since then, I’ve been trying to articulate, with the help of Harvard evolutionary biologist David Haig, just what meaning is, what content is, and ultimately, in terms of biological information and physical information, the information of Shannon and Weaver. There’s a chapter in my latest book called “What is Information?” I stand by it, but it’s under revision. I’m already moving beyond it and realizing there’s a better way of tackling some of these issues.

DANIEL C. DENNETT is the Austin B. Fletcher Professor of Philosophy and co-director of the Center for Cognitive Studies at Tufts University. He is the author, most recently, of From Bacteria to Bach and Back: The Evolution of Minds. Daniel C. Dennett's Edge Bio page

 


 

The Human Strategy

Alex "Sandy" Pentland
[10.30.17]

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?

That begins to sound like a society or a company. We all live in a human social network. We're reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what's the right way to do that? Is it a safe idea? Is it completely crazy?
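The "human AI" Pentland describes is, at bottom, a credit assignment function applied to people instead of neurons: agents whose contributions help the group get up-weighted, and those whose contributions hurt get down-weighted. A minimal sketch in Python (the function name, scoring scheme, and learning rate are illustrative assumptions, not anything Pentland specifies):

```python
def human_ai_round(weights, contributions, lr=0.1):
    """One round of credit assignment over a network of 'human neurons'.

    weights[i] is how much agent i's input currently counts;
    contributions[i] scores this round (positive = helped, negative = hurt).
    Helpful agents are reinforced, unhelpful ones discouraged, and no
    agent's weight drops below zero.
    """
    return [max(0.0, w + lr * c) for w, c in zip(weights, contributions)]

# Three agents start with equal influence; agent 0 helped, agent 1 hurt.
weights = human_ai_round([1.0, 1.0, 1.0], [0.5, -0.2, 0.0])
```

After one round, agent 0's weight rises, agent 1's falls, and agent 2's is unchanged; iterating this update is what makes the network as a whole "smarter" in the sense Pentland describes.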

ALEX "SANDY" PENTLAND is a professor at MIT, and director of the MIT Connection Science and Human Dynamics labs. He is a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General. He is the author of Social Physics and Honest Signals. Sandy Pentland's Edge Bio page


 
