All Videos

Collective Awareness

[7.17.18]

Economic failures cause us serious problems. We need to build simulations of the economy at a much more fine-grained level that take advantage of all the data that computer technologies and the Internet provide us with. We need new technologies of economic prediction that take advantage of the tools we have in the 21st century.  

Places like the US Federal Reserve Bank make predictions using a system that has been developed over the last eighty years or so. This line of effort goes back to the middle of the 20th century, when people realized that we needed to keep track of the economy. They began to gather data and set up a procedure for having firms fill out surveys, for having the census collect data, for gathering a great deal of data on economic activity and processing it. This system is called “national accounting,” and it produces numbers like GDP, unemployment, and so on. The numbers arrive at a very slow timescale. Some of them come out once a quarter, some once a year. They are typically lagged because it takes a lot of time to process the data, and they are often revised as much as a year or two later. That data system works in tandem with the models that have been built on top of it, models that likewise process very aggregated, high-level summaries of what the economy is doing. The data is old fashioned and the models are old fashioned.

It's a 20th-century technology that's been refined in the 21st century. It's very useful, and it represents a high level of achievement, but it is now outdated. The Internet and computers have changed things. With the Internet, we can gather rich, detailed data about what the economy is doing at the level of individuals. We don't have to rely on surveys; we can just grab the data. Furthermore, with modern computer technology we could simulate what 300 million agents are doing, simulate the economy at the level of the individuals. We can simulate what every company is doing and what every bank is doing in the United States. The model we could build could be much, much better than what we have now. This is an achievable goal.
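To make the idea concrete, here is a minimal sketch of what an agent-level economic simulation can look like in code. This is purely illustrative and not Farmer's actual model; the Household class, the 0.8 propensity to consume, and the income distribution are all invented assumptions.

    import random

    class Household:
        """Toy agent: receives income each quarter, spends a fixed share, saves the rest."""
        def __init__(self, income):
            self.income = income
            self.savings = 0.0

        def step(self):
            spending = 0.8 * self.income      # assumed propensity to consume
            self.savings += self.income - spending
            return spending

    def simulate(n_agents=1000, n_quarters=8, seed=0):
        """Aggregate the spending of many individual agents, quarter by quarter."""
        random.seed(seed)
        agents = [Household(random.lognormvariate(10, 0.5)) for _ in range(n_agents)]
        for q in range(n_quarters):
            total = sum(agent.step() for agent in agents)
            print(f"Q{q + 1}: aggregate spending = {total:,.0f}")

    simulate()

A model of the kind described in the conversation would give agents heterogeneous behavior, let firms and banks interact, and calibrate everything against the fine-grained data the Internet now provides, but the basic structure is the same: simulate the individuals, then read macro quantities off the aggregate.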

But we're not doing that, nothing close to that. We could achieve what I just said with a technological system that’s simpler than Google search. But we’re not doing that. We need to do it. We need to start creating a new technology for economic prediction that runs side-by-side with the old one, that makes its predictions in a very different way. This could give us a lot more guidance about where we're going and help keep the economic shit from hitting the fan as often as it does.

J. DOYNE FARMER is director of the Complexity Economics programme at the Institute for New Economic Thinking at the Oxford Martin School, professor in the Mathematical Institute at the University of Oxford, and an external professor at the Santa Fe Institute. He was a co-founder of Prediction Company, a quantitative automated trading firm that was sold to UBS in 2006. J. Doyne Farmer's Edge Bio Page


 

Absolute Brain Size Matters

[6.28.18]

The thing that stuck out was that self-control is simply a product of absolute brain size. It had more to do with your feeding ecology: How complex was your diet? How many things do you rely on to survive? That was a big surprise, because the idea that diet is shaping cognition has faded in many circles as the leading hypothesis for thinking about how psychology evolves. So, how do we move forward on testing ideas about the evolution of psychology? ... It's interesting to think about how this all came about. It all started in a bar.

BRIAN HARE is an associate professor of evolutionary anthropology at Duke University in North Carolina and founder of the Duke Canine Cognition Center. He is the co-author (with Vanessa Woods) of The Genius of Dogs: How Dogs Are Smarter Than You Think. Brian Hare's Edge Bio Page


 

The Connectomic Revolution

What the Insect Brain Can Tell Us About Ourselves
[6.12.18]

An even more recent and exciting revolution happening now is this connectomic revolution, where we’re able to map in exquisite detail the connections of a part of the brain, and soon even an entire insect brain. It’s giving us absolute answers to questions that we would have debated even just a few years ago; for example, does the insect brain work as an integrated system? And because we now have a draft of a connectome for the full insect brain, we can absolutely answer that question. That completely changes not just the questions that we’re asking, but our capacity to answer questions. There’s a whole new generation of questions that become accessible.

When I say a connectome, what I mean is an absolute map of the neural connections in a brain. That’s not a trivial problem. At one level it's possible, with a light microscope for example, to get a sense of the structure of neurons, to reconstruct some neurons and see where they go, but knowing which neurons connect with which other neurons requires another level of detail. You need electron microscopy to look at the synapses.
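As a concrete illustration of what a connectome is as a data structure, here is a minimal sketch of one way to represent it: a directed graph whose edges record how many synapses connect one neuron to another. The neuron names and synapse counts below are hypothetical and not taken from any real reconstruction.

    from collections import defaultdict

    class Connectome:
        """Directed graph: connections[pre][post] = number of synapses observed."""
        def __init__(self):
            self.connections = defaultdict(lambda: defaultdict(int))

        def add_synapse(self, pre, post, count=1):
            self.connections[pre][post] += count

        def outputs(self, neuron):
            """Which neurons this neuron synapses onto, and how strongly."""
            return dict(self.connections[neuron])

    # Hypothetical entries, purely for illustration
    c = Connectome()
    c.add_synapse("KC_17", "MBON_03", count=12)
    c.add_synapse("KC_17", "MBON_11", count=4)
    print(c.outputs("KC_17"))   # {'MBON_03': 12, 'MBON_11': 4}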

ANDREW BARRON is an Australian Research Council Future Fellow and Deputy Head of the Department of Biological Sciences at Macquarie University. He is a neuroethologist with a particular focus on studying the neural mechanisms of honey bees. Andrew Barron's Edge Bio Page


 

Bonding with Your Algorithm

[6.5.18]

The relationship between parents and children is the most important relationship. It gets more complicated in this case because, beyond the children being our natural children, we can influence them even further. We can influence them biologically, and we can use artificial intelligence as a new tool. I’m not a scientist or a technologist whatsoever, but the tools of artificial intelligence, in theory, are algorithm- or computer-based. In reality, I would argue that even an algorithm is biological because it comes from somewhere. It doesn’t come from itself. If it’s related to us as creators or as the ones who are, let’s say, enabling the algorithms, well, we’re the parents.

Who are those children that we are creating? What do we want them to be like as part of the earth, compared to us as a species and, frankly, compared to us as parents? They are our children. We are the parents. How will they treat us as parents? How do we treat our own parents? How do we treat our children? We have to think of these in the exact same way. Separating technology and humans the way we often think about these issues is almost wrong. If it comes from us, it’s the same thing. We have a responsibility. We have the power and the imagination to shape this future generation. It’s exciting, but let’s just make sure that they view us as their parents. If they view us as their parents, we will have a connection.

Investor and philanthropist NICOLAS BERGGRUEN is the chairman of the Berggruen Institute, and founder of the 21st Century Council, the Council for the Future of Europe, and the Think Long Committee for California. Nicolas Berggruen's Edge Bio Page


 

Sexual Double Standards

The Bias Against Understanding the Biological Foundations of Women's Behavior
[5.24.18]

We don’t know enough about important issues that impact women. We don’t know enough about potential side effects of using hormonal contraception. There’s a lot of speculation about it, but most of that speculation is problematic. If you eliminate women’s hormone cycles, what are the implications? That’s an important question. We still don’t know enough about hormone supplements for women later in life. We don’t even know enough about fertility. The data are also problematic. The data on fertility in women’s third, fourth, fifth decades of life are based on ancient records, 200 years old. The statistics that doctors will cite when they are telling women whether they need to see a fertility specialist or not are from a period before modern medicine was really in place, which is outrageous. More recognition of the biological influences on women’s behavior is going to awaken these areas of research, and that will have a positive impact.

MARTIE HASELTON is a professor of psychology and communication studies at the Institute for Society and Genetics at UCLA. She is the author of Hormonal: The Hidden Intelligence of Hormones—How They Drive Desire, Shape Relationships, Influence Our Choices, and Make Us Wiser. Martie Haselton's Edge Bio

 


 

The Space of Possible Minds

[5.18.18]

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level intelligence or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans, but can occur independently, or some subset of these can occur on its own in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world. We very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind. Murray Shanahan's Edge Bio Page


 

Looking in the Wrong Places

[4.30.18]

We should be very careful in thinking about whether we’re working on the right problems. If we don’t, that feeds into the problem that we don’t have experimental evidence that could move us forward. The theories we’re trying to develop are what we use to decide which experiments are worth doing, and those are the experiments that we build.

We build particle detectors and try to find dark matter; we build larger colliders in the hope of producing new particles; we shoot satellites into orbit and try to look back into the early universe, and we do that because we hope there’s something new to find there. We think there is because we have some idea from the theories that we’ve been working on that this would be something good to probe.

If we are working with the wrong theories, we are making the wrong extrapolations, we have the wrong expectations, we do the wrong experiments, and then we don’t get any new data. We have no guidance to develop these theories. So, it’s a chicken and egg problem. We have to break the cycle. I don’t have a miracle cure for these problems. These are hard problems. It’s not clear which theory would be a good one to develop. I’m not any wiser than all the other 20,000 people in the field.

SABINE HOSSENFELDER is a research fellow at the Frankfurt Institute for Advanced Studies, an independent, multidisciplinary think tank dedicated to theoretical physics and adjacent fields. She is also a singer-songwriter whose music videos appear on her website sabinehossenfelder.com (see video below). Sabine Hossenfelder's Edge Bio Page


 

How To Be a Systems Thinker

[4.17.18]

Until fairly recently, artificial intelligence didn’t learn. To create a machine that learns to think more efficiently was a big challenge. In the same sense, one of the things I wonder about is how we'll be able to teach a machine to know what it doesn’t know, and what it might need to know, in order to address a particular issue productively and insightfully. This is a huge problem for human beings. It takes a while for us to learn to solve problems. And then it takes even longer for us to realize what we don’t know that we would need to know to solve a particular problem, which obviously involves a lot of complexity.

How do you deal with ignorance? I don’t mean how do you shut ignorance out. Rather, how do you deal with an awareness of what you don’t know, and you don’t know how to know, in dealing with a particular problem? When Gregory Bateson was arguing about human purposes, that was where he got involved in environmentalism. We were doing all sorts of things to the planet we live on without recognizing what the side effects would be and the interactions. Although, at that point we were thinking more about side effects than about interactions between multiple processes. Once you begin to understand the nature of side effects, you ask a different set of questions before you make decisions and projections and analyze what’s going to happen.

MARY CATHERINE BATESON is a writer and cultural anthropologist. In 2004 she retired from her position as Clarence J. Robinson Professor in Anthropology and English at George Mason University, and is now Professor Emerita. Mary Catherine Bateson's Edge Bio


 

We Are Here To Create

[3.26.18]

My original dream of finding who we are and why we exist ended in failure. Even though we invented all these wonderful tools that will be great for our future, for our kids, for our society, we have not figured out why humans exist. What is interesting to me is that once we understand that these AI tools are taking over repetitive tasks, it tells us that doing repetitive tasks can’t be what makes us human. The arrival of AI will at least remove what cannot be our reason for existence on this earth. If that’s half of our job tasks, then that’s half of our time back to thinking about why we exist. One very valid reason for existing is that we are here to create. What AI cannot do is perhaps a potential reason for why we exist. One such direction is that we create. We invent things. We celebrate creation. We’re very creative about the scientific process, about curing diseases, about writing books and movies, about telling stories, about doing a brilliant job in marketing. This is our creativity that we should celebrate, and that’s perhaps what makes us human.

KAI-FU LEE, the founder of the Beijing-based Sinovation Ventures, is ranked #1 in technology in China by Forbes. Educated as a computer scientist at Columbia and Carnegie Mellon, his distinguished career includes working as a research scientist at Apple; Vice President of the Web Products Division at Silicon Graphics; Corporate Vice President at Microsoft and founder of Microsoft Research Asia in Beijing, one of the world’s top research labs; and then Google Corporate President and President of Google Greater China. As an internet celebrity, he has more than fifty million followers on the Chinese micro-blogging website Weibo. As an author, among his seven bestsellers in the Chinese language, two have sold more than one million copies each. His first book in English is AI Superpowers: China, Silicon Valley, and the New World Order (forthcoming, September). Kai-Fu Lee's Edge Bio page


 

A Common Sense

[3.15.18]

We need to acknowledge our profound ignorance and begin to craft a culture that will be based on some notion of communalism and interspecies symbiosis rather than survival of the fittest. These concepts are available and fully elaborated by, say, a biologist like Lynn Margulis, but they're still not the central paradigm. They’re still not organizing our research or driving our culture and our cultural evolution. That’s what I’m frustrated with. There’s so much good intellectual work, so much good philosophy, so much good biology—how can we make that more central to what we do? 

CAROLINE A. JONES is professor of art history in the History, Theory, Criticism section of the Department of Architecture at MIT. Caroline A. Jones's Edge Bio page 


 
