Videos by topic: TECHNOLOGY

The Space of Possible Minds

[5.18.18]

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level intelligence or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans, but which can occur independently, so that some subset of them might occur on its own in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world, where we would very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind. Murray Shanahan's Edge Bio Page



How To Be a Systems Thinker

[4.17.18]

Until fairly recently, artificial intelligence didn't learn. To create a machine that learns to think more efficiently was a big challenge. In the same sense, one of the things that I wonder is how we'll be able to teach a machine to know what it doesn't know and might need to know in order to address a particular issue productively and insightfully. This is a huge problem for human beings. It takes a while for us to learn to solve problems. And then it takes even longer for us to realize what we don't know that we would need to know to solve a particular problem, which obviously involves a lot of complexity.

How do you deal with ignorance? I don't mean how do you shut ignorance out. Rather, how do you deal with an awareness of what you don't know, and don't know how to know, in dealing with a particular problem? When Gregory Bateson was arguing about human purposes, that was where he got involved in environmentalism. We were doing all sorts of things to the planet we live on without recognizing what the side effects and the interactions would be. Although at that point we were thinking more about side effects than about interactions between multiple processes. Once you begin to understand the nature of side effects, you ask a different set of questions before you make decisions and projections and analyze what's going to happen.

MARY CATHERINE BATESON is a writer and cultural anthropologist. In 2004 she retired from her position as Clarence J. Robinson Professor in Anthropology and English at George Mason University, and is now Professor Emerita. Mary Catherine Bateson's Edge Bio



We Are Here To Create

[3.26.18]

My original dream of finding who we are and why we exist ended in failure. Even though we invented all these wonderful tools that will be great for our future, for our kids, for our society, we have not figured out why humans exist. What is interesting to me is that, in understanding that these AI tools are doing repetitive tasks, it certainly comes back to tell us that doing repetitive tasks can't be what makes us human. The arrival of AI will at least remove what cannot be our reason for existence on this earth. If that's half of our job tasks, then that's half of our time back to thinking about why we exist. One very valid reason for existing is that we are here to create. What AI cannot do is perhaps a potential reason for why we exist. One such direction is that we create. We invent things. We celebrate creation. We're very creative about the scientific process, about curing diseases, about writing books, writing movies, creative about telling stories, doing a brilliant job in marketing. This is our creativity that we should celebrate, and that's perhaps what makes us human.

KAI-FU LEE, the founder of the Beijing-based Sinovation Ventures, is ranked #1 in technology in China by Forbes. Educated as a computer scientist at Columbia and Carnegie Mellon, his distinguished career includes working as a research scientist at Apple; Vice President of the Web Products Division at Silicon Graphics; Corporate Vice President at Microsoft and founder of Microsoft Research Asia in Beijing, one of the world's top research labs; and then Google Corporate President and President of Google Greater China. As an internet celebrity, he has fifty million+ followers on the Chinese micro-blogging website Weibo. As an author, among his seven bestsellers in the Chinese language, two have sold more than one million copies each. His first book in English is AI Superpowers: China, Silicon Valley, and the New World Order (forthcoming, September). Kai-Fu Lee's Edge Bio page



The Human Strategy

[10.30.17]

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?

That begins to sound like a society or a company. We all live in a human social network. We're reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what's the right way to do that? Is it a safe idea? Is it completely crazy?
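As a minimal sketch of this reinforcement idea, assuming a toy setup in which each "unit" (a person, in Pentland's analogy) makes a signed contribution to a group outcome and has its weight nudged up or down by the group's reward, something like the following could work. The names, numbers, and update rule are illustrative assumptions, not anything Pentland specifies.

    # Toy credit assignment over "human neurons": units whose contributions
    # align with the group's reward get reinforced, the rest get discouraged.
    # The learning rate and contribution scores are illustrative assumptions.

    def reinforce(weights, contributions, reward, learning_rate=0.1):
        """Nudge each unit's weight by how much its contribution aligned with the reward."""
        return {
            unit: max(0.0, w + learning_rate * reward * contributions[unit])
            for unit, w in weights.items()
        }

    # Three "units" suggest actions; the group outcome earned a reward of +1.
    weights = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
    contributions = {"alice": 0.8, "bob": -0.3, "carol": 0.1}  # signed helpfulness
    print(reinforce(weights, contributions, reward=1.0))       # alice up, bob down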

ALEX "SANDY" PENTLAND is a professor at MIT and director of the MIT Connection Science and Human Dynamics labs. He is a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General. He is the author of Social Physics and Honest Signals. Sandy Pentland's Edge Bio page



Reality is an Activity of the Most August Imagination

[10.2.17]

Wallace Stevens had an immense insight into the way that we write the world. We don't just read it, we don't just see it, we don't just take it in. In "An Ordinary Evening in New Haven," he talks about the dialogue between what he calls the Naked Alpha and the hierophant Omega, the beginning, the raw stuff of reality, and what we make of it. He also said reality is an activity of the most august imagination.

Our job is to imagine a better future, because if we can imagine it, we can create it. But it starts with that imagination. The future that we can imagine shouldn't be a dystopian vision of robots that are wiping us out, of climate change that is going to destroy our society. It should be a vision of how we will rise to the challenges that we face in the next century, that we will build an enduring civilization, and that we will build a world that is better for our children and grandchildren and great-grandchildren. That we will become one of those long-lasting species rather than a flash in the pan that wipes itself out because of its lack of foresight.

We are at a critical moment in human history. In the small, we are at a critical moment in our economy, where we have to make it work better for everyone, not just for a select few. But in the large, we have to make it better in the way that we deal with long-term challenges and long-term problems.

TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., and the author of WTF?: What’s the Future and Why It’s Up to Us. Tim O'Reilly's Edge Bio page


 

The Threat

[5.8.17]

Although a security failure may be due to someone using the wrong type of access control mechanism or a weak cypher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms, it's simple and straightforward, but it's often much more complicated when we start looking at how things actually fail in real life.

ROSS ANDERSON is a professor of security engineering at Cambridge University, and one of the founders of the field of information security economics. He chairs the Foundation for Information Policy Research, and is a fellow of the Royal Society and the Royal Academy of Engineering. Ross Anderson's Edge Bio Page



Closing the Loop

[3.7.17]

Closing the loop is a phrase used in robotics. Open-loop systems are when you take an action and you can't measure the results—there's no feedback. Closed-loop systems are when you take an action, you measure the results, and you change your action accordingly. Systems with closed loops have feedback; they self-adjust and quickly stabilize in optimal conditions. Systems with open loops overshoot; they can miss the target entirely.
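A toy sketch of that contrast, assuming a trivially simple system that just accumulates the command: the open-loop controller acts without ever measuring the result, while the closed-loop controller measures the error each step and corrects for it. The gain, step count, and system model are illustrative assumptions, not anything from the talk.

    # Open loop: issue a fixed command, never measure, typically miss the target.
    def open_loop(setpoint, steps=20, command=0.7):
        state = 0.0
        for _ in range(steps):
            state += command             # act blindly; no feedback
        return state                     # ends at 14.0 for a setpoint of 10.0

    # Closed loop: measure the error each step and adjust the action accordingly.
    def closed_loop(setpoint, steps=20, gain=0.5):
        state = 0.0
        for _ in range(steps):
            error = setpoint - state     # measure the result
            state += gain * error        # correct the action
        return state                     # converges to ~10.0

    print(open_loop(10.0), closed_loop(10.0))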

CHRIS ANDERSON is the CEO of 3D Robotics and founder of DIY Drones. He is the former editor-in-chief of Wired magazine. Chris Anderson's Edge Bio Page

 



Defining Intelligence

[2.7.17]

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite—it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set one, or some small equivalence class of programs, does better than all the others; that's the program that we should aim for.

That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes." 
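As a toy illustration of the idea, one might enumerate a finite program space and keep whichever program scores best over a class of environments within a fixed computation budget. Here the "programs" are tiny threshold policies and the "environments" are short lists of observation/action pairs, which are illustrative assumptions rather than Russell's formal setup.

    from itertools import product

    # Score one "program" (a threshold policy) on one environment, under a
    # finite computation budget: it only gets to look at the first few steps.
    def run(program, environment, budget=100):
        threshold, act_hi, act_lo = program
        score = 0
        for observation, correct_action in environment[:budget]:
            action = act_hi if observation >= threshold else act_lo
            score += 1 if action == correct_action else 0
        return score

    # A small class of environments: (observation, correct action) pairs.
    environments = [
        [(x, "go" if x >= 5 else "stop") for x in range(10)],
        [(x, "go" if x >= 7 else "stop") for x in range(10)],
    ]

    # The finite machine can only run this finite set of programs.
    programs = list(product(range(10), ["go", "stop"], ["go", "stop"]))

    # The bounded-optimal program: best total score over the environment class.
    best = max(programs, key=lambda p: sum(run(p, env) for env in environments))
    print(best)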

STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach. Stuart Russell's Edge Bio Page

 



The Mind Bleeds Into the World

[1.24.17]

Coming very soon is augmented reality technology, where you see the physical world but also virtual objects and entities that you perceive in the midst of it. We'll put on augmented reality glasses and we'll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that's John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.

At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?

DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University, and also Distinguished Professor of Philosophy at the Australian National University. David Chalmers's Edge Bio Page



How Should a Society Be?

[12.1.16]

This is another example where AI, in this case machine-learning methods, intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they're deliberately vague. This deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model, is this racially fair? We have to define these terms computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
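As a sketch of what making one of these definitions explicit might look like, the snippet below computes false positive and false negative rates separately for two protected groups so they can be compared. The records and group labels are invented for illustration and are not from any real model.

    # Per-group error rates from (predicted, actual) boolean pairs; comparing
    # them across groups is one concrete way to cash out "fairness".
    def error_rates(records):
        fp = sum(1 for pred, actual in records if pred and not actual)
        fn = sum(1 for pred, actual in records if not pred and actual)
        negatives = sum(1 for _, actual in records if not actual)
        positives = sum(1 for _, actual in records if actual)
        return (fp / negatives if negatives else 0.0,
                fn / positives if positives else 0.0)

    # Invented decisions, split by a protected attribute.
    by_group = {
        "group_a": [(True, True), (True, False), (False, False), (False, True)],
        "group_b": [(True, True), (False, False), (False, False), (False, True)],
    }

    for group, records in by_group.items():
        fpr, fnr = error_rates(records)
        print(f"{group}: FPR={fpr:.2f}  FNR={fnr:.2f}")
    # Here the groups have equal false negative rates but unequal false
    # positive rates, exactly the kind of tradeoff being described.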

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page

 

