TECHNOLOGY

The Threat

[5.8.17]
https://vimeo.com/214714656

Although a security failure may be due to someone using the wrong type of access control mechanism or weak cypher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms, it’s simple and straightforward, but it’s often much more complicated when we start looking at how things actually fail in real life.
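
Anderson's point about misaligned incentives can be made concrete with a toy expected-cost model. In the sketch below, all numbers are invented for illustration: the guard who picks the effort level pays only for the effort, while someone else bears the breach loss, so the guard rationally underinvests relative to what would minimize total cost.

```python
# Toy model with invented numbers: a guard chooses a security effort
# level. The guard pays only the cost of effort; the user bears the
# expected loss from a breach. Compare the guard's choice with the
# effort that would minimize total (social) cost.

BREACH_LOSS = 1_000_000        # loss the user suffers if the system fails
EFFORT_LEVELS = range(0, 11)   # the guard can spend 0..10 units of effort

def effort_cost(effort):
    return 5_000 * effort      # what the guard pays for a given effort

def breach_probability(effort):
    return 0.5 / (1 + effort)  # more effort, lower chance of a breach

# The guard's private objective ignores the user's loss entirely.
guard_choice = min(EFFORT_LEVELS, key=effort_cost)

# The social objective counts both the effort and the expected loss.
social_choice = min(
    EFFORT_LEVELS,
    key=lambda e: effort_cost(e) + breach_probability(e) * BREACH_LOSS,
)

print(f"guard picks effort {guard_choice}; society wants {social_choice}")
# With these numbers: guard picks 0, society wants 9.
```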

People who are able to live digitally enhanced lives, in the sense that they can use all the available tools to the fullest extent, are very much more productive and capable and powerful than those who are still stuck in meatspace. It’s as if you had a forest where all the animals could see only in black and white and, suddenly, along comes a mutation in one of the predators allowing it to see in color. All of a sudden it gets to eat all the other animals, at least those who can’t see in color, and the other animals have no idea what’s going on. They have no idea why their camouflage doesn’t work anymore. They have no idea where the new threat is coming from. That’s the kind of change that happens once people get access to really powerful online services.
 
So long as it was the case that everybody who could be bothered to learn had access to AltaVista, or Google, or Facebook, or whatever, then that was okay. The problem we’re facing now is that more and more capable systems are no longer open to all. They’re open to the government, to big business, and to powerful advertising networks.
 
ROSS ANDERSON is professor of security engineering at Cambridge University, and one of the founders of the field of information security economics. He chairs the Foundation for Information Policy Research, is a fellow of the Royal Society and the Royal Academy of Engineering, and is a winner of the Lovelace Medal, the UK's top award in computing.

The Mind Bleeds Into the World

[1.24.17]
https://vimeo.com/200182778

Coming very soon is going to be augmented reality technology, where you see not only the physical world, but also virtual objects and entities that you perceive in the middle of them. We’ll put on augmented reality glasses and we’ll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that’s John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.

At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?
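
As a rough sketch of the recognition step described above, the function below matches a query face embedding against a small gallery by cosine similarity. The embed() function named in the comments is hypothetical; it stands in for whatever face-embedding model the glasses would actually run.

```python
import numpy as np

def recognize(query, gallery, threshold=0.7):
    """Return the name whose stored face embedding best matches the query,
    or None if nothing clears the similarity threshold.

    query: unit-length embedding of the face seen by the glasses.
    gallery: dict mapping a person's name to their unit-length embedding.
    """
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = float(np.dot(query, embedding))  # cosine similarity for unit vectors
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical usage, assuming some real embedding model `embed`:
#   gallery = {"John Brockman": embed(brockman_photo)}
#   print(recognize(embed(camera_frame), gallery))
```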

DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University. He is also Distinguished Professor of Philosophy at the Australian National University.

REALITY CLUB CONVERSATION: Donald D. Hoffman, Sean Carroll, Steve Omohundro, Thomas Metzinger

Defining Intelligence

[2.7.17]
https://vimeo.com/200202591

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite—it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set one program, or some small equivalence class of programs, does better than all the others; that's the program we should aim for.
 
That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes." 
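
Here is a minimal sketch of that idea with an invented machine and environment (an illustration of the concept, not Russell's formalism): enumerate the finite set of programs the machine can run, score each one against the environment class under the machine's time budget, and keep the best.

```python
import itertools

# Toy machine: it can make at most BUDGET guesses before the world
# changes. Toy environment class: a hidden number is drawn from a known
# set of possibilities; reward is the chance the program hits it.
BUDGET = 4
POSSIBLE_HIDDEN = [3, 17, 42, 99]

# The machine's finite program space: every fixed guess sequence of
# length <= BUDGET over a small alphabet the machine can represent.
ALPHABET = [0, 3, 17, 25, 42, 99]
programs = [
    seq
    for n in range(1, BUDGET + 1)
    for seq in itertools.product(ALPHABET, repeat=n)
]

def expected_reward(program):
    # Average success over the environment class. Programs longer than
    # the budget would be cut off by the changing world, so they score 0.
    if len(program) > BUDGET:
        return 0.0
    return sum(h in program for h in POSSIBLE_HIDDEN) / len(POSSIBLE_HIDDEN)

# Bounded optimality, toy version: the best program among those this
# finite machine can actually run, not an idealized unbounded reasoner.
best = max(programs, key=expected_reward)
print(best, expected_reward(best))   # e.g. (3, 17, 42, 99) with reward 1.0
```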
 
STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach.

How Should a Society Be?

[12.1.16]
https://vimeo.com/190617534

This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they're deliberately vague. This deliberate flexibility and ambiguity are what allow them to remain a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model, is this racially fair? We have to define these terms computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
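
To see what defining these terms numerically looks like, here is a minimal sketch with made-up labels and predictions: it computes the false positive rate and false negative rate separately for two protected groups, which is exactly the comparison the two candidate definitions above require.

```python
# Minimal sketch with made-up data: y_true are actual outcomes, y_pred
# are a model's decisions, and group tags each person's protected class.
y_true = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def rates(g):
    idx = [i for i, gi in enumerate(group) if gi == g]
    fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
    fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
    negatives = sum(1 for i in idx if y_true[i] == 0)
    positives = sum(1 for i in idx if y_true[i] == 1)
    return fp / negatives, fn / positives   # (FPR, FNR) for the group

for g in ("a", "b"):
    fpr, fnr = rates(g)
    print(f"group {g}: FPR={fpr:.2f}  FNR={fnr:.2f}")

# "Equal false positive rate" asks that FPR match across groups; "equal
# false negative rate" asks the same of FNR. In general a model cannot
# equalize every such metric at once, so a tradeoff must be chosen.
```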

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions.

Closing the Loop

[3.7.17]
https://vimeo.com/184581051

Closing the loop is a phrase used in robotics. Open-loop systems are when you take an action and you can't measure the results—there's no feedback. Closed-loop systems are when you take an action, you measure the results, and you change your action accordingly. Systems with closed loops have feedback loops; they self-adjust and quickly stabilize at optimal conditions. Systems with open loops overshoot and miss the target entirely.
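
A minimal sketch of the contrast, using an invented one-dimensional plant: the open-loop controller issues one precomputed command and never checks the result, while the closed-loop controller measures the remaining error each tick and corrects, so it settles on the target even though its model of the actuator is wrong.

```python
# Invented toy plant: the actuator is stronger than the controller
# thinks, applying each command with an unknown gain of 1.3.
TARGET = 10.0
ACTUATOR_GAIN = 1.3
STEPS = 20

def plant(position, command):
    return position + ACTUATOR_GAIN * command

# Open loop: issue one precomputed command and never measure the result.
open_pos = plant(0.0, TARGET)        # lands at 13.0: overshoots, stays wrong

# Closed loop: measure the remaining error each tick and command a
# fraction of it (a proportional controller).
GAIN = 0.5
closed_pos = 0.0
for _ in range(STEPS):
    error = TARGET - closed_pos      # feedback: measure the result
    closed_pos = plant(closed_pos, GAIN * error)

print(f"open loop:   {open_pos:.2f}")    # 13.00
print(f"closed loop: {closed_pos:.2f}")  # ~10.00, self-adjusted to target
```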
 
CHRIS ANDERSON is the CEO of 3D Robotics and founder of DIY Drones. He is the former editor-in-chief of Wired magazine.
