Master of Ceremonies in the Cyber Salon
By Andrea Köhler, 11.3.2017
never really interested him. "My interests were always strictly cultural."
"Closing the loop" is a phrase used in robotics. In an open-loop system, you take an action but cannot measure the result: there is no feedback. In a closed-loop system, you take an action, measure the result, and adjust your next action accordingly. Systems with closed loops have feedback; they self-adjust and quickly stabilize at optimal conditions. Systems with open loops overshoot and miss the target entirely.
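The contrast above can be sketched in a few lines of code. This is a minimal illustration of my own (not from the text): both controllers try to drive a value to a target, but only the closed-loop one measures its error and corrects course.

```python
def open_loop(target, steps=20):
    """Apply a fixed, precomputed action each step -- no feedback."""
    position = 0.0
    action = target / 10  # guessed once up front, never corrected
    for _ in range(steps):
        position += action
    return position

def closed_loop(target, steps=20, gain=0.5):
    """Measure the error each step and adjust the action accordingly."""
    position = 0.0
    for _ in range(steps):
        error = target - position   # feedback: measure the result
        position += gain * error    # act in proportion to the error
    return position

print(open_loop(5.0))    # fixed action keeps going: ends at 10.0, past the target
print(closed_loop(5.0))  # error shrinks each step: settles near 5.0
```

The open-loop controller's initial guess is never revisited, so any miscalibration compounds; the closed-loop controller's error shrinks geometrically toward zero, which is the self-stabilizing behavior the passage describes.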
CHRIS ANDERSON is the CEO of 3D Robotics and founder of DIY Drones. He is the former editor-in-chief of Wired magazine.
Contrary to the standard view of reason as a capacity that enhances the individual's cognitive powers—the standard image is of Rodin's "Thinker," thinking on his own and discovering new ideas—what we say now is that the basic functions of reason are social. They have to do with the fact that we interact with each other's bodies and with each other's minds. To interact with others' minds is to be able to represent a representation that others have, and to have them represent our representations, and also to act on the representations of others and, in some cases, let others act on our own representations.
The kinds of achievements often cited as proof that reason is so superior, like scientific achievements, are not achievements of individual minds or of individual reason; they are collective achievements—typically a product of social interaction over generations. They are social, cultural products, where many minds had to interact in complex ways and progressively explore many directions, hitting on some not because they were more reasonable than others but because they were luckier. And then they used their reason to defend what they had hit on by luck. Reason is a remarkable cognitive capacity, as are so many cognitive capacities in humans and animals, but it is not a superpower.
DAN SPERBER is a Paris-based social and cognitive scientist. He holds an emeritus research professorship at the French Centre National de la Recherche Scientifique (CNRS), Paris, and he is currently at Central European University, Budapest. He is the creator (with Deirdre Wilson) of "Relevance Theory," and coauthor (with Hugo Mercier) of The Enigma of Reason.
I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite—it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set one program, or some small equivalence class of programs, does better than all the others; that's the program we should aim for.
That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes."
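The idea of exhaustively finding the best program for a restricted machine and environment class can be made concrete with a toy sketch. This is my own construction for illustration, not Russell's formalism: the "machine" can only hold a lookup table over a single binary observation's two values, so its program space contains exactly four programs, and we can score every one against a small class of environments.

```python
import itertools

OBSERVATIONS = (0, 1)

# Each environment maps (observation, action) to a reward.
def env_match(obs, act):   # rewards copying the observation
    return 1.0 if act == obs else 0.0

def env_invert(obs, act):  # rewards inverting the observation
    return 1.0 if act != obs else 0.0

# The environment class the machine is intended to work in;
# "match"-type environments are more common here.
ENVIRONMENTS = [env_match, env_match, env_invert]

def score(program, environments):
    """Average reward of a program (a dict: observation -> action)."""
    total = 0.0
    for env in environments:
        for obs in OBSERVATIONS:
            total += env(obs, program[obs])
    return total / (len(environments) * len(OBSERVATIONS))

# Enumerate the machine's entire (finite) program space and keep the best:
# the bounded-optimal program for this machine and environment class.
programs = [dict(zip(OBSERVATIONS, acts))
            for acts in itertools.product((0, 1), repeat=len(OBSERVATIONS))]
best = max(programs, key=lambda p: score(p, ENVIRONMENTS))
print(best, score(best, ENVIRONMENTS))
```

Here the winner is the "copy the observation" program, because it performs best on average over the given environment class. The point of the restriction is exactly what the passage says: once the machine is finite, "best program" becomes a well-defined search problem rather than an unattainable ideal.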
STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach.
Coming very soon is going to be augmented reality technology, where you see not only the physical world but also virtual objects and entities perceived in the midst of it. We'll put on augmented reality glasses and we'll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that's John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.
At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?
DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University. He is also Distinguished Professor of Philosophy at the Australian National University.
This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they're deliberately vague. That deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in a world where we have to ask of some machine-learning model: is this racially fair? We have to define these terms computationally, numerically.
It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
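The criteria named above can be made computational in a few lines. This is a hedged sketch with invented data, showing one way to check the "equal false positive rate" and "equal false negative rate" readings of fairness across two groups:

```python
def rates(predictions, labels):
    """Return (false_positive_rate, false_negative_rate) for one group."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

# Invented predictions and true outcomes for two protected groups.
group_a_pred, group_a_true = [1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 1]
group_b_pred, group_b_true = [1, 1, 1, 0, 0, 0], [1, 0, 1, 0, 1, 1]

fpr_a, fnr_a = rates(group_a_pred, group_a_true)
fpr_b, fnr_b = rates(group_b_pred, group_b_true)

# "Equal false positive rate" demands fpr_a == fpr_b (within tolerance);
# "equal false negative rate" demands fnr_a == fnr_b. With this data the
# model fails both checks.
print(f"group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")
print(f"group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")
```

Results in the fairness literature (e.g., Kleinberg et al. and Chouldechova) show that several such criteria generally cannot all be satisfied at once when the groups' base rates differ, which is precisely why choosing among them is a tradeoff for civic deliberation rather than a purely technical decision.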
BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions.