TECHNOLOGY

Philip Pushes The Button

[8.29.17]

 

PHILIP BROCKMAN
Rocket Scientist
1937 – 2017


Research physicist Philip Brockman pushes the button to start NASA's MPD-arc plasma accelerator in December 1964.

"While the Hydrodynamics Division sank at Langley, a few new research fields bobbed to the surface to become potent forces in the intellectual life of the laboratory. Most notable of these was magnetoplasmadynamics (MPD)-a genuine product of the space age and an esoteric field of scientific research for an engineering-and applications-oriented place like Langley. If any "mad scientists" were working at Langley in the 1960s, they were the plasma physicists, nuclear fusion enthusiasts, and space-phenomena researchers found in the intense and, for a while, rather glamourous little group investigating MPD. No group of researchers in NASA moved farther away from classical aerodynamics or from the NACA's traditional focus on the problems of airplanes winging their way through the clouds than those involved with MPD." 

—James R. Hansen, from "The Mad Scientists of MPD", Ch. 5, in Spaceflight Revolution: NASA Langley Research Center From Sputnik to Apollo (NASA History Series)

Benevolent Artificial Anti-Natalism (BAAN)

[8.7.17]

Obviously, it is an ethical superintelligence not only in terms of sheer processing speed, but one that begins to arrive at qualitatively new results about what altruism really means. This becomes possible because it operates on a much larger psychological database than any single human brain or any scientific community can. Through an analysis of our behaviour and its empirical boundary conditions, it reveals implicit hierarchical relations between our moral values of which we are subjectively unaware, because they are not explicitly represented in our phenomenal self-model. Being the best analytical philosopher that has ever existed, it concludes that, given its current environment, it ought not to act as a maximizer of positive states and happiness, but that it should instead become an efficient minimizer of consciously experienced preference frustration, of pain, unpleasant feelings and suffering. Conceptually, it knows that no entity can suffer from its own non-existence.

The superintelligence concludes that non-existence is in the best interest of all future self-conscious beings on this planet. Empirically, it knows that naturally evolved biological creatures are unable to realize this fact because of their firmly anchored existence bias. The superintelligence decides to act benevolently.

THOMAS METZINGER is Professor of Theoretical Philosophy at Johannes Gutenberg-Universität Mainz and Adjunct Fellow at the Frankfurt Institute for Advanced Study. He is the author of The Ego Tunnel and editor of open-mind.net and predictive-mind.net. Thomas Metzinger's Edge Bio page

THE REALITY CLUB: Jennifer Jacquet, Nicholas Humphrey

The Threat

[5.8.17]

https://vimeo.com/214714656

Although a security failure may be due to someone using the wrong type of access control mechanism or a weak cypher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms, it’s simple and straightforward, but it’s often much more complicated when we start looking at how things actually fail in real life.
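
That incentive argument can be made concrete with a little arithmetic. Below is a minimal sketch (the breach-probability curve and all numbers are invented for illustration): Alice minimises only her own outlay, so she rationally spends nothing, while the socially optimal spend, which accounts for Bob's expected loss, is substantial.

```python
# Toy model of misaligned security incentives, in the spirit of the passage
# above: Alice chooses the security spend, but Bob bears the breach loss.
# The probability curve and all numbers are invented for illustration.

def breach_probability(spend: float) -> float:
    """More defensive spending means fewer breaches, with diminishing returns."""
    return 1.0 / (1.0 + spend)

LOSS = 100.0                                 # breach cost, paid by Bob
spends = [s / 10.0 for s in range(501)]      # candidate spends: 0.0 .. 50.0

# Alice pays only her own spend, so she rationally minimises it.
alice_spend = min(spends, key=lambda s: s)

# Society pays the spend plus the expected breach loss.
social_spend = min(spends, key=lambda s: s + breach_probability(s) * LOSS)

print(f"Alice's rational spend:  {alice_spend:.1f}")   # 0.0: she underinvests
print(f"Socially optimal spend:  {social_spend:.1f}")  # 9.0 for these numbers
```
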
People who are able to live digitally enhanced lives, in the sense that they can use all the available tools to the fullest extent, are very much more productive and capable and powerful than those who are still stuck in meatspace. It’s as if you had a forest where all the animals could see only in black and white and, suddenly, along comes a mutation in one of the predators allowing it to see in color. All of a sudden it gets to eat all the other animals, at least those who can’t see in color, and the other animals have no idea what’s going on. They have no idea why their camouflage doesn’t work anymore. They have no idea where the new threat is coming from. That’s the kind of change that happens once people get access to really powerful online services.
 
So long as it was the case that everybody who could be bothered to learn had access to AltaVista, or Google, or Facebook, or whatever, then that was okay. The problem we’re facing now is that more and more capable systems are no longer open to all. They’re open to the government, to big business, and to powerful advertising networks.
 
ROSS ANDERSON is professor of security engineering at Cambridge University, and one of the founders of the field of information security economics. He chairs the Foundation for Information Policy Research, is a fellow of the Royal Society and the Royal Academy of Engineering, and is a winner of the Lovelace Medal, the UK’s top award in computing. Ross Anderson's Edge Bio Page

The Mind Bleeds Into the World

[1.24.17]

https://vimeo.com/200182778

Coming very soon is going to be augmented reality technology, where you see not only the physical world, but also virtual objects and entities that you perceive in the middle of them. We’ll put on augmented reality glasses and we’ll have augmented entities out there. My face recognition is not so great, but my augmented glasses will tell me, "Ah, that’s John Brockman." A bit of AI inside my augmented reality glasses will recognize people for me.
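
To make the glasses scenario concrete: one plausible mechanism is to compare a face embedding computed from the camera frame against a small gallery of known faces and overlay the best match's name. The sketch below is purely illustrative; the gallery vectors, the similarity threshold, and the fake query embedding are all invented.

```python
import numpy as np

# Illustrative gallery of known faces: name -> embedding vector, as some
# hypothetical vision model inside the glasses might produce. All invented.
GALLERY = {
    "John Brockman": np.array([0.12, 0.80, 0.55]),
    "A. N. Other":   np.array([0.90, 0.10, 0.30]),
}

def recognize(query: np.ndarray, threshold: float = 0.9) -> str:
    """Return the gallery name whose embedding has the highest cosine
    similarity to the query, or "unknown" if nothing is close enough."""
    best_name, best_score = "unknown", threshold
    for name, emb in GALLERY.items():
        score = float(query @ emb) / (np.linalg.norm(query) * np.linalg.norm(emb))
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# A camera frame would yield an embedding; here we fake one close to the
# stored "John Brockman" vector.
print(recognize(np.array([0.11, 0.82, 0.50])))   # -> John Brockman
```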

At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on this. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?

DAVID CHALMERS is University Professor of Philosophy and Neural Science and Co-Director of the Center for Mind, Brain, and Consciousness at New York University. He is also Distinguished Professor of Philosophy at the Australian National University. David Chalmers's Edge Bio Page

REALITY CLUB CONVERSATION: Donald D. Hoffman, Sean Carroll, Steve Omohundro, Thomas Metzinger

Defining Intelligence

[2.7.17]

https://vimeo.com/200202591

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite—it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set, one program or some small equivalence class of programs does better than all the others; that’s the program that we should aim for.
 
That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes." 
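
A minimal sketch of that idea, under heavy simplification: model the finite machine as a finite set of candidate policies (the only "programs" it can run), model the environment class as reward tables, and select the policy with the best total score. Everything here is invented for illustration; it is not Russell's formalism.

```python
import itertools

# Bounded optimality, toy version: the machine can only run programs drawn
# from a finite set, and we pick the one that does best over a class of
# environments. All states, actions, and rewards are invented for illustration.

STATES = ["low", "high"]
ACTIONS = ["wait", "act"]

# A "program" is just a policy: a lookup table from state to action.
programs = [dict(zip(STATES, choice))
            for choice in itertools.product(ACTIONS, repeat=len(STATES))]

# The environment class: each environment maps (state, action) to a reward.
environments = [
    {("low", "wait"): 0, ("low", "act"): 1, ("high", "wait"): 2, ("high", "act"): 0},
    {("low", "wait"): 1, ("low", "act"): 0, ("high", "wait"): 0, ("high", "act"): 3},
]

def score(program: dict, env: dict) -> int:
    """Total reward the program collects over all states in one environment."""
    return sum(env[(state, program[state])] for state in STATES)

# The bounded-optimal choice: best total score over the environment class,
# taken over the *finite* set of programs the machine can actually run.
best = max(programs, key=lambda p: sum(score(p, e) for e in environments))
print(best)   # {'low': 'wait', 'high': 'act'} for these toy rewards
```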
 
STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach. Stuart Russell's Edge Bio Page

How Should a Society Be?

[12.1.16]

https://vimeo.com/190617534

This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model: is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit.
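
Those questions ("equal false positive rate? equal false negative rate?") can be stated in a few lines of code. A minimal sketch on made-up predictions, checking whether a model's error rates match across two groups:

```python
# Checking two of the fairness criteria mentioned above on made-up data:
# does the model have equal false positive / false negative rates per group?

records = [
    # (group, true_label, predicted_label), invented for illustration
    ("A", 0, 0), ("A", 0, 1), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 1),
]

def error_rates(group: str) -> tuple[float, float]:
    """False positive and false negative rates for one group."""
    preds_on_negatives = [p for g, y, p in records if g == group and y == 0]
    preds_on_positives = [p for g, y, p in records if g == group and y == 1]
    fpr = sum(preds_on_negatives) / len(preds_on_negatives)
    fnr = sum(1 - p for p in preds_on_positives) / len(preds_on_positives)
    return fpr, fnr

for group in ("A", "B"):
    fpr, fnr = error_rates(group)
    print(f"group {group}: FPR={fpr:.2f}  FNR={fnr:.2f}")

# Output: group A gets FPR=0.50, FNR=0.50; group B gets FPR=0.00, FNR=0.00.
# By either criterion this toy model treats the groups unequally, and known
# impossibility results mean equalising one rate can force tradeoffs elsewhere.
```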

BRIAN CHRISTIAN is the author of The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive, and coauthor (with Tom Griffiths) of Algorithms to Live By: The Computer Science of Human Decisions. Brian Christian's Edge Bio Page
