Videos by topic: TECHNOLOGY

Mining the Computational Universe

Stephen Wolfram
[5.30.19]

I've spent several decades creating a computational language that aims to give a precise symbolic representation for computational thinking, suitable for use by both humans and machines. I'm interested in figuring out what can happen when a substantial fraction of humans can communicate in computational language as well as human language. It's clear that the introduction of both human spoken language and human written language had important effects on the development of civilization. What will now happen (for both humans and AI) when computational language spreads?

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. He is also the author of A New Kind of Science. Stephen Wolfram's Edge Bio Page



How Technology Changes Our Concept of the Self

Peter Galison
[11.20.18]

The general project that I’m working on is about the self and technology—what we understand by the self and how it’s changed over time. My sense is that the self is not a universal and purely abstract thing that you’re going to get at through a philosophy of principles. Here’s an example: Sigmund Freud considered his notion of psychic censorship (of painful or forbidden thoughts) to be one of his greatest contributions to his account of who we are. His thoughts about these ideas came early, using as a model the specific techniques that Czarist border guards used to censor the importation of potentially dangerous texts into Russia. Later, Freud began to think of the censoring system in Vienna during World War I—techniques applied to every letter, postcard, telegram and newspaper—as a way of getting at what the mind does. Another example: Cyberneticians came to a different notion of self, accessible from the outside, identified with feedback systems—an account of the self that emerged from Norbert Wiener’s engineering work on weapons systems during World War II. Now I see a new notion of the self emerging; we start by modeling artificial intelligence on a conception of who we are, and then begin seeing ourselves ever more in our encounter with AI.

PETER GALISON is the Joseph Pellegrino University Professor of the History of Science and of Physics at Harvard University and Director of the Collection of Historical Scientific Instruments. Peter Galison's Edge Bio Page

 


 

Collective Awareness

J. Doyne Farmer, Don Ross
[10.3.18]

Economic failures cause us serious problems. We need to build simulations of the economy at a much more fine-grained level, simulations that take advantage of all the data that computer technologies and the Internet now provide. We need new technologies of economic prediction built on the tools we have in the 21st century.

Places like the US Federal Reserve make predictions using a system that has been developed over the last eighty years or so. This line of effort goes back to the middle of the 20th century, when people realized that we needed to keep track of the economy. They began to gather data and set up procedures for having firms fill out surveys, for having the census collect data, for gathering and processing a lot of data on economic activity. This system is called "national accounting," and it produces numbers like GDP, unemployment, and so on. The numbers arrive on a very slow timescale: some come out once a quarter, some come out once a year. They are typically lagged, because it takes a lot of time to process the data, and they are often revised as much as a year or two later. That system was built to work in tandem with models that likewise process very aggregated, high-level summaries of what the economy is doing. The data is old-fashioned and the models are old-fashioned.

It's a 20th-century technology that's been refined in the 21st century. It's very useful, and it represents a high level of achievement, but it is now outdated. The Internet and computers have changed things. With the Internet, we can gather rich, detailed data about what the economy is doing at the level of individuals. We don't have to rely on surveys; we can just grab the data. Furthermore, with modern computer technology we could simulate what 300 million agents are doing, simulating the economy at the level of individuals. We can simulate what every company and every bank in the United States is doing. Such a model could be much, much better than what we have now. This is an achievable goal.
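
To make the contrast concrete, here is a minimal Python sketch of what simulating the economy at the level of individuals could look like: each household is an explicit agent, and an aggregate such as total consumption is computed bottom-up by summing individual behavior instead of waiting for quarterly survey data. The income distribution and the spending rule are invented for illustration; this is not Farmer's model.

    import random

    class Household:
        def __init__(self):
            self.income = random.lognormvariate(10, 0.5)  # toy annual income
            self.savings = 0.0

        def spend(self):
            consumption = 0.8 * self.income               # toy spending rule
            self.savings += self.income - consumption
            return consumption

    def simulate(n_agents=100_000, n_years=5):
        agents = [Household() for _ in range(n_agents)]
        for year in range(n_years):
            # The aggregate emerges from individual behavior, bottom-up.
            total = sum(a.spend() for a in agents)
            print(f"year {year}: simulated consumption = {total:,.0f}")

    simulate()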

But we're not doing that, or anything close to it. We could achieve what I just said with a technological system that's simpler than Google search. But we're not doing it. We need to. We need to start creating a new technology for economic prediction that runs side by side with the old one, one that makes its predictions in a very different way. This could give us a lot more guidance about where we're going and help keep the economic shit from hitting the fan as often as it does.

J. DOYNE FARMER is director of the Complexity Economics programme at the Institute for New Economic Thinking at the Oxford Martin School, professor in the Mathematical Institute at the University of Oxford, and an external professor at the Santa Fe Institute. He was a co-founder of Prediction Company, a quantitative automated trading firm that was sold to UBS in 2006. J. Doyne Farmer's Edge Bio Page



The Space of Possible Minds

Murray Shanahan
[5.18.18]

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans but can occur independently, or in some subset, in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world, one of which we very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind. Murray Shanahan's Edge Bio Page



We Are Here To Create

Kai-Fu Lee
[3.26.18]

My original dream of finding who we are and why we exist ended in failure. Even though we invented all these wonderful tools that will be great for our future, for our kids, for our society, we have not figured out why humans exist. What is interesting to me is that in understanding that these AI tools are doing repetitive tasks, it certainly comes back to tell us that doing repetitive tasks can't be what makes us human. The arrival of AI will at least remove what cannot be our reason for existence on this earth. If that's half of our job tasks, then that's half of our time back to thinking about why we exist. One very valid reason for existing is that we are here to create. What AI cannot do is perhaps a potential reason for why we exist. One such direction is that we create. We invent things. We celebrate creation. We're very creative about the scientific process, about curing diseases, about writing books and movies, about telling stories, about doing a brilliant job in marketing. This is the creativity we should celebrate, and that's perhaps what makes us human.

KAI-FU LEE, the founder of the Beijing-based Sinovation Ventures, is ranked #1 in technology in China by Forbes. Educated as a computer scientist at Columbia and Carnegie Mellon, his distinguished career includes working as a research scientist at Apple; Vice President of the Web Products Division at Silicon Graphics; Corporate Vice President at Microsoft and founder of Microsoft Research Asia in Beijing, one of the world's top research labs; and then Google Corporate President and President of Google Greater China. As an internet celebrity, he has more than fifty million followers on the Chinese micro-blogging website Weibo. As an author, among his seven bestsellers in the Chinese language, two have sold more than one million copies each. His first book in English is AI Superpowers: China, Silicon Valley, and the New World Order (forthcoming, September). Kai-Fu Lee's Edge Bio page



The Human Strategy

Alex "Sandy" Pentland
[10.30.17]

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?

That begins to sound like a society or a company. We all live in a human social network. We're reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what's the right way to do that? Is it a safe idea? Is it completely crazy?
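
As a toy rendering of that general framework, here is a minimal sketch in which the "neurons" are people: each person carries an influence weight on a group decision, and a simple credit assignment rule reinforces those whose suggestions helped and gently discourages the rest. The task, the noise levels, and the update constants are all invented for illustration.

    import random

    people = ["ann", "bob", "cho", "dev"]
    noise = {"ann": 1, "bob": 5, "cho": 10, "dev": 20}  # how reliable each is
    weight = {p: 1.0 for p in people}                   # each person's influence
    TARGET = 42.0                                       # the unknown right answer

    def group_decision(suggestions):
        # Weighted average of everyone's suggestion.
        total = sum(weight.values())
        return sum(weight[p] * suggestions[p] for p in people) / total

    for _ in range(200):
        suggestions = {p: random.gauss(TARGET, noise[p]) for p in people}
        decision = group_decision(suggestions)
        for p in people:
            # Credit assignment: reinforce anyone who was closer to the
            # target than the group was; discourage everyone else.
            helped = abs(suggestions[p] - TARGET) < abs(decision - TARGET)
            weight[p] *= 1.05 if helped else 0.97

    # Reliable contributors end up with the most influence.
    print(sorted(weight.items(), key=lambda kv: -kv[1]))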

ALEX "SANDY" PENTLAND is a professor at MIT, and director of the MIT Connection Science and Human Dynamics labs. He is a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General. He is the author of Social Physics and Honest Signals. Sandy Pentland's Edge Bio page



Reality is an Activity of the Most August Imagination

Tim O'Reilly
[10.2.17]

Wallace Stevens had an immense insight into the way that we write the world. We don't just read it, we don't just see it, we don't just take it in. In "An Ordinary Evening in New Haven," he talks about the dialogue between what he calls the Naked Alpha and the hierophant Omega, the beginning, the raw stuff of reality, and what we make of it. He also said reality is an activity of the most august imagination.

Our job is to imagine a better future, because if we can imagine it, we can create it. But it starts with that imagination. The future that we can imagine shouldn't be a dystopian vision of robots that are wiping us out, of climate change that is going to destroy our society. It should be a vision of how we will rise to the challenges that we face in the next century, of how we will build an enduring civilization and a world that is better for our children and grandchildren and great-grandchildren. That we will become one of those long-lasting species rather than a flash in the pan that wipes itself out because of its lack of foresight.

We are at a critical moment in human history. In the small, we are at a critical moment in our economy, where we have to make it work better for everyone, not just for a select few. But in the large, we have to make it better in the way that we deal with long-term challenges and long-term problems.

TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., and the author of WTF?: What’s the Future and Why It’s Up to Us. Tim O'Reilly's Edge Bio page


 

The Threat

Ross Anderson
[5.8.17]

Although a security failure may be due to someone using the wrong type of access control mechanism or a weak cypher, the underlying reason for that is very often one of incentives. Fundamentally, the problem is that when Alice guards a system and Bob pays the cost of failure, things break. Put in those terms, it's simple and straightforward, but it's often much more complicated when we start looking at how things actually fail in real life.
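
A toy calculation, with invented numbers, makes the incentive failure explicit: Alice chooses how much to spend guarding the system, but the loss from a breach falls on Bob, so the effort level that is cheapest for Alice is far below the one that minimizes total cost.

    BREACH_COST = 100_000      # Bob's loss if the system is breached

    def breach_probability(effort):
        # Assumed toy model: more guarding effort means fewer breaches.
        return 1.0 / (1.0 + effort / 1_000)

    for effort in [0, 1_000, 5_000, 20_000]:
        alice_cost = effort                                  # Alice pays this
        bob_cost = breach_probability(effort) * BREACH_COST  # Bob pays this
        print(f"effort={effort:>6}  Alice pays {alice_cost:>6}  "
              f"Bob expects to lose {bob_cost:>7.0f}  "
              f"total {alice_cost + bob_cost:>7.0f}")

    # Alice minimizes her own cost by spending nothing, even though total
    # cost is lowest at a much higher effort level: so things break.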

ROSS ANDERSON is a professor of security engineering at Cambridge University, and one of the founders of the field of information security economics. He chairs the Foundation for Information Policy Research, and is a fellow of the Royal Society and the Royal Academy of Engineering. Ross Anderson's Edge Bio Page



Closing the Loop

Chris Anderson
[3.7.17]

Closing the loop is a phrase used in robotics. Open-loop systems are when you take an action and you can't measure the results; there's no feedback. Closed-loop systems are when you take an action, measure the results, and change your action accordingly. Systems with closed loops have feedback; they self-adjust and quickly stabilize at optimal conditions. Systems with open loops overshoot and miss the target entirely.
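
A minimal sketch of the difference, using a made-up thermostat example: the open-loop version applies a fixed heating schedule with no measurement and sails past the setpoint, while the closed-loop version measures the error at every step and acts on it (here, a simple proportional controller).

    SETPOINT = 21.0

    def open_loop(temp, steps=20):
        # No measurement: apply the same fixed action every step and hope.
        for _ in range(steps):
            temp += 1.5
        return temp                       # typically overshoots the setpoint

    def closed_loop(temp, steps=20, gain=0.5):
        # Feedback: measure the error each step and adjust accordingly.
        for _ in range(steps):
            error = SETPOINT - temp       # measure the result
            temp += gain * error          # change the action in response
        return temp                       # converges toward the setpoint

    print("open loop:  ", open_loop(10.0))    # 10 + 20*1.5 = 40.0, far past 21
    print("closed loop:", closed_loop(10.0))  # ~21.0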

CHRIS ANDERSON is the CEO of 3D Robotics and founder of DIY Drones. He is the former editor-in-chief of Wired magazine. Chris Anderson's Edge Bio Page

 



Defining Intelligence

Stuart Russell
[2.7.17]

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was this idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite: it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set one program, or some small equivalence class of programs, does better than all the others; that's the program we should aim for.

That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, "Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes." 
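
As a deliberately tiny illustration of that recipe, the sketch below fixes a "machine" that can run only four possible programs (lookup tables mapping one observation bit to one action bit), invents an environment, and searches the entire finite program set for the one that scores best; the winner is the bounded-optimal program for that machine/environment pair. The environment and the reward are assumptions made up for this example.

    import itertools

    OBSERVATIONS = [0, 1]
    ACTIONS = [0, 1]

    def reward(observation, action):
        # Assumed toy environment: the right move is to echo the observation.
        return 1.0 if action == observation else 0.0

    best_program, best_score = None, float("-inf")
    # Every program the machine can run: one action per possible observation.
    for program in itertools.product(ACTIONS, repeat=len(OBSERVATIONS)):
        score = sum(reward(obs, program[obs]) for obs in OBSERVATIONS)
        if score > best_score:
            best_program, best_score = program, score

    # The bounded-optimal program for this machine/environment pair.
    print(best_program, best_score)   # (0, 1) 2.0, i.e. echo the observation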

STUART RUSSELL is a professor of computer science at UC Berkeley and coauthor (with Peter Norvig) of Artificial Intelligence: A Modern Approach. Stuart Russell's Edge Bio Page

 


