TECHNOLOGY

AI & The Future Of Civilization

[3.1.16]

https://vimeo.com/153702764

The question is, what makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying that when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute, that's what our civilization contributes. Execution of those goals is what we can increasingly automate; we've been automating it for thousands of years, and we will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, once we've got these great AIs and they're able to execute goals, how do we tell them what to do?...

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram's Edge Bio Page

THE REALITY CLUB: Nicholas Carr, Ed Regis

ED. NOTE: From an unsolicited email: "For me, watching the video in small bites gave me the same thrill as reading James Joyce's Ulysses. I looked at the screen and clapped aloud."

The Next Wave

[7.16.15]

This can't be the end of human evolution. We have to go someplace else.                                 

...It's quite remarkable. It's moved people off of personal computers. Microsoft's business, while it's a huge monopoly, has stopped growing. There was this platform change. I'm fascinated to see what the next platform is going to be. It's totally up in the air, and I think that some form of augmented reality is possible and real. Is it going to be a science-fiction utopia or a science-fiction nightmare? It's going to be a little bit of both.

JOHN MARKOFF is a Pulitzer Prize-winning journalist who covers science and technology for The New York Times. His most recent book is the forthcoming Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. John Markoff's Edge Bio Page


THE NEXT WAVE

I'm in an interesting place in my career, and it's an interesting time in Silicon Valley. I grew up in Silicon Valley, and I've been reporting on it since 1977; what has driven everything over that period is this Moore's Law acceleration. Over the last five years, another layer has been added to the Moore's Law discussion, with Kurzweil and people like him arguing that we're on the brink of self-aware machines. Just recently, Gates and Musk and Hawking have all been saying that this is an existential threat to humankind. I simply don't see it. If you begin to pick apart their argument, and the fundamental argument of Silicon Valley, it all rests on the exponential acceleration that comes out of the semiconductor industry. I suddenly discovered it was over.

Now, it may not be over forever, but it's clearly paused. All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn't just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That's a profound moment. 
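To make the arithmetic concrete, here is a minimal sketch in Python (the parameters are illustrative assumptions, not Markoff's data) of what Moore's-Law-style cost decline looks like, and what a plateau does to it:

    # Illustrative Moore's Law cost model: cost per transistor halves
    # roughly every two years until scaling stalls (made-up parameters).
    def cost_per_transistor(year, base_year=1977, base_cost=1.0,
                            halving_period=2.0, plateau_year=None):
        """Cost in arbitrary units; flat after plateau_year, if given."""
        effective = year if plateau_year is None else min(year, plateau_year)
        doublings = (effective - base_year) / halving_period
        return base_cost * 0.5 ** doublings

    for year in (1977, 1997, 2013, 2015):
        scaling = cost_per_transistor(year)                     # keeps falling
        stalled = cost_per_transistor(year, plateau_year=2013)  # flat after ~2013
        print(f"{year}: scaling {scaling:.2e} vs. stalled {stalled:.2e}")

Run over four decades, the halving compounds to a roughly million-fold cost drop; once the curve flattens, as Markoff says it has for transistor prices, that compounding simply stops.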

Existential Risk

[4.16.15]

https://vimeo.com/124955878

The reason I'm engaged in trying to lower existential risks has to do with the fact that I'm a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about—in the palette of actions that you have—what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn't make a significant difference in these areas.
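As a minimal illustration of that consequentialist calculus (the cause areas and numbers below are hypothetical, not Tallinn's), one can rank options by expected marginal impact: the probability that one's extra contribution changes the outcome, times the value of that change.

    # Hypothetical expected-marginal-impact comparison (illustrative only).
    # expected impact = P(your marginal effort changes the outcome) * value
    causes = {
        "crowded cause, small marginal effect":   (1e-6, 1e6),
        "neglected cause, large marginal effect": (1e-4, 1e8),
    }
    for name, (p_matters, value) in causes.items():
        print(f"{name}: expected impact = {p_matters * value:,.0f}")

On these made-up numbers the neglected cause wins by four orders of magnitude, which is the sense in which a well-funded area like climate can be a worse marginal bet for one individual than a neglected one like existential risk.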

JAAN TALLINN is a co-founder of The Centre for the Study of Existential Risk at University of Cambridge, UK as well as The Future of Life Institute in Cambridge, MA. He is also a founding engineer of Kazaa and Skype. Jaan Tallinn's Edge Bio Page

INTRODUCTION
by Max Tegmark

I find Jaan Tallinn remarkable in more ways than one. His rags-to-riches entrepreneur story is inspiring in its own right, starting behind the Iron Curtain and ending up connecting the world with Skype. How many times have you skyped? How many people do you know who created a new verb?

Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning. 

After only a few short years, Jaan’s impact is remarkable. He is a key supporter of a global network of non-profit existential risk organizations, including The Future of Humanity Institute, The Machine Intelligence Research Institute, The Global Catastrophic Risk Institute, The Centre for the Study of Existential Risk at the University of Cambridge, and The Future of Life Institute, the last two of which he co-founded.

I’ve had the pleasure to work with him on The Future of Life Institute from day one, and if you’ve heard of our recent conference, open letter and well-funded research program on keeping artificial intelligence beneficial, then I’d like to make clear that none of this would have happened if it weren’t for Jaan’s support. If you’re an intelligent life form reading this text millions of years from now and marvel at how life is flourishing, you may owe your existence to Jaan.

MAX TEGMARK is a Physicist, MIT; Researcher, Precision Cosmology; Founder, Future of Life Institute; Author, Our Mathematical Universe. Max Tegmark's Edge Bio Page


EXISTENTIAL RISK

I split my activity between various organizations. I don't have one big umbrella organization that I represent. I use various commercial organizations and investment companies, such as Metaplanet Holdings, which is my primary investment vehicle, to invest in various startups, including artificial intelligence companies. Then I have one nonprofit foundation called Solenum Foundation that I use to support various so-called existential risk organizations around the world.


Digital Reality

[1.23.15]

https://vimeo.com/117833793

...Today, you can send a design to a fab lab and you need ten different machines to turn the data into something. Twenty years from now, all of that will be in one machine that fits in your pocket. This is the sense in which it doesn't matter. You can do it today. How it works today isn't how it's going to work in the future, but you don't need to wait twenty years for it. Anybody can make almost anything almost anywhere.

...Finally, when I could own all these machines, I got that the Renaissance was when the liberal arts emerged—liberal for liberation, humanism, the trivium and the quadrivium—and those were a path to liberation, they were the means of expression. That's the moment when art diverged from artisans. And there were the illiberal arts that were for commercial gain. ... We've been living with this notion that making stuff is an illiberal art for commercial gain and it's not part of the means of expression. But, in fact, today, 3D printing, micromachining, and microcontroller programming are as expressive as painting paintings or writing sonnets, but they're not means of expression from the Renaissance. We can finally fix that boundary between art and artisans.

...I'm happy to take the blame for saying computer science is one of the worst things to happen to computers or to science because, unlike physics, it has arbitrarily segregated the notion that computing happens in an alien world.

NEIL GERSHENFELD is a Physicist and the Director of MIT's Center for Bits and Atoms. He is the author of FAB. Neil Gershenfeld's Edge Bio Page


DIGITAL REALITY

What interests me is how bits and atoms relate—the boundary between digital and physical. Scientifically, it's the most exciting thing I know. It has all sorts of implications that are widely covered almost exactly backwards. Playing it out, what I thought was hard technically is proving to be pretty easy. What I didn't think was hard was the implications for the world, so a bigger piece of what I do now is that. Let's start with digital.

Digital is everywhere; digital is everything. There's a lot of hubbub about what's the next MIT, what's the next Silicon Valley, and those were all the last war. Technology is leading to very different answers. To explain that, let's go back to the science underneath it and then look at what it leads to.

The Myth Of AI

[11.14.14]

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us.

...That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. ... That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing to the fortunes of whoever runs the computers. You're saying, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," and, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.



JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future? Jaron Lanier's Edge Bio Page

THE REALITY CLUB: George Church, Peter Diamandis, Lee Smolin, Rodney Brooks, Nathan Myhrvold, George Dyson, Pamela McCorduck, Sendhil Mullainathan, Steven Pinker, Neil Gershenfeld, D.A. Wallach, Michael Shermer, Stuart Kauffman, Kevin Kelly, Lawrence Krauss, Robert Provine, Stuart Russell, Kai Krause

INTRODUCTION

by John Brockman

This past weekend, during a trip to San Francisco, Jaron Lanier stopped by to talk to me for an Edge feature. He had something on his mind: news reports about comments by Elon Musk and Stephen Hawking, two of the most highly respected and distinguished members of the science and technology community, on the dangers of AI. ("Elon Musk, Stephen Hawking and fearing the machine" by Alan Wastler, CNBC 6.21.14). He then talked, uninterrupted, for an hour.

As Lanier was about to depart, John Markoff, the Pulitzer Prize-winning technology correspondent for The New York Times, arrived. Informed of the topic of the previous hour's conversation, he said, "I have a piece in the paper next week. Read it." A few days later, his article, "Fearing Bombs That Can Pick Whom to Kill" (11.12.14), appeared on the front page. It's one of a continuing series of articles by Markoff pointing to the darker side of the digital revolution.

This is hardly new territory. Cambridge cosmologist Martin Rees, the former Astronomer Royal and President of the Royal Society, addressed similar topics in his 2004 book, Our Final Hour: A Scientist's Warning, as did computer scientist Bill Joy, co-founder of Sun Microsystems, in his highly influential 2000 Wired article, "Why The Future Doesn't Need Us: Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species."

But these topics are back on the table again, and informing the conversation in part is Superintelligence: Paths, Dangers, Strategies, the recently published book by Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute. In his book, Bostrom asks questions such as "What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?"

I am encouraging, and hope to publish, a Reality Club conversation, with comments (up to 500 words) on, but not limited to, Lanier's piece. This is a very broad topic that involves many different scientific fields and I am sure the Edgies will have lots of interesting things to say. 

—JB

Related on Edge:

Jaron Lanier: "Digital Maoism: The Hazards of the New Online Collectivism" (2006); "One Half A Manifesto" (2000)
Kevin Kelly: "The Technium" (2014) 
George Dyson: "Turing's Cathedral" (2004) 


THE MYTH OF AI

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to take up a question it hadn't been asked to decide, and declared that corporations are people. That's a cover for making it easier for big money to have an influence in politics. But there's another angle to it, which I don't think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they're essentially algorithms.

If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do is to create a channel between people and algorithms. And the algorithms run on these big cloud computer facilities.

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.


The Technium

https://vimeo.com/84396480

KEVIN KELLY is Senior Maverick at Wired magazine. He helped launch Wired in 1993, and served as its Executive Editor until January 1999. He is currently editor and publisher of the popular Cool Tools, True Film, and Street Use websites. His most recent books are Cool Tools and What Technology Wants. Kevin Kelly's Edge Bio Page
