TECHNOLOGY

Engines of Evidence

[10.24.16]

A new kind of thinking came about in the early '80s, when we changed from rule-based systems to Bayesian networks. Bayesian networks are probabilistic reasoning systems. An expert will put in his or her perception of the domain. A domain can be a disease, or an oil field—the same targets that we had for expert systems.

The idea was to model the domain rather than the procedures that were applied to it. In other words, you would put in local chunks of probabilistic knowledge about a disease and its various manifestations and, if you observe some evidence, the computer will take those chunks, activate them when needed and compute for you the revised probabilities warranted by the new evidence.

It's an engine for evidence. It is fed a probabilistic description of the domain and, when new evidence arrives, the system just shuffles things around and gives you your revised belief in all the propositions, revised to reflect the new evidence.         
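The belief revision Pearl describes can be sketched in a few lines. The numbers below are illustrative, not from the article: a rare disease, a symptom that is much more likely when the disease is present, and Bayes' rule acting as the "engine" that turns new evidence into a revised belief.

```python
# A minimal two-node Bayesian network: Disease -> Symptom.
# All probabilities here are made-up, for illustration only.

def posterior(prior_d, p_s_given_d, p_s_given_not_d):
    """Revised belief in the disease after observing the symptom (Bayes' rule)."""
    joint_d = prior_d * p_s_given_d                 # P(D, S)
    joint_not_d = (1 - prior_d) * p_s_given_not_d   # P(not-D, S)
    return joint_d / (joint_d + joint_not_d)        # P(D | S)

# Local chunks of probabilistic knowledge an expert might supply:
# 1% prior, symptom appears 90% of the time with the disease, 5% without.
p = posterior(prior_d=0.01, p_s_given_d=0.9, p_s_given_not_d=0.05)
print(round(p, 3))  # belief revised from 1% to roughly 15%
```

The point of the network formulation is that this same local computation is activated chunk by chunk as evidence arrives, rather than being hand-coded as procedures.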

JUDEA PEARL, professor of computer science at UCLA, has been at the center of not one but two scientific revolutions. First, in the 1980s, he introduced a new tool to artificial intelligence called Bayesian networks. This probability-based model of machine reasoning enabled machines to function in a complex, ambiguous, and uncertain world. Within a few years, Bayesian networks completely overshadowed the previous rule-based approaches to artificial intelligence.

Leveraging the computational benefits of Bayesian networks, Pearl realized that the combination of simple graphical models and probability (as in Bayesian networks) could also be used to reason about cause-effect relationships. The significance of this discovery far transcends its roots in artificial intelligence. His principled, mathematical approach to causality has already benefited virtually every field of science and social science, and promises to do more when popularized. 
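Pearl's causal machinery goes beyond belief updating: given a graph, it can compute what would happen under an intervention. A minimal sketch, with hypothetical numbers and a single confounder Z, of the back-door adjustment formula P(Y | do(X=x)) = sum over z of P(Y | X=x, Z=z) * P(Z=z):

```python
# Toy back-door adjustment (hypothetical numbers, not from the article).
# A confounder Z influences both treatment X and outcome Y, so the
# observational P(Y | X) is biased; adjusting for Z removes that bias.

p_z = {0: 0.7, 1: 0.3}          # P(Z)
p_y_given_xz = {                # P(Y=1 | X=x, Z=z)
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.70,
}

def p_y_do_x(x):
    """Interventional probability P(Y=1 | do(X=x)) via back-door adjustment."""
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

effect = p_y_do_x(1) - p_y_do_x(0)  # average causal effect of X on Y
print(round(effect, 2))
```

The graph tells you *which* variables to adjust for; the arithmetic itself is ordinary probability, which is what makes the approach portable across the sciences.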

He is the author of Heuristics; Probabilistic Reasoning in Intelligent Systems; and Causality: Models, Reasoning, and Inference. He is a recipient of the ACM A.M. Turing Award. Judea Pearl's Edge Bio Page

Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?

[5.4.16]

What we need to do in artificial intelligence is turn back to psychology. Brute force is great; we're using it in a lot of ways, like speech recognition, license plate recognition, and for categorization, but there are still some things that people do a lot better. We should be studying human beings to understand how they do it better.

People are still much better at understanding sentences, paragraphs, books, and discourse where there's connected prose. It's one thing to do a keyword search. You can find any sentence you want that's out there on the web by just having the right keywords, but if you want a system that could summarize an article for you in a way that you trust, we're nowhere near that. The closest thing we have to that might be Google Translate, which can translate your news story into another language, but not at a level that you trust. Again, trust is a big part of it. You would never put a legal document into Google Translate and think that the answer is correct.

GARY MARCUS is CEO and founder, Geometric Intelligence; professor of psychology, New York University; author, Guitar Zero: The New Musician and the Science of Learning. Gary Marcus's Edge Bio Page


AI & The Future Of Civilization

[3.1.16]


What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That's a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn't have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That's what humans contribute, that's what our civilization contributes—execution of those goals; that's what we can increasingly automate. We've been automating it for thousands of years. We will succeed in having very good automation of those goals. I've spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, we've got these great AIs and they're able to execute goals, how do we tell them what to do?...

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram's Edge Bio Page

THE REALITY CLUB: Nicholas Carr, Ed Regis

ED. NOTE: From an unsolicited email: "For me, watching the video in small bites gave me the same thrill as reading JJ's Ulysses. I looked at the screen and clapped aloud."

The Next Wave

[7.16.15]

This can't be the end of human evolution. We have to go someplace else.                                 

It's quite remarkable. It's moved people off of personal computers. Microsoft's business, while it's a huge monopoly, has stopped growing. There was this platform change. I'm fascinated to see what the next platform is going to be. It's totally up in the air, and I think that some form of augmented reality is possible and real. Is it going to be a science-fiction utopia or a science-fiction nightmare? It's going to be a little bit of both.                              

JOHN MARKOFF is a Pulitzer Prize-winning journalist who covers science and technology for The New York Times. His most recent book is the forthcoming Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. John Markoff's Edge Bio Page


THE NEXT WAVE

I'm in an interesting place in my career, and it's an interesting time in Silicon Valley. I grew up in Silicon Valley, and what I've been reporting on since 1977 is this Moore's Law acceleration. Over the last five years, another layer has been added to the Moore's Law discussion, with Kurzweil and people like him arguing that we're on the brink of self-aware machines. Just recently, Gates and Musk and Hawking have all been saying that this is an existential threat to humankind. I simply don't see it. If you begin to pick apart their argument, the fundamental argument of Silicon Valley, it's all about this exponential acceleration that comes out of the semiconductor industry. And I suddenly discovered it was over.

Now, it may not be over forever, but it's clearly paused. All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn't just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That's a profound moment. 

Existential Risk

[4.16.15]

The reason I'm engaged in trying to lower existential risks has to do with the fact that I'm a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about—in the palette of actions that you have—what are the things you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn't make a significant difference in those areas.

JAAN TALLINN is a co-founder of The Centre for the Study of Existential Risk at University of Cambridge, UK as well as The Future of Life Institute in Cambridge, MA. He is also a founding engineer of Kazaa and Skype. Jaan Tallinn's Edge Bio Page

INTRODUCTION
by Max Tegmark

I find Jaan Tallinn remarkable in more ways than one. His rags-to-riches entrepreneur story is inspiring in its own right, starting behind the Iron Curtain and ending up connecting the world with Skype. How many times have you skyped? How many people do you know who created a new verb?

Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning. 

After only a few short years, Jaan's impact is already remarkable. He is a key supporter of a global network of non-profit existential risk organizations including The Future of Humanity Institute, The Machine Intelligence Research Institute, The Global Catastrophic Risk Institute, The Centre for the Study of Existential Risk at University of Cambridge, and The Future of Life Institute, the last two of which he co-founded.

I’ve had the pleasure to work with him on The Future of Life Institute from day one, and if you’ve heard of our recent conference, open letter and well-funded research program on keeping artificial intelligence beneficial, then I’d like to make clear that none of this would have happened if it weren’t for Jaan’s support. If you’re an intelligent life form reading this text millions of years from now and marvel at how life is flourishing, you may owe your existence to Jaan.

MAX TEGMARK is a Physicist, MIT; Researcher, Precision Cosmology; Founder, Future of Life Institute; Author, Our Mathematical Universe. Max Tegmark's Edge Bio Page


EXISTENTIAL RISK

I split my activity between various organizations. I don't have one big umbrella organization that I represent. I use various commercial organizations and investment companies such as Metaplanet Holdings, which is my primary investment vehicle, to invest in various startups, including artificial intelligence companies. Then I have one nonprofit foundation, the Solenum Foundation, that I use to support various so-called existential risk organizations around the world.


Digital Reality

https://vimeo.com/117833793

...Today, you can send a design to a fab lab and you need ten different machines to turn the data into something. Twenty years from now, all of that will be in one machine that fits in your pocket. This is the sense in which it doesn't matter. You can do it today. How it works today isn't how it's going to work in the future but you don't need to wait twenty years for it. Anybody can make almost anything almost anywhere.              
