W. Daniel Hillis
Physicist, Computer Scientist, Co-Founder, Applied Invention; Author, The Pattern on the Stone
I Think, Therefore AI

Machines that think will think for themselves. It is in the nature of intelligence to grow, to expand like knowledge itself.

Like us, the thinking machines we make will be ambitious, hungry for power—both physical and computational—but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We have been building ambitious semi-autonomous constructions for a long time—governments, corporations, NGOs. We designed them all to serve us and to serve the common good, but we are not perfect designers, and they have developed goals of their own. Over time, the goals of an organization are never exactly aligned with the intentions of its designers.

No intelligent CEO believes his or her corporation efficiently optimizes the benefit of its shareholders. Nor do governments work relentlessly in the interests of their citizens. Democracies serve corporations more effectively than they serve individuals. Still, our organizations do continue to serve us; they just do so imperfectly. Without them, we literally could not feed ourselves, at least not all 7 billion of us. Nor could we build a computer, or conduct a worldwide discussion about intelligent machines. We have come to depend on the power of the organizations that we have constructed, even though they have grown beyond our capacity to fully understand and control. Thinking machines are going to be like that, only more so. Our environmental, social, and economic problems are as daunting as the concept of extinction. Our thinking machines are more than metaphors. The question is not whether they will be powerful enough to hurt us (they will), or whether they will always act in our best interests (they won’t), but whether over the long term they can help us find our way—where we come out on the panacea/apocalypse continuum.

I’m talking about smart machines that will design even smarter machines: the most important design problem in all of time. Like our biological children, our thinking machines will live beyond us. They need to surpass us too, and that requires designing into them the values that make us human. It is a hard design problem, and it is important that we get it right.