2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Catalyst, Information Technology Startups, EDventure Holdings; Former Chairman, Electronic Frontier Foundation and ICANN; Author: Release 2.1
Thinking Aloud About Thinking Machines—Chemistry Vs. Electronics
I'm thinking about the difference between artificial intelligence and artificial life. AI is smart and complicated and generally predictable by another computer (at some sufficient level of generality, even if you allow for randomness). Artificial life is unpredictable and complex; it makes unpredictable mistakes, most of them mere errors, but some showing flashes of genius or stunning luck.

The real question is what you get when you combine the two... awesome brute intelligence, memory, and resistance to fatigue—plus the genius and the drive to live that somehow causes the intelligence to jump circuits with unpredictable results. Will we need to feed our machines the electronic equivalent of psychoactive drugs and the body's own hormones and chemicals to produce leaps of creative insight (as opposed to mere brilliance)?

If you are alive, you must face the possibility of being dead. But if you are AI/AL in a machine, perhaps not.

What would an immortal, singularity-level intelligence be like? If it were somehow kind and altruistic, how could we let humanity stand in its way? Let's just cede the planet to it politely and prepare to live in a pleasant zoo tended by the AI/AL, since someday it will figure out how to cover the entire solar system and use the sun for fuel anyway.

So much of what defines us is constraints... most notably, death. Being alive implies the possibility of death. (And abundance, it turns out, is leading us to counterproductive behavior—such as too much food and short-term pleasure on the one hand, and too little physical activity on the other.)

But if it were immortal, why should it have any instinct to altruism, to sharing... or even to reproducing, as opposed to simply growing? Why would it expend its own limited resources on sustaining others—except in carefully thought-out rational transactions? What will happen when it no longer needs us? What would motivate it?

If it could live forever, would it be lazy, thinking it could always do things later on? Or instead, would it be paralyzed by fear of regret? Whatever mistakes it makes, it will live with them forever. What is regret for a potentially immortal being, with eternity to put things right?