Centuries ago, some philosophers began to see the human mind as a mechanism, a notion that (unlike the mechanist interpretation of the universe) remains hotly contested to this day. With the formalization of computation, the mechanist perspective received a new theoretical foundation: the notion of the mind as an information-processing machine provided an epistemology and methods to understand the nature of our mind by recreating it. Sixty years ago, some of the pioneers of the new computational concepts got together and created Artificial Intelligence (AI) as a new discipline to study the mind.
AI has probably been the most productive technological paradigm of the information age, but despite an impressive string of initial successes, it failed to deliver on its promise. It turned into an engineering field, creating useful abstractions and narrowly focused applications. Today, this seems to have changed again. Better hardware, novel learning and representation paradigms inspired by neuroscience, and incremental progress within AI itself have led to a slew of landmark successes. Breakthroughs in image recognition, data analysis, autonomous learning and the construction of scalable systems have led to applications that seemed impossible a decade ago. With renewed support from private and public funding, AI researchers now turn towards systems that display imagination, creativity, intrinsic motivation, and might acquire language skills and knowledge in ways similar to humans. The discipline of AI seems to have come full circle.
The new generation of AI systems is still far from being able to replicate the generality of human intelligence, and in my view, it is hard to guess how long that is going to take. But it seems increasingly clear that there is no fundamental barrier on the way to human-like intelligent systems. We have started to pry the mind apart into a set of puzzle blocks, and each block looks eminently solvable. But if we put all these blocks together into a comprehensive, working model, we won't just end up with human-like intelligence.
Unlike biological systems, technology scales. The speed of the fastest birds did not turn out to be a limit to airplanes, and artificial minds will be faster, more accurate, more alert, more aware and more comprehensive than their human counterparts. AI is going to replace human decision makers, administrators, inventors, engineers, scientists, military strategists, designers, advertisers and of course AI programmers. At this point, Artificial Intelligences can become self-perfecting, and radically outperform human minds in every respect. I do not think that this is going to happen in an instant (if it did, the only thing that would matter is who builds the first one). Before we have generally intelligent, self-perfecting AI, we will see many variants of task-specific, non-general AI, to which we can adapt. Obviously, that is already happening.
When generally intelligent machines become feasible, implementing them will be relatively cheap, and every large corporation, every government and every large organisation will find itself forced to build and use them, or be threatened with extinction.
What will happen when AIs take on a mind of their own?
Intelligence is a toolbox to reach a given goal, but strictly speaking, it does not entail motives and goals by itself. Human desires for self-preservation, power and experience are not the result of human intelligence, but of our primate evolution, transported into an age of stimulus amplification, mass-interaction, symbolic gratification and narrative overload. The motives of our artificial minds are (at least initially) going to be those of the organisations, corporations, groups and individuals that make use of their intelligence. If the business model of a company is not benevolent, then AI has the potential to make that company truly dangerous. Likewise, if an organisation aims at improving the human condition, then AI might make that organisation more efficient in realizing its benevolent potential.
The motivation of our Artificial Intelligences will stem from the existing building blocks of our society; every society will get the AI it deserves.
Our current societies are not well-designed in this regard. Our modes of production are unsustainable, our resource allocation wasteful, and our administrative institutions are ill-suited to address these problems. Our civilization is an aggressively growing entropy pump that destroys more at its borders than it creates at its center.
AI can make these destructive tendencies more efficient, and thus more disastrous, but it could equally well help us to solve the existential challenges of our civilization. I think that building benevolent AI is closely connected to the task of building a society that supplies the right motivations to its building blocks. The advent of the new age of thinking machines may force us to fundamentally rethink our institutions of governance, allocation and production.