It's Not About The Human Species: It's About Civilizations

What should machines that think actually do? Analyze data, understand feelings, generate new machines, make decisions without human intervention. To think about machines that think, we should start from experience. Here is an example.

On Monday, October 19, 1987, a wave of selling originated in Hong Kong, crossed Europe, and hit New York, causing the Dow Jones to drop by 22 percent. Black Monday was one of the biggest crashes in the history of financial markets, and there was something special about it: for the first time, according to most experts, computers were to blame for a financial crash. Algorithms were deciding when and how much to buy and sell in the stock exchange. Computers were supposed to help traders minimize risk, but they all moved in the same direction, amplifying risk instead. There was a lot of discussion about stopping automated trading, but it didn't happen.
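The mechanism is worth making concrete. Below is a minimal sketch of the feedback loop, a toy model rather than a reconstruction of the actual 1987 systems: every trader runs the same kind of stop-loss rule, each sale pushes the price down, and the falling price trips the next trader's threshold. All numbers (trader count, thresholds, price impact) are invented for illustration.

```python
import random

# Toy model of herding among identical safeguards: each trader's rule is
# "sell once the price falls more than my threshold below its peak".
# Individually sensible; collectively, every sale deepens the drawdown
# and triggers more sales, so the safeguards feed on each other.

random.seed(1987)
N = 1_000                                    # hypothetical trader count
thresholds = [random.uniform(0.02, 0.10) for _ in range(N)]  # sell triggers
impact = 0.0003                              # assumed price impact per sale
price = peak = 100.0
holding = [True] * N

price *= 0.975                               # a modest external shock
step = 0
while True:
    drawdown = 1.0 - price / peak
    sellers = [i for i in range(N) if holding[i] and thresholds[i] < drawdown]
    if not sellers:                          # nobody's threshold is tripped
        break
    for i in sellers:
        holding[i] = False
    price *= (1.0 - impact) ** len(sellers)  # each sale pushes the price down
    step += 1
    print(f"round {step}: {len(sellers):4d} sellers, price {price:6.2f}")
```

Run as written, a 2.5 percent shock cascades into a fall of roughly a quarter of the index's value in three rounds: each round's selling trips thresholds that the original shock never touched.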

On the contrary: since the dot-com crisis of March 2000, machines have been used more and more to make sophisticated decisions in financial markets. Machines now calculate all kinds of correlations across incredible amounts of data: they analyze the emotions people express on the Internet by interpreting the meaning of their words, they recognize patterns and forecast behaviors, they are allowed to choose trades autonomously, and they create new machines (complex financial instruments called "derivatives") that no reasonable human being could possibly understand.
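To make the first of those capabilities tangible, here is a deliberately naive sketch of a sentiment-to-signal pipeline. Real systems are far more sophisticated; the word lists, posts, and threshold below are invented for illustration, and the point is only the shape of the pipeline: score text, aggregate, trade.

```python
# Crude lexicon-based sentiment scoring feeding a trading signal.
# Everything here is a toy: hand-made word lists, made-up posts.

POSITIVE = {"growth", "profit", "strong", "rally", "confidence"}
NEGATIVE = {"fear", "loss", "weak", "crash", "panic"}

def sentiment(text: str) -> int:
    """Score one text: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def signal(posts: list[str], threshold: int = 2) -> str:
    """Aggregate scores over many posts into a buy/sell/hold decision."""
    total = sum(sentiment(p) for p in posts)
    if total >= threshold:
        return "BUY"
    if total <= -threshold:
        return "SELL"
    return "HOLD"

posts = [
    "Strong earnings, confidence in continued growth",
    "Analysts see a rally despite yesterday's weak open",
    "Fear of loss spreads as panic selling continues",
]
print(signal(posts))  # every firm running the same rule moves the same way
```

The closing comment is the essay's point in miniature: when many actors deploy the same rule over the same data, their "independent" decisions are anything but independent.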

An artificial intelligence coordinates the efforts of a sort of collective intelligence, operating thousands of times faster than human brains, with many consequences for human life. The first signs of the latest crisis appeared in America in August 2007, and it went on to have terrible consequences for the lives of people in Europe and elsewhere. Real people suffered immensely because of those decisions. Andrew Ross Sorkin, in his book Too Big to Fail, shows how even the most powerful bankers were powerless in the midst of the crisis. No human brain seemed able to control events or change their course to prevent the crash that was coming.

Can we use this example to learn how to think about machines that think?

These machines are in fact largely autonomous in understanding their context and making decisions, and they control vast dimensions of human life. Is this the beginning of a post-human era? No: these machines are very much human. They are made by designers, programmers, mathematicians, some economists, and some managers. But are they just another tool, to be used for good or for ill by humans? No: in fact those people have little choice; they build those machines without thinking about the consequences, because they are serving a narrative. Those machines are shaped by a narrative that very few people challenge.

According to that narrative, the market is the best way to allocate resources, no political decision can possibly improve the situation, risk can be controlled while profits grow without limit, and banks should be allowed to do whatever they want. There is only one goal and one measure of success: profit.

Machines didn't invent the financial crisis, as the 1929 stock market crash reminds us. And without machines, nobody could deal with the complexity of modern financial markets. The best artificial intelligences are those built with the biggest investments and by the best minds. They are not controlled by any one individual and not designed by any one responsible person: they are shaped by the narrative, and they make the narrative more effective. And this particular narrative is very narrow-minded.

If only profit counts, then externalities don't count: cultural, social, and environmental externalities are not the problem of financial institutions. Artificial intelligences shaped by this narrative will create a context in which people don't feel any responsibility. An emerging risk is that machines of this kind are so powerful, and fit the narrative so well, that they reduce the probability of questioning the big picture and make us less likely to look at things from a different angle... that is, until the next crisis.

This kind of story will very easily apply to other matters. Medicine, e-commerce, policy, advertising, national and international security, even dating and sharing are territories in which the same kind of artificial intelligence system is starting to work: systems shaped by a generally very narrow narrative, which tend to reduce human responsibility and overlook externalities. They reinforce the prevailing narrative. What will a medical artificial intelligence do? Will it be shaped by a narrative that wants to save lives, or by one that wants to save money?

What do we learn from this? We learn that artificial intelligence is human, not post-human, and that humans can ruin themselves and their planet in a great many ways, of which artificial intelligence is not the most perverse.

Machines that think are shaped by the way humans think, and by what humans don't think about deeply enough: every narrative sheds light on some things and forgets others. Machines react and find answers within a context, reinforcing the frame. But asking fundamental questions is still a human function. And humans never stop asking questions, even questions that are not coherent with the prevailing narrative.

Machines that think are probably indispensable in a world of growing complexity. But there will always be a plurality of narratives to shape them. Just as in natural ecosystems a monoculture is an efficient but fragile solution, so in cultural ecosystems a single line of thought will generate efficient but fragile relations between humans and their environment, whatever artificial intelligences they are able to build. Diversity in ecosystems, and plurality of narratives in human history, are the sources of the different problems and questions that generate richer outcomes.

To think about machines that think means to think about the narratives that shape them. If new narratives emerge from an open, ecological approach, and if they are able to grow in a neutral network, they will shape the next generation of artificial intelligences too, in a plural, diverse way, helping humans understand externalities. Artificial intelligence is not going to challenge humans as a species: it will challenge their civilizations.