Bonding with Your Algorithm

Nicolas Berggruen [6.5.18]

Photo by Stefan Simchowitz.

Investor and philanthropist NICOLAS BERGGRUEN is the chairman of the Berggruen Institute, and founder of the 21st Century Council, the Council for the Future of Europe, and the Think Long Committee for California.


Some questions I'm asking myself are about humans and humanity, and some of them are more about how we live and how we co-exist. The ones about humans and humanity are, in my mind, who are we becoming as humans in the age of self-transformation, when we can modify who we are or potentially create a new species?

Thanks to AI and gene editing, we can modify who our children will be biologically, and we can create augmentation or competition to ourselves with artificial intelligence. That allows extraordinary possibilities, but also real questions. How far do we go? Are we creating a new species? What will the species be? What do we want the species to be? How do we want it to treat us or be part of us?

These are all the questions that I am intrigued by. In essence, we are the creators. We are God. And if we're God, what do we want our children to be? That’s the question that is the hardest in the long term, but it's the most significant in terms of changing the nature of humans and humanity. We’ll also have an enormous influence on the rest of the species around us. We have already, but this will provide the most powerful tools ever. Again, our influence on ourselves and on others is going to be multiplied. That’s exciting, but this is the biggest question. That’s the human question.

The other question is one of co-existence, not with what we are creating, but with each other today. Our democratic systems are fraying and we have to rethink them. It’s not a question of short-term political battles; it is, in essence, a question of how the system and how democracy itself have to be restructured. The same with capitalism: Capitalism has conquered the world, but we are getting to a point where it may benefit most people, yet in a way that’s very uneven. We have to rethink that.

In a world that, in theory, wants to cooperate, we can see that different nations, different people, and different cultures are at odds with each other. After World War II, the world came together, and institutions like the UN and more recent ideas like the G20 were trying to bring people together. Today, the world is breaking up, and you have the key nations each out for themselves—the US for itself, Europe (which doesn't exist as a cohesive group), India for itself, Russia for itself, China for itself, Turkey for itself. It’s becoming a harder world to manage. These are my questions beyond my own personal life.

The technological tools like gene editing or artificial intelligence are going to allow us to truly modify ourselves and the nature of who we are as humans. They’re going to be irresistible. Every corner of humanity somehow will invest in them, adopt them, use them, but in different ways. And because they’re going to be so transformative, we’re going to have to think about who we want to be and who we want these children of ours to be, beyond what used to happen in nature, which is that we reproduce, create life, and then educate these children and see them as our future, but in a natural way.

The relationship between parents and children is the most important relationship. In this case, it gets more complicated because, beyond the children being our natural children, we can influence them even beyond. We can influence them biologically, and we can use artificial intelligence as a new tool. I’m not a scientist or a technologist whatsoever, but the tools of artificial intelligence, in theory, are algorithm- or computer-based. In reality, I would argue that even an algorithm is biological because it comes from somewhere. It doesn’t come from itself. If it’s related to us as creators or as the ones who are, let’s say, enabling the algorithms, well, we’re the parents.

Who are those children that we are creating? What do we want them to be like as part of the earth, compared to us as a species and, frankly, compared to us as parents? They are our children. We are the parents. How will they treat us as parents? How do we treat our own parents? How do we treat our children? We have to think of these in the exact same way. Separating technology and humans the way we often think about these issues is almost wrong. If it comes from us, it’s the same thing. We have a responsibility. We have the power and the imagination to shape this future generation. It’s exciting, but let’s just make sure that they view us as their parents. If they view us as their parents, we will have a connection.

A very Asian concept, which I’m sure is foreign to most of the people who engage with these things, is one of filial piety, the idea that children have a responsibility towards their elders. They have a duty, in essence, to their elders. That’s almost the opposite in the West. New generations are supposed to kill the old ones, just as new ideas are supposed to kill the old ones. That’s fine, but in this case, it could be dramatic. If AI superintelligence gets there, and it probably will, if gene editing is used to its full extent, the new generations can easily dispose of us. If we’re interested in surviving as a species, we have to create a bond.

~ ~ ~ ~

Theoretically, we are digital, but we’re definitely analog in the sense that we are physical. If you take away the physical side of us, I don’t think we would exist or want to exist, at least today. These delicious chocolates we just had, we wouldn’t be tempted by them, we wouldn’t enjoy them, we wouldn’t be given pleasure or hurt by them.

What will these algorithms become? They only become something, at least the way we’ve experienced it, if they become physical. What I mean by physical is everything that has to do with the world of emotions and feelings—the soft world, not just the world of the mechanics—that makes us humans, that makes animals animals, that makes any living organism participate and react to its environment.

The analog world is the world that still connects us. In the end, the digital world is a tool, but our reactions are still analog. Again, living entirely in a digital world may be an illusion, and in any case we would have to recreate the analog world for anything to be significant even in the digital world. At the end of the day, they’ll be one and the same. Saying the world is all digital or all analog is probably a false way to try to define things. In the end, they have to be the same. This may be very naïve. I’m not a technologist and, therefore, I’m probably wrong. Technologists will prove me wrong over time.

Is the universe entirely logical or is there randomness in our universe? Where do we come from? Where does the universe come from? Nobody seems to know. I know that mathematicians are better able to chart where we’re going. That gives a lot of power to the world of algorithms, but is it entirely predictable or is there randomness? And if there is randomness somewhere, that tells me that there’s an element of chance even with algorithms. Even if you need algorithms to create chance, to create the optionality, and even though you may know the direction of things, there’s going to be some unpredictability that is just part of the system. If that happens to be the case, then the value of the analog world—the not perfect, the not precise—will always be there. If everything were perfectly digital, perfectly explainable, then there would be no reason for the universe or the world to exist, because why would there be change? Change is about the fact that something is by definition imperfect. Why change something that’s perfect? If you could explain it entirely through an algorithm, there wouldn’t be the need for change.

The fact that there is change means one of two things: We started with something imperfect, or the algorithm is all about change. If it is all about change, fine, let’s accept it. Maybe it’s a perfect algorithm and change is the nature of the algorithm. It’s like the genie is out of the bottle: You can never reconquer it, and you can never master it. That means there is an element of unpredictability or an element that is beyond mastering the algorithm. You’ll never catch it in some ways.

The questions are so difficult and profound. Unless I am able to answer the questions myself in a way that’s credible, I can’t take anybody else’s word. I haven’t been able to get very conclusive answers from people who are on one side or the other, meaning, people who believe the world is all analog or all digital. In the case of artificial intelligence, people who are part of potentially creating superintelligence—Larry Page, Elon Musk, Demis Hassabis, and others—are the ones who are at the forefront of it. What’s interesting is that all of them think that it will happen in our lifetime. What they don’t agree on is whether it is good or bad. That’s very fuzzy today. Some think it is going to be destiny, that it will be great for humanity somehow, but they don’t know what it means. Others say the opposite, that superintelligence will be the end of our species being the dominant species on Earth. So, you have two different views.

My own feeling is they’re both right in some ways. Superintelligence means something beyond our grasp, our strengths. Where I’m not so sure they’re right is looking at it as potentially an outside species or separate agent. There will be, and there are already lots of separate agents; these are robots, but with limited capacity. An agent that is a new species that has capacity that’s quite extraordinary, not necessarily all human in terms of qualities, but with enormous powers to evolve and self-transform, that may happen. But if we create it, does it have to be separate from us? It should be part of us, as opposed to separate from us. That will be the challenge. If we are going to want to survive as a species, if we’re going to want to live well with what we create here, we’ve got to make sure that it works with us, as opposed to outside of us. If it’s outside of us, we might be in trouble. We’ll lose control of it. It may become superior in some areas that may endanger us and other species. If we make sure that it is part of us, then we have a better chance. That would be instinctively the way I would try to go.

The tools are technological, the tools have to be invested in, but let’s look at them as biological and not just algorithmic. If we look at them as part of us, part of our biology, because we’re creating them, then it becomes more interesting and potentially friendlier.

There is an idealistic notion of humans as being one. And then there are people who question that: “There are going to be winners and losers—I care about me and my family. Do I care about others?” There’s always going to be that, but anyone who is truly thoughtful about the long-term consequences, about the species itself, is going to agree that this technology or this child of humanity with extraordinary powers and extraordinary capacity is the future of our species. So, it’s not just about me, or about my sister, or my neighbor, or my enemy, it’s about all of us. It is something that we have to think about as affecting everyone that’s part of the species. It's a little bit like a god would think, “Well, what am I creating?” And if you do that, you have to expose everyone, and you have to include everyone. That, as a creator, is part of the act of creation.

What’s interesting is that in the West, especially in the US today, you do have civil society, let’s say, private sector actors who are at the forefront of this. Government is very far behind. In general, government is always behind civil society in technology, and the gap is becoming bigger. Ultimately, these technologies are so powerful, you’re creating a new species that affects everyone, so you have no choice but to have government involved. It has to be intelligent government, it has to be a government for everyone; it shouldn’t be partisan. That’s going to be the issue of the West. The East, China, has understood this reasonably well. The government there will at some point take a position that they should be at the forefront of this. If you look at a country like France, which, historically, has had a lot of state involvement, they have also thought about this idea. Canada, which is somewhat influenced by French thinking, at least with Trudeau, is also thinking about getting involved.

One of the difficulties is that you have different speeds in different countries and cultures, but ultimately it will affect all of us. To make this productive for the world, we will need some cooperation. In cyberspace you didn’t have it. In cyberspace, everybody developed their own tools. You can see it’s quite messy today.

These tools—AI and gene editing—are going to be even more powerful, and if we don’t find a way to cooperate, we’ll have trouble because we’re going to have competition between individual agents, whether that means companies, or people, or nations. You need somehow to come up with standards for people to cooperate or to put, in essence, a lid or standard on whoever gets there first. That hasn’t been thought through or defined today.

I was at Asilomar, which was very interesting, and I was happy to be invited. But today, all these meetings and some of these institutes, which are asking the right questions, are run by people who are very good and get cooperation and funding from people who are also very good and who are the ones transforming a lot of this. Today, these people are almost a self-selecting group, and they are ambivalent, because if they talk to government or to other cultures, let’s say the Chinese, they’re afraid they'll invite in someone who is going to make it harder for them or stifle them. On the other hand, if they don’t engage with the wider world, one of two things happens: In some cases, they’ll get shut down by governments if the technology gets too powerful. Or it’s going to become such an arms race that the world will be in trouble. They know that ultimately this has to be a bigger discussion. Some of the institutes that you talked about are very aware of it. Funnily enough, the governments in the West are way behind. They're not engaged or willing to be engaged because, let’s be honest, in the West it’s political. It’s all about elections, all about very short-term gains, and these are long-term questions. I’m not sure how much governments are willing, even though they should be, to engage.

Civil society, in terms of the private sector, and government are at odds in the West. That’s very unhealthy and very dangerous. There have been a lot of technology programs, from the Apollo program and others, in which government was very involved and civil society got a lot of benefits from it. There was support from one another. There was trust from civil society towards government, and government was doing something great and major.

Today, civil society distrusts government, so there is very little cooperation. That’s a problem long term. A lot of people say it's better that way, that civil society can develop the technological tools on its own. But there are good actors, bad actors, and naïve actors, and unless you have some cooperation with a government that looks at the long-term benefit for all of society, the risk is too high.

In the East, take China as an extreme example, and I’m not saying China is a good example for us, but you have incredible cooperation between government and the private sector. You just have it. There’s no other way and, therefore, they may just be stronger than us because we’re divided.

President Xi & Nicolas Berggruen, Beijing, November 2015

Going back to the AI world, if you ask Garry Kasparov whether the greatest chess player is a human or a machine, well, we know it's the machine, but he’ll always tell you that the machine and the human together are going to be better than the machine alone. It’s a little bit the same way here in terms of society. Government and civil society working together are going to be stronger than the two on their own.

For a period of time, one is going to get ahead of the other. Right now it looks good for the private sector: The private sector in the West is ahead of China's for sure, but the Chinese may catch up because of the cooperation between their government and civil society. We are not able today in the West to get these two to work together. The distrust between the two is enormous. It’s very unhealthy, especially with these subjects.

Enormous funding from some of these tech companies or tech entrepreneurs, in theory, is going to put the West at an advantage because they're more agile, they're freer. But at the end of the day, I’m not sure. If you’re at odds with society, I’m not sure on things that are so important that you’re going to survive or win.

There is some pure research being done, or trying to be done, but again, in the West the temptation is that the best engineers and the best minds go where the most money is. The gadgets win because they’re useful commercially. Are there going to be enough people willing to work on the tools for science alone? Probably, but maybe not enough. Even though China today is not very different from America in terms of being an ultra-capitalist society, culturally, they have this dedication to a community surviving beyond individuals, the nation surviving, and the party surviving politically. Because of all these things, they get better talent into the non-commercial sector. In the West it’s very hard.

~ ~ ~ ~

In a world where technology is so exciting and so transformative, where globalization means multiculturalism, you still have to go back to questions about humanity, about who we are as humans and where we are going—ethical questions, moral questions, just very basic human questions. All these questions that philosophers ask are important and very refreshing because it’s another way of asking these same questions in a non-technical way. If you think of what has changed us as humans over thousands of years, in my mind it’s ideas; it’s a conception of the world. And the conceptions of the world came from different origins, but they always came from thinkers. Those may have been religious thinkers, messiahs, or they could have been just normal thinkers who then became gods of some kind.

If you think about who we’ve become over the thousands of years that we know in terms of history, it always came from ideas, it always came from people. If you think of our world in the West, it came, frankly, from one book: the Bible. These are all conceptions of the world. Jesus was, in essence, a thinker who delivered a message. Interestingly, others at the same time delivered similar messages, but his got picked up and became our world, the world of individuals. On the other side of the world, people like Laozi, Confucius, or Buddha are now religious figures, but they were thinkers who shaped their world, a world that has influenced us, too. At the end of the day, the implications were very often philosophical. Thinkers like Rousseau, Nietzsche, or Karl Marx, in terms of how society functions, have had more influence on our lives than probably anyone else.

Rewarding and engaging with philosophers in a broad sense, in the way that philosophy was thought of by the Greeks 2,000 years ago, to me, is very relevant, especially at a time when the world is culturally messy. You’ve got the world divided against itself, nations divided against themselves. You have to go back to simple and fundamental questions that philosophers ask.

The world that is imagined by people that we call philosophers, the world of ideas, will still influence us more than anything else. Why? It’s a conception of the world. If you change the mindset, you change everything. The mindset that created the idea of democracy, the mindset that created the individual versus the community, or the opposite—these are such fundamental ways of thinking of ourselves and how we function. This has changed everything. Very fundamental concepts about who we should be and who we are as humans have ultimately more influence on us, our societies, and how we live than anything else.

The idea of equality is a fairly new idea. The idea of men and women being equal, the idea of animals potentially having some value—these are highly conceptual, they were created over time, and they make all the difference in our lives. These are fairly fresh ideas. We don’t realize their significance until a few generations later. Very often that is the challenge: The ideas of great or very important thinkers, good or bad, sometimes only become popular or part of the mainstream generations afterward.

The most influential thinkers were deeply unpopular: Socrates was poisoned, Jesus was crucified, Confucius was exiled, Karl Marx, same. The most influential people who have changed the nature of our lives were always unpopular, but why? It's because they came up with things that were deeply difficult for us to absorb. Change is always hard.

Our challenge today is no different than 2,000 years ago, or 1,000 years, or fifty years ago: It’s to have the courage to provoke ourselves to come up with really different concepts. By nature, as humans we have to change, and to change we need new ideas. This is our work at the Institute. If we’re good at the Institute we’ll, hopefully, enable some new ideas that may not be popular, but will be significant—a little bit like technology, where you have to go towards things that are potential failures or potentially unpopular. We have to do the same thing in the world of ideas.

New ways of thinking have always allowed us to change and evolve, and they always will. They have to be deep and different ideas. The world of traditional academic philosophy may not give the answers, but somebody will. Are they philosophers by definition? Potentially yes, potentially no.

Our philosophy prize, created two years ago, has so far been awarded twice, in both cases to philosophers. I expect that in the future it will be given to people who are not technically philosophers. They may be thinkers outside of philosophy. Basically, we are rewarding people who have ideas, people who come up with, let’s call it, transformative thinking.

If you look back at history, the ideas that were created maybe 2,000 years ago, frankly, still shape our world entirely. We have been created by ideas that were created 2,000, 2,500 years ago, in the East and West. The transformations that happened over the last 2,000 years were, again, enabled by thinkers. And there will be thinkers again. Where do they come from? Are they philosophers? Are they this or that? We can call them whatever we want, but they’re going to be thinkers, and they’re going to be thinkers that exist today that we haven’t heard of or maybe some that we’ve heard of whose influence will only come after a period of time. Our human progress is very dependent on great thinkers.

The winner of our prize last year was Onora O’Neill. There is a nine-member jury, of which I'm not a part, that selects the winner. She’s an interesting choice because her two main areas of interest, in my mind, are important contemporary questions. I’m not saying she came up with the answers, but the questions are relevant. One is ethics and technology—what we’ve been talking about. Technology has huge ethical implications, and the connection between the two is more important today, with gene editing and AI becoming so powerful. The fact that she’s focused on that is relevant.

She also talks about the idea of trustworthiness, which is very interesting at a time when trust is in question. Her point is not just about whether something is true or not true; it's about the origin of information. Can you trust the origin? Who’s trustworthy and who is not? Who can you believe and who can you not believe? Who’s got an incentive and who does not?

Going back to the question of Google's DeepMind, or Facebook, or Baidu in China: What are their motivations? Where do they come from? They’re going to create tools that are going to transform us. Can you trust them? Can you trust the government? Which government? Who? Is it somebody who’s elected in the West, or is it someone who’s appointed in the East? They might be our shepherds in terms of technology. They might be our shepherds in terms of governance. Can you trust them? In that sense, Onora was an interesting choice because it’s very contemporary.

What’s interesting about Xi and China in general is that because of the political system, but also very much because of the culture, there is an idea that they’re part of history, that they’re going to create history, and that they’re part of something much bigger and longer than just themselves or their position today. Somebody like Xi or anyone in China with real power feels that they have an enormous responsibility, and the responsibility is not a temporary responsibility; it’s not ideological, it’s practical, but very long term, which is very hard for us to understand in the West.

In the West we tend to be pretty ideological and pretty short term, which is almost the opposite. The first meeting I had with Xi I thought was pretty interesting. He started a long exposé about China, and his responsibility, and the responsibility of the Chinese Government, and how the party really has a responsibility to 5,000 years of history. You don’t ever get that in the West. The idea that you’re here to support a civilization, that you’re by definition part of something that’s 5,000 years old, is a very different perspective than what we have in the West.

Long-term thinking, deep thinking, responsibility for making sure that the whole functions—it's almost the opposite of the West. In the West, we’re willing to sacrifice the majority for the minority. China is willing to sacrifice the minority for the majority. And when I say majority, I mean the vast majority. Is it fair? Is it cruel? It is whatever it is, but it’s exactly the opposite mindset.

Xi is almost the best embodiment of this. That’s maybe why he was chosen, and why he’s choosing himself to take that role. You can feel it when you meet with him. It’s in him, and he feels that responsibility, which is to make sure that Chinese society functions. The system has to continue. He’s got a responsibility towards his people in China. China has to have a place in the world again. For them it’s a place of respect, of prosperity, so they have a real vision of where they should be. Somebody like him feels that that’s his mission.

~ ~ ~ ~

I want to put philosophy back in center stage as an idea—the fact that the world of ideas is so important, not just all the other areas that are being rewarded.