Reality is an Activity of the Most August Imagination

Tim O'Reilly [10.2.17]

Wallace Stevens had an immense insight into the way that we write the world. We don't just read it, we don't just see it, we don't just take it in. In "An Ordinary Evening in New Haven," he talks about the dialogue between what he calls the Naked Alpha and the Hierophant Omega, the beginning, the raw stuff of reality, and what we make of it. He also said “reality is an activity of the most august imagination.”

Our job is to imagine a better future, because if we can imagine it, we can create it. But it starts with that imagination. The future that we can imagine shouldn't be a dystopian vision of robots that are wiping us out, of climate change that is going to destroy our society. It should be a vision of how we will rise to the challenges that we face in the next century, that we will build an enduring civilization, and that we will build a world that is better for our children and grandchildren and great-grandchildren. It should be a vision that we will become one of those long-lasting species rather than a flash in the pan that wipes itself out because of its lack of foresight.

We are at a critical moment in human history. In the small, we are at a critical moment in our economy, where we have to make it work better for everyone, not just for a select few. But in the large, we have to make it better in the way that we deal with long-term challenges and long-term problems.

TIM O'REILLY is the founder and CEO of O'Reilly Media, Inc., and the author of WTF?: What’s the Future and Why It’s Up to Us.

REALITY IS AN ACTIVITY OF THE MOST AUGUST IMAGINATION

The thing I've been struggling with is understanding the relationship of technology and the economy. There's a narrative today about AI eliminating human jobs, and it's pretty clear to me, based on history, that it's wrong. History teaches us that if we use technology correctly, we increase productivity. The fundamental questions that we're facing today are not about how technology will inevitably put people out of work, they're questions about how to distribute the fruits of that productivity, and what we have to do differently in order to get a different outcome than the one we’re facing now.

We seem to be in the throes of technological determinism, but the future is not preordained; it is determined by the choices we make. If you look at the history of how we've dealt with past technological revolutions, a social conscience arose and we decided to change the way our society works.

I'm trying to figure out how to change the rules of the game and get people to think differently about the future. It's pretty clear to me that there is plenty of work to be done that technology can help us with, huge problems to be solved. What's keeping us from putting today's technology to work on those problems and instead forcing us to spend time on so much triviality? In particular, I'm thinking a lot about the kind of advice I as a technologist could give to policymakers, people in Washington, or Brussels, or China—to say, "Here's what you ought to be doing; here's what the real path of technology teaches us; here are the choices that you should be setting up for our society; this is the kind of leadership that you should be exerting." 

One of the things that I've spent my life doing is engaging with computer platforms. I began my career in an age dominated by Microsoft, just as the industry was throwing off the bonds of the IBM monopoly. I became strongly associated with the open-source software movement and the early commercial Internet. I was watching this conflict of worldviews play out in a company that started out creating a lot of value in the computer industry, and then increasingly started to capture that value for itself.

I remember a conversation I had with Walt Mossberg in which he recounted a conversation he had with Steve Ballmer who, at the time, was the CEO of Microsoft. He told me he’d said, "Steve, if you guys would be 5 percent less greedy, the world would like you 100 percent more." I'm now watching that dynamic play out again with Google where, despite their "don't be evil" philosophy, they're becoming the focus of antitrust investigations.

I look at these patterns in platforms in which they start out with a burst of optimism and creation of public value, then gradually they start turning away from that. I'm trying to understand why that is. How do we build long-lasting companies that create a balance between the company or the platform and its ecosystem? Microsoft lost leadership because they had taken away the opportunities for their developer ecosystems, so those developers went over to the Internet and to Google. Now, we see this same thing playing out again.

It seems to me that the same pattern we've seen with technology platforms is playing out today in our broader economy, where financial markets in particular have turned into an extractive monopoly rather than a support system for our economy. Something like 85 percent of all corporate "investment" today goes to dividends and share buybacks. Very little goes to actual investment in people, building things, and R&D. It's all going into financial gamesmanship.

Looking at the pattern of algorithmic systems like Google and Facebook, I started thinking about how that applies to financial markets. A system like Google has hundreds of factors that are being taken into account, but they all have a master objective function, a fitness function, which is relevance in search results and ads.

Facebook's fitness function, their objective function, is to produce content that's engaging. We saw how that went wrong with fake news. Mark and the Facebook team are trying to deal with that. They're wrestling with the fact that they had this idea about how to build an engaging product, and it's been subverted.

I look at our economic system where inequality is increasing, and I ask myself: Is that also a system dominated by an algorithm? If so, what is it trying to optimize for?

I started to realize that thirty or forty years ago was the point at which we told companies that there's only one thing to optimize for, and that is shareholder value. That's the point in the '70s where you see this great divergence between the increased productivity brought on by technology and the actual benefit to the economy, where you see inequality soar, where you see people not doing as well as their parents—all this work that Raj Chetty has talked about. I came around to thinking that in some ways financial markets are that rogue AI that people like Elon Musk have been talking about.

Nick Bostrom and Elon Musk and everybody use these artificial thought experiments, like the AI which follows its objective function of making paperclips. Musk used one recently about the strawberry-picking robot that decides humans are in the way of its picking strawberries. Those aren't realistic. What is realistic is a world in which you have an increasingly algorithmic financial system saying, "Hey, optimize for corporate profit because it drives stock price. Never mind what happens to the people. Never mind what happens to society." We're in that AI-driven situation.

What do we do about that? How do we debug the objective function of an increasingly automated economy, of an economy that's dominated by systems that are, to use a great quote from Wallace Stevens, "without human feeling, without human meaning, a foreign song"? We're living in a world which is dominated by a system that disregards human value.
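The "objective function" idea running through this argument can be made concrete with a toy sketch (my own construction, not anything from Google, Facebook, or the financial system): a greedy optimizer pursuing a single objective ignores every cost that objective does not encode, and "debugging" it means pricing those costs in.

```python
# Toy illustration: a single-minded optimizer follows its objective function
# wherever it leads -- the paperclip-maximizer pattern in miniature.

def optimize(objective, state=0.0, steps=100, lr=0.1):
    """Greedy hill-climb: nudge `state` in whichever direction raises `objective`."""
    for _ in range(steps):
        if objective(state + lr) > objective(state):
            state += lr
        elif objective(state - lr) > objective(state):
            state -= lr
    return state

# Hypothetical economy: profit rises with extraction x, but so does social harm.
profit = lambda x: 10 * x - x ** 2   # peaks at x = 5
harm = lambda x: x ** 2              # grows without bound

x_naive = optimize(profit)                           # optimize profit alone
x_fixed = optimize(lambda x: profit(x) - harm(x))    # "debugged" objective

# The profit-only optimizer settles near x = 5 (harm ~ 25); pricing harm
# into the objective pulls it back toward x = 2.5 (harm ~ 6.25).
```

The point of the sketch is that nothing in the optimizer is malicious; the outcome is entirely determined by what the objective function does and does not count.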

Look at fake news on Facebook, for example, which is something that Mark is wrestling with right now. He wants to figure out how Facebook reinforces true social bonds, as opposed to the fake bonds. There's a bunch of work that's going on at Facebook to figure that out. A great quote comes to mind from a guy named Andrew Singer. Many years ago he said to me, "The art of debugging a computer program is to figure out what you really told the computer to do instead of what you thought you told it to do." Right now, Facebook is engaged in this struggle, as is Google.

We have to do the same thing in our society. We had a theory that if we optimized for shareholder value and corporate profit, and if we aligned the interest of shareholders and the interests of management, companies would prosper and the economy would prosper. Now, thirty or forty years on, we're looking at it and realizing it didn't quite work out that way. So what do we replace it with?

One of the things I've been talking a lot about is "we." You might be wondering who this "we" is. There are a couple of key ideas that I'm wrestling with, but I don't think I have the answers to them. The first one, and probably the most important one, is the collective "we" of what we collectively believe. Ultimately, what politicians do and what we do as a society is a result of what billions of people come to believe about the way the world is and what's right.

If you think back to the Middle Ages, everybody believed in the divine right of kings. Right now, everybody believes in the divine right of capital. It's only natural that the owners of businesses and the owners of capital should try to extract as much as possible for themselves and leave society in the lurch, and that they should basically treat people as a cost to be eliminated. We accept that. When I say "we," I mean all of us accept it. We simply believe that that's the way the world works. We ignore things that run contrary to that.

There are these isolated companies that are playing by totally different rules. A co-op like REI is a good example. REI outperforms all of its public market competitors on measures of traditional, “real” market activity: Their same-store sales growth is higher, their revenue per store is higher, and they pay their employees more. But they're not very profitable, and they don't have a stock that financial markets can bet on, because they give their profits back to their customers in the form of dividends, so they simply vanish from the financial story. The Green Bay Packers, a storied football team, are owned by the fans, who use their ownership to keep season ticket prices low. They're not any less successful than other companies. In fact, they may be more successful, but they just vanish from the narrative.

Jeff Bezos once told a story at one of my conferences, which apparently he attributed to Danny Hillis. Danny said that collective intelligence is that thing that decided that orange meant decaffeinated coffee. During World War II, Sanka, which was making decaffeinated coffee, gave out orange coffee pots to diners, and then orange became a symbol for decaffeinated coffee.

It's a meme. Of course, everybody now thinks a meme is a picture and some text on the Internet, but we know from Richard Dawkins’ original definition that a meme is simply an idea that spreads, in the same way that a gene reproduces itself. Our world is full of ideas that spread. Spreading new ideas about the way the world could be different is a huge job for all of us. We have a sense of inevitability. We have this sense that the way the world is today, that the story we tell ourselves about the world is somehow true.

I was shaped very early on in my life by the poetry of Wallace Stevens. He talked about reality as the quest for a supreme fiction. What is it we can collectively believe? We're trying to create an aesthetic vision for each other about the way the world ought to work.

Of course, this is also the subject of Yuval Noah Harari's book, Sapiens. He goes through the history of humanity as the evolution of collective beliefs that allow us to act in ways we were not previously able to, pointing out that religion and politics were ways to get larger groups of people to act in concert. Money is a fiction that lets us build an economy; so is debt. He talks about the invention of the future as something that you could invest in and move towards. In a similar way, we have to look at the beliefs we hold that limit what we can do as a society.

That's the first "we" that I'm talking about.

There's a much more proximate "we," which is the set of politicians and intellectual actors of various kinds who shape that collective consciousness. One of the things that's interestingly different today is how much that collective consciousness is now shaped by media on the Internet—by Google, and by Facebook. We've seen this with the dark turn that we've taken as a result of fake news, where these systems have reinforced and brought out and amplified dark beliefs. One of the things that's somewhat dangerous is that we're feeding the beast.

There is a role for real leadership. I just started reading Ira Katznelson's book about the New Deal called Fear Itself. He talked a lot about the belief in the '30s that America needed a Mussolini, a strong leader who would knock heads and get everybody moving in the right direction, and that liberal democracies were not able to pull it together to have concerted action and clear focus.

We're back there today, where we think that nothing can be done through the political process. We need inspired leadership that is not just responding to micro-targeting and trying to manipulate people, but which comes from a vision and is able to communicate that vision and get people to believe in it.

History is full of people who have amazing persuasive skills. Think of Napoleon escaping from Elba and coming back, despite having been defeated. All of France rises again. This guy had an amazing ability to compel belief. Obviously, you see that in today's politics. Trump has been able to bring out the beliefs of an untapped set of people. In a similar way, but I hope a much more positive way, we need leaders who can summon up a vision of a more just, a more fair, a more equitable world, one in which we tackle the great problems we face today.

Look at something like climate change. Yes, it's great to see that someone like Elon Musk is trying to move the ball forward to show that you can make a successful business in electric cars, in solar energy, and for that matter, that you can go into space. Elon is a dream maker for people. He is resetting the expectations of what's possible. But we also need that in the political realm.

The one thing I find hopeful in the rise of Donald Trump is that he's broken a lot of the old paradigms, a lot of the old assumptions about which ideas have to go together in lockstep. But can we invent a new map of the world that makes more sense? One that reinvents what's possible?

The thing that worries me a lot, and I think about this in the context of my training as a classicist, is the fall of Rome: people lose focus and will, they become reactive, and by the time the real crisis comes, they no longer have the capacity to rise to it. That's the thing I worry about.

If you think about climate change, we should have acted sooner. Not only is the clock ticking and the problem getting worse, but we're going to lose our capacity to respond, because so much of our productive capacity will go to just dealing with crisis. Look at what happened in Houston, and what's happening in Florida. All we can do now is repair.

I don't expect change to happen in an instant, but is it happening fast enough for us to avoid disaster? Our current system is not up to the challenges. We are now struggling with this great crisis of climate change. Long before he was worried about superintelligence, Nick Bostrom wrote a fantastic piece about Fermi's paradox. He said something like, "Every time there's a new discovery of exoplanets, I'm elated because it means there's a greater chance that there's life elsewhere in the universe. I'm also dismayed because it means that the great winnowing is still ahead of us instead of behind us." If there were very few planets and we were on one of them, then maybe nobody's out there because life is rare. But if planets are common and life is rare, then the question of "where are they?" becomes much more frightening because it says that civilizations don't last.

I read that piece and I thought a lot about it in the context of climate change. It made me wonder if one of the possibilities might be that we use up all the cheap fossil fuel, and civilization falls as a result of climate change. We don't necessarily get entirely wiped out, but that source of cheap available energy is not there when we come back, so we don't build a technological civilization. We get back up to maybe the level of the Victorian era because that's what you can do with the energy sources that are available. That would be a possible answer to Fermi's paradox.

But I digress. I do think the world is going to intervene. One of the big fallacies we all live with is that the world is somehow under our control. The lesson of history, of course, is that unexpected things happen. Things happen in different parts of the world. Natural disasters happen. There are so many possibilities that could completely upset our world, sometimes in a terrible way, but sometimes in a much better one.

One of the things that broke the feudal system was the Black Plague. Suddenly there were fewer workers, and they became much more powerful as a result. They became much more valuable to people. There can be unexpected things in our future. Again, I'm not trying to say we should just rely on that.

I think a lot about the wonderful insight from scenario planning—what you want to have is a robust strategy, that is, something that will be useful in a wide range of possible futures. We have a possible future in which there is global conflict. We have a possible future in which there is a natural disaster. We have a possible future in which we make a just egalitarian society where the world is more productive and better off than it's ever been, and it has a very different economic system than we have today. All of these are possibilities.

So what should we be doing? The job that I'm trying to do right now is to articulate a narrative. In some ways, it's the biggest narrative that I've ever tried in my life. A lot of what I've done in my career has been to reframe the narrative, to redraw the map.

Now, I'm trying to reframe this question of what do computer platforms and their rise and fall tell us about society and economies and their rise and fall? There's a wonderful conclusion that Ryan Avent comes to at the end of his book, The Wealth of Humans, which is that generosity is the robust strategy.

Bob Putnam—the famous sociologist and author of Bowling Alone and many other books—when we were in a Markle Foundation working group about the future of the economy, said to us, "Every great advance in our society has come when we have made investments in other people's children." He was referring to universal grade school education, universal high school education, and the G.I. Bill. These investments in the future that are not self-interested have in fact been a huge source of advancement for our society. We have to figure out what the right investments in the future are for the 21st century.

Right now, the problem in our politics is it's so backward-looking. We have a set of people who are telling a story about how the old days were the good old days, and we just need to go back to the old policies of the Great Society, or we have to go back to some conservative ideal.

We have to make it new. That's a wonderful line from Ezra Pound that's always stuck in my brain: "Make it new." It's not just true in literature and in art; it's true in our social conscience and in our politics. We have to look at the world as it is and the challenges facing us, and we have to throw away the old stuck policies, where this idea over here is somehow inescapably attached to that other idea. Just break it all apart and put it together in new ways, with fresh ideas and fresh approaches.

One of the other things that we can learn from technology platforms, and that we have to put into practice on a larger scale in our economics and in our social thinking is data-informed decision-making. At Google, or Facebook, or Amazon, they're running millions of experiments in which they're trying new combinations of data, new combinations of software, new combinations of user interface. They're measuring what happens, and then responding and adapting.
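That experiment loop can be sketched in a few lines (entirely my own toy; the conversion rates and the helper function are hypothetical, not any real platform's infrastructure): try a variant, measure what happens, keep the winner, and iterate.

```python
# Minimal A/B-experiment sketch: simulate user responses to two versions
# of a product and let the measurement, not intuition, pick the winner.
import random

random.seed(42)  # fixed seed so the toy data is reproducible

def simulate_conversions(rate, n=10_000):
    """Simulate n users; each converts with probability `rate`."""
    return sum(random.random() < rate for _ in range(n))

# Hypothetical true conversion rates for the control UI and a new variant.
control_hits = simulate_conversions(0.10)
variant_hits = simulate_conversions(0.15)

control_rate = control_hits / 10_000
variant_rate = variant_hits / 10_000

# Ship whichever measurably performs better, then run the next experiment.
winner = "variant" if variant_rate > control_rate else "control"
```

Real platforms run thousands of these concurrently with proper statistical tests; the contrast with a law revised every twenty or thirty years is the point.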

What you see in our political life is that somebody frames up an idea, we encode it in incredibly complex, paper-based systems, and then those systems don't get updated very often. They get updated every twenty or thirty years, or they get updated simply by piling more stuff on. We don't say, "Well, that didn't work, take it out." Laws don't go away; they just get slightly modified and carried forward.

We need to refresh our politics. I'm hoping for some bold leadership to emerge over the next couple of elections, where we break the old lock of the parties, and where we have fresh thinking. It's pretty clear that just a rehash of what went before is not going to be sufficient for the problems that we face.

Wallace Stevens had an immense insight into the way that we write the world. We don't just read it, we don't just see it, we don't just take it in. In "An Ordinary Evening in New Haven," he talks about the dialogue between what he calls the Naked Alpha and the Hierophant Omega, the beginning, the raw stuff of reality, and what we make of it. He also said “reality is an activity of the most august imagination.”

Our job is to imagine a better future, because if we can imagine it, we can create it. But it starts with that imagination. The future that we can imagine shouldn't be a dystopian vision of robots that are wiping us out, of climate change that is going to destroy our society. It should be a vision of how we will rise to the challenges that we face in the next century, that we will build an enduring civilization, and that we will build a world that is better for our children and grandchildren and great-grandchildren. It should be a vision that we will become one of those long-lasting species rather than a flash in the pan that wipes itself out because of its lack of foresight.

We are at a critical moment in human history. In the small, we are at a critical moment in our economy, where we have to make it work better for everyone, not just for a select few. But in the large, we have to make it better in the way that we deal with long-term challenges and long-term problems.

We can't just expect somehow that the market will magically come up with solutions. We have to rise to the challenges through political leadership, through intellectual leadership, even through religious leadership. We have to have a moral revolution in ourselves, where we come to believe different things about what should be and what will be because we make it so.