HOW TO BE A SYSTEMS THINKER
At the moment, I’m asking myself how people think about complex wholes like the ecology of the planet, or the climate, or large populations of human beings that have evolved for many years in separate locations and are now re-integrating. To think about these things, I find that you need something like systems theory. So, I went back to thinking about systems theory two or three years ago, which I hadn’t done for quite a long time.
What prompted it was concern about the state of the world. One of the things that we’re all seeing is that a lot of work that has been done to enable international cooperation in dealing with various problems since World War II is being pulled apart. We’re seeing the progress we thought had been made in this country in race relations being reversed. We’re seeing the partial breakup—we don’t know how far that will go—of a united Europe. We’re moving ourselves back several centuries in terms of thinking about what it is to be human, what it is to share the same planet, how we’re going to interact and communicate with each other. We’re going to be starting from scratch pretty soon.
Two or three years ago, I started getting invited to do things with the American Society for Cybernetics. I kept saying that I hadn't done anything or thought about that for years, but they persisted. I was invited to write a chapter for a huge handbook called The Handbook of Human Computation. Basically, what they meant by human computation is human-computer collaboration of various sorts. I told them I didn't know anything about that, and they said, "Since you don’t have time to write a chapter, please write the preface." I asked how I would do that if I couldn't write a chapter, and they said, "We'll send you all the abstracts." I became quite cranky and told them to get someone else to do it, but they kept sending me things to read. First, I searched Google for what human computation was, and I found that I did know about some corners of the field. So, I wrote everything I knew about human computation, sent it in, and said to them, "See, I don’t know anything about it." They published it.
Then I went to the conference and started to get back into the conversation with people working on AI. I realized that I’d learned an awful lot as quite a young person, even as a child, from my parents, who were involved in the Macy Conferences on Cybernetics right through the ’50s. They and other figures who were involved, like Warren McCulloch and many others, were drifting through the house and having conversations all the time, and I was listening.
I didn’t go straight to AI; I was nibbling at the edges of it. I had realized that our capacity to think about complex interactive systems seemed to be falling apart: a great many efforts towards international cooperation were falling apart; states that involved multiple ethnic groups or dialects were breaking up; and, indeed, societies like the United States, with many ethnic and racial groups, were having a progressively harder time trying to cooperate.
We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.
Americans are inclined to talk about the "war against drugs," or the "war against poverty," or the "war against cancer," without questioning whether "war" is an appropriate metaphor. It’s a way of talking about complexity, but if it doesn’t fit, it will cause you to make errors in how you deal with your problems. The war on poverty failed partly because poverty is not something you can defeat, and that makes warfare an inappropriate metaphor. The same is true with the war on drugs, which has gotten us into some ugly situations.
One of the problems when you bring technology into a new area is that it forces you to oversimplify. The possibilities of AI have been there from the very beginning of thinking about computers, but there’s always been this feeling of disappointment at the limitations of what you can do. We keep attempting to do more complex things.
Until fairly recently, computers could not be said to learn. To create a machine that learns to think more efficiently was a big challenge. In the same sense, one of the things that I wonder about is how we’ll be able to teach a machine to know what it doesn’t know but might need to know in order to address a particular issue productively and insightfully. This is a huge problem for human beings. It takes a while for us to learn to solve problems. And then it takes even longer for us to realize that we don’t know all that we would need to know to solve a particular problem, which obviously involves a lot of complexity.
How do you deal with ignorance? I don’t mean how do you shut ignorance out. Rather, how do you deal with an awareness of what you don’t know, and don’t know how to know, in dealing with a particular problem? When Gregory Bateson was arguing about human purposes, that was where he got involved in environmentalism. We were doing all sorts of things to the planet we live on without recognizing what the side effects and the interactions would be. Although, at that point we were thinking more about side effects than about interactions between multiple processes. Once you begin to understand the nature of side effects, you ask a different set of questions before you make decisions and projections and analyze what’s going to happen.
The same thing is true, for instance, with drug testing. The first question people ask is, "Does the drug work?" But the next question should be, "What else does the drug do besides dealing with the pathology?" A certain number of drugs get pulled off the market every year when people realize that the long-term side effects may be more serious than what they’re trying to correct.
What the analog to that in the computer world is, I don’t know. What we do is try to set up processes for problem solving and supply the data for analysis, but we don’t give the machine a way of saying, "What else should I know before I look at this question?" There has been so much excitement and sense of discovery around the digital revolution that we’re at a moment where we overestimate what can be done with AI, certainly as it stands at the moment.
One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it’s willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it’s more multi-dimensional—are going to be lacking.
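One crude first step toward that kind of humility can at least be sketched in code. The toy below, in Python, fits a simple model to data and then declines to make projections outside the range of what it has actually seen. Everything in it (the data, the names, the refusal rule) is my own invented illustration, not a method from this conversation or from any particular AI system.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; also records the x-range seen."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b, (min(xs), max(xs))

def humble_predict(model, x):
    """Project only inside the region the model has actually seen."""
    a, b, (lo, hi) = model
    if not lo <= x <= hi:
        return None            # an explicit "I don't know" instead of a confident projection
    return a * x + b

model = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(humble_predict(model, 2.5))   # about 5.0: interpolation within known territory
print(humble_predict(model, 100))   # None: the model declines to extrapolate
```

Refusing to extrapolate is obviously far short of knowing what else one would need to know, but it is a minimal machine analog of saying "I don't know."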
As a child, I had the early conversations of the cybernetic revolution going on around me. I can look at examples and realize that when one of my parents was trying to teach me something, it was directly connected with what they were doing and thinking about in the context of cybernetics.
One of my favorite memories of my childhood was my father helping me set up an aquarium. In retrospect, I understand that he was teaching me to think about a community of organisms and their interactions, interdependence, and the issue of keeping them in balance so that it would be a healthy community. That was just at the beginning of our looking at the natural world in terms of ecology and balance. Rather than itemizing what was there, I was learning to look at the relationships and not just separate things.
Bless his heart, he didn’t tell me he was teaching me about cybernetics. I think I would have walked out on him. Another way to say it is that he was teaching me to think about systems. Gregory coined the term "schismogenesis" in 1936, from observing the culture of a New Guinea tribe, the Iatmul, in which there was a lot of what he called schismogenesis. Schismogenesis is now called "positive feedback"; it’s what happens in an arms race. You have a point of friction, where you feel threatened by, say, another nation. So, you get a few more tanks. They look at that and say, "They’re arming against us," and they get a lot more tanks. Then you get more tanks. And they get more tanks or airplanes or bombs, or whatever it is. That’s positive feedback.
The alternative would be if you saw them getting tanks to say, "I’d better get rid of my tanks. Let’s cool the arms race, instead of mutually escalating." Gregory was talking about that and didn’t really have a term for it, so he invented the term schismogenesis, with "genesis" meaning the bringing into being of greater and greater schisms, or conflicts. That was before the term "positive feedback" had been coined. That’s what he was talking about: the kind of feedback that accelerates a process rather than controls it, which is a very important concept.
I would say that the great majority of Americans still believe that "positive feedback" is when someone pats you on the back and says you did a good job. What positive feedback is saying is, do more of the same. So, if what you’re doing is taking heroin or quarreling with your neighbor, this is just going to lead to trouble. Negative feedback corrects what you’re doing. It’s not somebody saying, "That was a lousy speech." It’s somebody saying, "Reverse course. Stop building more bombs. Stop taking in more alcohol faster. Slow down." Negative feedback is corrective feedback.
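The two dynamics are easy to see in a toy simulation. Here is a minimal sketch in Python (the scenario, the numbers, and the function names are all invented for illustration): the first loop applies positive feedback, each side escalating in response to the other, and the second applies negative feedback, correcting a deviation back toward a target.

```python
def arms_race(steps=10, gain=0.5):
    """Positive feedback: each side arms in response to the other's stockpile."""
    a, b = 10.0, 10.0          # initial stockpiles, arbitrary units
    for _ in range(steps):
        a += gain * b          # A sees B's tanks and adds more of its own
        b += gain * a          # B sees A's new total and escalates in turn
    return a, b                # grows without bound: schismogenesis

def thermostat(steps=10, setpoint=20.0, gain=0.5):
    """Negative feedback: each step corrects part of the error toward a target."""
    temp = 30.0                # start off kilter
    for _ in range(steps):
        temp -= gain * (temp - setpoint)   # "reverse course" in proportion to the error
    return temp                # settles near the setpoint

print(arms_race())    # roughly (1219, 1561) after ten rounds: mutual escalation
print(thermostat())   # roughly 20.01: the deviation is damped away
```

Run longer, the first system explodes while the second stays put, which is the whole difference between an arms race and a thermostat.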
Gregory then wrote a paper about an arms race and made the move from thinking about the New Guinea tribe to the nature of arms races in the modern world, which we still have plenty of.
At the beginning of World War II, my parents, Margaret Mead and Gregory Bateson, had very recently met and married. They met Lawrence K. Frank, who was an executive of the Macy Foundation. As a result of that, both of them were involved in the Macy Conferences on Cybernetics, which then continued for twenty years. People in the field still quote my mother constantly in talking about second-order cybernetics: the cybernetics of cybernetics. They refer to Gregory as well, though he was more interested in cybernetics as a set of abstract analytical techniques. My mother was more interested in how we could apply this to human relations.
My parents looked at the cybernetics conferences rather differently. My mother, who initially posed the concept of the cybernetics of cybernetics, second-order cybernetics, came out of the anthropological approach to participant observation: How can you do something and observe yourself doing it? She was saying, "Okay, you’re inventing a science of cybernetics, but are you looking at your process of inventing it, your process of publishing, and explaining, and interpreting?" One of the problems in the United States has been that pieces of cybernetics have exploded into tremendous economic activity in all of computer science, but much of the systems theory side of cybernetics has been sort of a stepchild. I firmly believe that it is the systems thinking that is critical.
At the point where she said, "You guys need to look at what you’re doing. What is the cybernetics of cybernetics?" what she was saying was, "Stop and look at your own process and understand it." Eventually, I suppose you do run into the infinite recursion problem, but I guess you get used to that.
How do you know that you know what you know? When I think about the excitement of those early years of the cybernetic conferences, there have been several losses. One is that the explosion of devices and manufacturing and the huge economic effect of computer technology has overshadowed the epistemological curiosity on which it was built, of how we know what we know, and how that affects decision making.
If you use the word "cyber" in our society now, people think that it means a device. It does not evoke the whole mystery of what maintains balance, or how a system is kept from going off kilter, which was the kind of thing that motivated the question in the first place. It’s probably not the first time that’s happened, that a technology with a very wide spectrum of uses has been so effective for certain problems that it’s obscured the other possible uses.
People are not using cybernetic models as much as they should be. In thinking about medicine, for instance, we are thinking more than we used to about what happens when you had chicken pox fifty years ago and now you have shingles. What happened? How did the virus survive? It went into hiding. It took a different form. We’re finding examples of problems that we thought we’d solved but may have made worse.
We have taller smokestacks on factories now, trying to prevent smog and acid rain. What we’re getting is that the fumes are traveling further, higher up, and still coming down in the form of acid rain. Let’s look at that. Someone has tried to solve a problem, which they did—they reduced smog. But we still put smoke up the chimney and think it disappears. It isn’t gone. It’s gone somewhere. We need to look at the entire system. What happens to the smoke? What happens to the wash-off of fertilizer into brooks and streams? In that sense, we’re using the technology to correct a problem without understanding the epistemology of the problem. The problem is connected to a larger system, and it’s not solved by the quick fix.
If you look back at the cybernetics conferences, you’d find a lot of examples that could be applied to social and human problems that have not been. Most people don’t learn about cybernetics. They buy devices. Cybernetics, because it developed a whole branch of communication theory, is a way of thinking, not an industry. In our relations with other nations, for instance, we get caught in schismogenesis—arms races, competitions, escalations of various sorts—without people being aware that that’s what’s happening, without them thinking through what needs to be attended to in order to solve a problem.
We think that we can solve drug addiction by punitive police enforcement. Doesn’t work. In fact, it makes more jobs for policemen and prison guards. We are not using systems theory to think about social problems most of the time. Business problems, yes. There are specialists. Business schools even teach systems theory. But we’re not raising our children to be systems thinkers. That’s what we need to do.
You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.
It turns out that the Greek religious system is a way of translating what you know about your sisters, and your cousins, and your aunts into knowledge about what’s happening to the weather, the climate, the crops, and international relations, all sorts of things. A metaphor is always a framework for thinking, using knowledge of this to think about that. Religion is an adaptive tool, among other things. It is a form of analogic thinking.
The other thing that I like to talk about is that we carry an analog machine around with us all the time called our body. It’s got all these different organs that interact; they’re interdependent. If one of them goes out of kilter, the others go out of kilter, eventually. This is true in society. This is how dis-ease spreads through a community, because everything is connected.
There are a couple of other things that are very striking. If you look at the Old Testament, the Hebrew Scriptures, what you see—which you can also see in young children—is that they start from the differences between things. Mommy’s not the same as Daddy. Daddy’s not the same as brother. I can remember my daughter learning the word "Goggy," which obviously was "Doggy." But then she said that the cow is a "Goggy," because it had four legs, I guess. But then you have to learn to distinguish the cow from the dog. When we think about a child developing, you have to learn to distinguish between things—this is this and that is that. Starting with the Book of Genesis, each thing is created separately. They don’t evolve by differentiation. God separates the day from the night, the light from the dark, the dry land from the water. And then you end up with a large number of rules of things that have to be kept separate. You can’t weave two different kinds of fibers into the same fabric. You can’t plow with an ox and an ass, but must use two oxen.
What you have is this process of differentiation, which is intellectually profound but only a beginning. Taxonomy is an essential basis for all we know about the natural world. We have learned to classify. A bee is not a butterfly. You can see that stage in many forms of religion and mythology. And then in some later forms, the switch is from making distinctions to recognizing relationships.
What comes along if you look at the New Testament is that Jesus keeps violating all the rules about keeping things separate, which makes people angry, because that’s what they’ve been taught. He’s constantly posing the question, "What’s the connection?" and not, "What’s the difference?" This constant necessity of recognizing that things are separate and different and can be used in different ways, and then seeing that everything is connected and interdependent, is a sort of permanent balance in human intellect. If you look at the history of mythology, you can see people moving slowly forward. You can look at the history of science—things that were once equated we now see as separate. We can only go so far in breaking down more and more elementary particles, but we’re still finding particles. We’re still interested in the separation of things, but we’re also still discovering relationships.
I’ve become very much involved in issues around climate change. Climate change comes from proceeding on one path without recognizing how that will affect other aspects of our reality. To take it another step, one of the things that’s hard to get across to people is that when human beings are uncomfortable, they fight or move. At this point we have a refugee crisis, migrations, people leaving areas where their ways of making a living don’t work any longer because of climate change. We also have conflict happening as one country wants to control more arable land—Lebensraum. So, people are fighting about land, or about fishing rights.
Most people don’t realize it, but a myth has been put together about the so-called Arab Spring of a few years ago, where many Americans said, "Oh, good, they’re rebelling against their authoritarian governments and they’re going to become democratic." Well, they didn’t. The cause of the Arab Spring was a five-year drought, with a lot of people having difficulty feeding their families, so they migrated from the villages to the cities, looking for jobs where they would be paid money and could buy food for their families. But there were no jobs in the cities, so they had revolutions.
The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.