How Technology Changes Our Concept of the Self


Peter Galison [11.20.18]

The general project that I’m working on is about the self and technology—what we understand by the self and how it’s changed over time. My sense is that the self is not a universal and purely abstract thing that you’re going to get at through a philosophy of principles. Here’s an example: Sigmund Freud considered his notion of psychic censorship (of painful or forbidden thoughts) to be one of his greatest contributions to his account of who we are. His thoughts about these ideas came early, using as a model the specific techniques that Czarist border guards used to censor the importation of potentially dangerous texts into Russia. Later, Freud began to think of the censoring system in Vienna during World War I—techniques applied to every letter, postcard, telegram, and newspaper—as a way of getting at what the mind does. Another example: Cyberneticians came to a different notion of self, accessible from the outside, identified with feedback systems—an account of the self that emerged from Norbert Wiener’s engineering work on weapons systems during World War II. Now, I see a new notion of the self emerging; we start by modeling artificial intelligence on a conception of who we are, and then begin seeing ourselves ever more in our encounter with AI.

PETER GALISON is the Joseph Pellegrino University Professor of the History of Science and of Physics at Harvard University and Director of the Collection of Historical Scientific Instruments.

HOW TECHNOLOGY CHANGES OUR CONCEPT OF THE SELF

When people talk about abstract ideas, they’re referring to something very concrete, and when I can find a way to address something very broad through something specific and tangible, right there in the present or right there in the past, it interests me. This is how I understand abstract ideas, whether in theoretical physics or philosophy. What appears at first as fully ethereal is often, in fact, grounded in something particular. When people start a debate about time, or freedom, or secrecy, or objectivity, I want to know what specific things and actions they are talking about.


Different ages have had different concepts of the self, organized around what you might broadly think of as technology; for example, as I mentioned, the technology of government censorship in Freud’s time, when the authorities began censoring letters—I want to know how letters, telegrams, newspapers, and magazines got their black (or white) blocked-out areas. I want to know how Freud reacted to this blocking of knowledge, how he began to reshape what he thought the mind was up to when it imposed its own blackout at the borders, so to speak, between the territories of the unconscious and the preconscious, or the preconscious and the conscious. Dangerous thoughts, he argued, arrived like couriers bearing potentially dangerous letters at the censor-guards of our internal mental border controls.

I’ve been interested, too, in the transformation that occurred during and just after World War II. Norbert Wiener and his colleagues and successors pursued a new kind of picture of the self that was based on very concrete experiences that people had with World War II technology. Early in the war, it looked like the Germans were going to bomb the hell out of England, destroy its air defenses, and wreck its industry. If they could destroy the British fighter planes and bomb the production and population centers around the factories, they thought they could invade Britain, and it would have been the end of World War II. The United States would have had no foothold in Europe. It would have transformed the world.

Norbert Wiener knew he had to get involved, so he began to think about how we were going to shoot down these bombers. He began to think about learning in a different kind of way. At the beginning of World War II, Wiener said, in effect: "The physicists are not going to save us, not in the short term, not in this period of the Battle of Britain. We need people who understand telecommunications, people who understand this new concept of information, not abstract physics ideas about signal-to-noise ratios. We need a concrete understanding of time series of data so we can make a machine that can learn the way a German bomber pilot is moving his aircraft around in the sky, so the tracking machine can anticipate where the plane will likely be seven or eight seconds in the future. Only with that knowledge can the anti-aircraft batteries put a shell there and destroy the airplane."

The problem was that you couldn’t shoot at where the airplane was; it was flying at 10,000 or 12,000 feet or higher, and you had to loft a shell to where the plane was going to be. You could make a linear extrapolation of the plane’s general direction, but that doesn’t tell you whether a particular pilot is jinking left and right in certain patterns. So, Wiener thought to make a machine that learns how that particular pilot moves—obviously, you can’t ask the pilot about intention. The Luftwaffe pilot was most certainly not going to tell you and might not even know precisely how to characterize his future actions. The machine has to figure it out from the radar traces of recent past behavior.
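To make that contrast concrete, here is a minimal sketch in Python of the kind of prediction described above: fit a simple least-squares autoregressive model to a pilot’s recent radar fixes, then roll it forward several seconds. This is purely illustrative and not Wiener’s actual predictor; the function names, the model order, and the sample trajectory are all invented for the example.

```python
# A toy autoregressive predictor, illustrative only (not Wiener's design).
import numpy as np

def fit_ar_predictor(track, order=4):
    """Least-squares fit predicting the next fix from the previous `order` fixes.

    track: array of shape (T, 2), one (x, y) radar fix per second.
    Returns a coefficient matrix W of shape (order * 2, 2).
    """
    # Each row of X stacks `order` consecutive past fixes; Y is the fix after.
    X = np.hstack([track[i:len(track) - order + i] for i in range(order)])
    Y = track[order:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict_ahead(track, W, steps=8, order=4):
    """Roll the fitted model forward `steps` seconds past the last fix."""
    history = list(track[-order:])
    for _ in range(steps):
        window = np.hstack(history[-order:])
        history.append(window @ W)  # feed each prediction back in as input
    return np.array(history[order:])

# Invented track: a pilot weaving north-south while flying east at fixed speed.
t = np.arange(60.0)
track = np.stack([120.0 * t, 2000.0 * np.sin(0.2 * t)], axis=1)
W = fit_ar_predictor(track)
print(predict_ahead(track, W))  # estimated fixes one to eight seconds out
```

On a weaving track like this, straight-line extrapolation drifts off the curve, while the fitted model picks up the oscillation from past behavior alone; that, in miniature, is the lesson Wiener drew.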

Wiener did make that learning machine, though the project never completely succeeded. He was able to anticipate around two seconds into the future, but he needed to do much more to make an effective weapon. Still, it taught people a strikingly new lesson: even in the absence of any concrete understanding of the interior life of, say, bomber pilots, you could anticipate from their exterior actions alone what they would do in the future and, in this case, send a projectile up to shoot the plane down.

Wiener used his experience of designing the "predictor" to fold the ideas back and re-describe, in black-box feedback style, how the anti-aircraft operators themselves worked. After the war, Wiener and his successors began to think about other actions, about the way we guide our hands—proprioceptively fed-back motions in which we know where our hand is as it converges on the coffee cup we intend to lift. Instead of thinking of the self as some abstract mental category floating in a vacuum, these concrete problems of machine anticipation shifted attention to inputs and outputs, to statistical characterization and prediction. Intention, in the cybernetic reading, became nothing other than a goal-directed feedback process.
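A toy rendering of that cybernetic reading, offered only as an illustration: the "hand" below senses nothing but its error relative to the "cup" and corrects a fixed fraction of it on each tick. The gain, positions, and tolerance are made-up values, and this is generic proportional feedback rather than a model drawn from Wiener’s own work.

```python
def move_hand_to_cup(hand=0.0, cup=30.0, gain=0.4, tol=0.01):
    """Close the gap by correcting a fixed fraction of the sensed error."""
    steps = 0
    while abs(cup - hand) > tol:     # "proprioception": sense the remaining error
        hand += gain * (cup - hand)  # correction proportional to that error
        steps += 1
    return hand, steps

print(move_hand_to_cup())  # converges on the cup with no inner "intention" at all
```

Viewed from outside, the loop looks goal-directed, which is exactly the point: the "goal" is nothing over and above the feedback structure itself.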

Cybernetics was—as I see it—a transformative moment in our understanding of the human self. I want to look at this black-box anticipatory function, which represented what I think of as the fundamental shift tying together machine learning, cybernetics, and the self. We are now, I believe, in another phase of this series of transformations of the self, again in ways that go back and forth between how we see the self and the technology that surrounds us. What do we understand intelligence to be, such that we want to build artificial versions of it? And then we face AI everywhere, from assistance in writing personal letters and answering daily questions… all the way to algorithmic sentencing and the formation of scientific conclusions.

Many of us writing for and with Edge have been concerned about the future of AI. My biggest proximate concern tends to focus, unsurprisingly perhaps, on the concrete aspects: job displacement and its contribution to ever-growing societal inequality—that worries me hugely. My secondary concern is that with multi-layered neural networks we are ever less able to extract the reasoning behind decisions that bear on judgments about law or, for that matter, science. Far below these concerns (for me) fall the nightmarish imagined scenarios of swarms of killer nanobots—I am less worried by what seem to me rather implausible outcomes. But the proximate future, one that might displace millions of workers from their jobs (like driving), one where judicial sentencing is more dependent on AI algorithms, one where autonomous weapons are supposed to distinguish, willy-nilly, friend from foe… well, these things could tip anger and grievance into an even worse political landscape than we face today.

Back to our main theme: I want to know how, in the early 20th century, in the midst and aftermath of World War II, and then at the turn of the 21st century, our notions of the self made certain technologies seem possible and desirable; and, reciprocally, how our technologies then acted back on what we consider the self to be.

Let me give another example of the materiality of abstraction—from a completely different sector of work. Like many others, I’ve been fascinated by the Enlightenment, the 18th century, which is often described in purely intellectual and almost always European terms—an idea of the intelligibility of nature and its functions. An ode, so to speak, to the primacy of reason. And yet it is a little hard for me to understand it in those terms. Recently, we had an exhibit at the Fogg Art Museum called "The Philosophy Chamber" (an actual wood-paneled room) that assembled teaching, learning, theology, and scientific work in 18th-century Harvard. I wanted to do something for that. I run a small museum, the Collection of Historical Scientific Instruments, whose collection began with the assembly of instruments by Benjamin Franklin and others in the 1770s. We lent some of the brass-and-glass telescopes, microscopes, and orreries that were used to represent, teach, and probe our understanding of the known world.

The curators and scholars who were mounting the exhibit asked me whether I could write an essay, but I had an idea for a film, so I said I’d make a film. The way that people in the 16th, 17th, and 18th centuries understood things was often through the ancestral form of what now appears as collegiate debate. Except that back in the 18th century these were different kinds of things: not mere entertainment and competition, but a probe of knowledge itself. In the mid-to-late 18th century there was a new form of this called a forensic disputation. It went far beyond the older, strictly structured, purely logical syllogistic disputations of earlier times (a syllogism is, for example: all men are mortal, Socrates is a man, therefore Socrates is mortal). The new form allowed you to use the full gamut of argumentation: pathetic (emotional) arguments, reductio arguments, analogies, ethical considerations. Importantly, these disputations were in English, not Latin, and very often on topics of immediate political interest (not only biblical or classical subjects). I thought at first that I’d find some of these disputations from overseas, perhaps Germany or Britain. Then I wondered… well, maybe there’s one that survives from Harvard in the 1770s. It turns out there were hundreds of titles, but one, and only one, verbatim disputation.

That one debate that survives completely intact was published—and there are even traces of it in the archives. It’s called A Forensic Dispute on the Legality of Enslaving the Africans, about whether slavery was compatible with what they called natural law. Here was a chance to see how the urgent question of slavery was addressed in that turbulent moment. It was July 21, 1773, right on the eve of the American Revolution, at Harvard. People came from far and wide to see this public dispute between two graduating seniors, one of whom, Eliphalet Pearson, went on to become the president of Harvard from 1804 to 1806. He was a mountain of a man, a bully, a martinet. People were frightened of him—the students, even his colleagues, were intimidated by him and disliked him intensely. Some called him "Elephant" behind his back; others, later, dubbed him "Megalonyx" after the extinct giant ground sloth described by Jefferson. His opponent was a brilliant young man named Theodore Parsons, who helped found the American Academy of Arts and Sciences.

Both Eliphalet and Theodore grew up in Newburyport, just north of Boston; they had known each other since childhood and were graduating in the same class. They were probably assigned their two sides, and, remarkably, we have in detail what they said. Using their words, I wrote the exchange into a back-and-forth, and then had hovering over it a third person, the completely remarkable Phillis Wheatley, who was kidnapped at the age of eight and sold into slavery in Boston, about three miles from Harvard Yard. By the time she was fourteen she had learned English, Greek, Latin, and astronomy, and she became a fantastically well-known poet. In 1773, her book came out at just about the same time as the two boys’ book. They were all the same age, all twenty-one years old. I turned this into a short play and filmed it with three young actors (undergraduates at Harvard) from the American Repertory Theater. I was delighted to be able to work with Henry Louis Gates as a collaborator on direction—he had written a terrific book that I have long admired about this founder of African-American literature, The Trials of Phillis Wheatley.

So, this is a way of trying to make concrete a turning-point moment in history, when people were arguing about the Enlightenment and the limits of liberty. How did university arguments actually proceed when the stakes were high? Could America have liberty and slavery? Whose liberty would it be? These young people, smart, articulate, inexperienced, sometimes bent around by their own words, were trying to sort out this question—the role of democracy and freedom, Christianity and slavery. It is the kind of specific engagement I want: one that helps make clear arguments in formation, the standards and forms of how one could speak, and with what kinds of tools. When the film and the accompanying article were done, the short, "No More, America," went up on a large wall (the Light Box) in the Fogg, then traveled to the Hunterian Museum in Glasgow. People often don’t realize that two percent of the population of Massachusetts was enslaved in the 1760s. The specificity of a now-archaic form of disputation, the concreteness afforded by film, the way spoken disputation sounds—the words of particular 18th-century young people grappling with these questions—all this is an example of how film, through sound and image, can get at the concrete specificity I am after.