EDGE


EDGE 35 — February 27, 1998



THE THIRD CULTURE

"CONSCIOUSNESS IS A BIG SUITCASE"
A Talk with Marvin Minsky

My goal is making machines that can think—by understanding how people think. One reason we find this hard to do is that our old ideas about psychology are mostly wrong. Most words we use to describe our minds (like "consciousness", "learning", or "memory") are suitcase-like jumbles of different ideas. Those old ideas were formed long ago, before 'computer science' appeared. It was not until the 1950s that we began to develop better ways to help us think about complex processes.


"The Third Culture"
by Kevin Kelly

...science has always been a bit outside society's inner circle. The cultural center of Western civilization has pivoted around the arts, with science orbiting at a safe distance. When we say "culture," we think of books, music, or painting. Since 1937 the United States has anointed a national poet laureate but never a scientist laureate. Popular opinion has held that our era will be remembered for great art, such as jazz. Therefore, musicians are esteemed. Novelists are hip. Film directors are cool. Scientists, on the other hand, are ...nerds.


EDGE IN THE NEWS

Digerati chronicler John Brockman hand-picked the best of breed at last week's TED [Technology, Entertainment, Design] conference to attend his yearly soiree, where technology's philosopher-kings and queens mused on all things Internet, multimedia and business.

From "World Domination, Corporate Cubism and Alien Mind Control at Digerati Dinner", Upside.Com , February 23, 1998 by Trish Williams


THE REALITY CLUB

J.C. Herz and Reuben Hersh on Verena Huber-Dyson


(9,226 words)


John Brockman, Editor and Publisher | Kip Parent, Webmaster



THE THIRD CULTURE



" CONSCIOUSNESS IS A BIG SUITCASE"
A Talk with Marvin Minsky


"[People] like themselves just as they are," says Marvin Minsky. "Perhaps they are not selfish enough, or imaginative or ambitious. Myself, I don't much like how people are now. We're too shallow, slow, and ignorant. I hope that our future will lead us to ideas that we can use to improve ourselves."

Marvin believes that it is important that we "understand how our minds are built, and how they support the modes of thought that we like to call emotions. Then we'll be better able to decide what we like about them, and what we don't—and bit by bit we'll rebuild ourselves."

Marvin Minsky is the leading light of AI—that is, artificial intelligence. He sees the brain as a myriad of structures. Scientists who, like Minsky, take the strong AI view believe that a computer model of the brain will be able to explain what we know of the brain's cognitive abilities. Minsky identifies consciousness with high-level, abstract thought, and believes that in principle machines can do everything a conscious human being can do.

"Marvin Minsky is the smartest person I've ever known," computer scientist and cognitive researcher Roger Schank points out. "He's absolutely full of ideas, and he hasn't gotten one step slower or one step dumber. One of the things about Marvin that's really fantastic is that he never got too old. He's wonderfully childlike. I think that's a major factor explaining why he's such a good thinker. There are aspects of him I'd like to pattern myself after. Because what happens to some scientists is that they get full of their power and importance, and they lose track of how to think brilliant thoughts. That's never happened to Marvin."

JB

MARVIN MINSKY is a mathematician and computer scientist; Toshiba Professor of Media Arts and Sciences at the Massachusetts Institute of Technology; cofounder of MIT's Artificial Intelligence Laboratory, Logo Computer Systems, Inc., and Thinking Machines, Inc.; laureate of the Japan Prize (1990), that nation's highest distinction in science and technology; author of seven books, including The Society of Mind.


"CONSCIOUSNESS IS A BIG SUITCASE"
A Talk with Marvin Minsky


MINSKY: My goal is making machines that can think—by understanding how people think. One reason we find this hard to do is that our old ideas about psychology are mostly wrong. Most words we use to describe our minds (like "consciousness", "learning", or "memory") are suitcase-like jumbles of different ideas. Those old ideas were formed long ago, before 'computer science' appeared. It was not until the 1950s that we began to develop better ways to help us think about complex processes.

Computer science is not really about computers at all, but about ways to describe processes. As soon as those computers appeared, this became an urgent need. Soon after that we recognized that this was also what we'd need to describe the processes that might be involved in human thinking, reasoning, memory, and pattern recognition, etc.

JB: You say 1950, but wouldn't this be preceded by the ideas floating around the Macy Conferences in the '40s?

MINSKY: Yes, indeed. Those new ideas were already starting to grow before computers created a more urgent need. Before programming languages, mathematicians such as Emil Post, Kurt Gödel, Alonzo Church, and Alan Turing already had many related ideas. In the 1940s these ideas began to spread, and the Macy Conference publications were the first to reach more of the technical public. In the same period, there were similar movements in psychology, as Sigmund Freud, Konrad Lorenz, Nikolaas Tinbergen, and Jean Piaget also tried to imagine advanced architectures for 'mental computation.' In the same period, in neurology, there were my own early mentors—Nicholas Rashevsky, Warren McCulloch and Walter Pitts, Norbert Wiener, and their followers—and all those new ideas began to coalesce under the name 'cybernetics.' Unfortunately, that new domain was mainly dominated by continuous mathematics and feedback theory. This made cybernetics slow to evolve more symbolic computational viewpoints, and the new field of Artificial Intelligence headed off to develop distinctly different kinds of psychological models.

JB: Gregory Bateson once said to me that the cybernetic idea was the most important idea since Jesus Christ.

MINSKY: Well, surely it was extremely important in an evolutionary way. Cybernetics developed many ideas that were powerful enough to challenge the religious and vitalistic traditions that had for so long protected us from changing how we viewed ourselves. These changes were so radical as to undermine cybernetics itself. So much so that the next generation of computational pioneers—the ones who aimed more purposefully toward Artificial Intelligence—set much of cybernetics aside.

Let's get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain. This in turn leads us to regard these as though they were "things" with no structures to analyze. I think this is what leads so many of us to the dogma of dualism—the idea that 'subjective' matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, then this would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!

The first thing wrong with this "argument" is that it starts by assuming what it's trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person's feelings? "Surely so," some philosophers say. "Given that feelings cannot be physically detected, then it is 'logically possible' that some people have none." I regret to say that almost every student confronted with this can find no good reason to dissent. "Yes," they agree. "Obviously that is logically possible. Although it seems implausible, there's no way that it could be disproved."

The next thing wrong is the unsupported assumption that this is even "logically possible." To be sure of that, you'd need to have proved that no sound materialistic theory could correctly explain how a brain could produce the processes that we call "subjective experience." But again, that's just what we were trying to prove. What do those philosophers say when confronted by this argument? They usually answer with statements like this: "I just can't imagine how any theory could do that." That fallacy deserves a name—something like "incompetentium".

Another reason often claimed to show that consciousness can't be explained is that the sense of experience is 'irreducible.' "Experience is all or none. You either have it or you don't—and there can't be anything in between. It's an elemental attribute of mind—so it has no structure to analyze."

There are two quite different reasons why "something" might seem hard to explain. One is that it appears to be elementary and irreducible—as Gravity seemed before Einstein found his new way to look at it. The opposite case is when the 'thing' is so much more complicated than you imagine that you just don't see any way to begin to describe it. This, I maintain, is why consciousness seems so mysterious. It is not that there's one basic and inexplicable essence there. Instead, it's precisely the opposite. Consciousness is an enormous suitcase that contains perhaps 40 or 50 different mechanisms that are involved in a huge network of intricate interactions. The brain, after all, is built by processes that involve the activities of several tens of thousands of genes. A human brain contains several hundred different sub-organs, each of which does somewhat different things. To assert that any function of such a large system is irreducible seems irresponsible—until you're in a position to claim that you understand that system. We certainly don't understand it all now. We probably need several hundred new ideas—and we can't learn much from those who give up. We'd do better to get back to work.

Why do so many philosophers insist that "subjective experience is irreducible"? Because, I suppose, like you and me, they can look at an object and "instantly know" what it is. When I look at you, I sense no intervening processes. I seem to "see" you instantly. The same for almost every word you say: I instantly seem to know what it means. When I touch your hand, you "feel it directly." It all seems so basic and immediate that there seems no room for analysis. The feelings of being seem so direct that there seems to be nothing to explain. I think this is what leads those philosophers to believe that the connections between seeing and feeling must be inexplicable. Of course we know from neurology that there are dozens of processes that intervene between the retinal image and the structures that our brains then build to represent what we think we see. That idea of a separate world for 'subjective experience' is just an excuse for the shameful fact that we don't have adequate theories of how our brains work. This is partly because those brains have evolved without developing good representations of those processes. Indeed, there probably are good evolutionary reasons why we did not evolve machinery for accurate "insights" about ourselves. Our most powerful ways to solve problems involve highly serial processes—and if these had evolved to depend on correct representations of how they themselves work, our ancestors would have thought too slowly to survive.

JB: Let's talk about what you are calling "resourcefulness."

MINSKY: Our old ideas about our minds have led us all to think about the wrong problems. We shouldn't be so involved with those old suitcase-ideas like consciousness and subjective experience. It seems to me that our first priority should be to understand "what makes human thought so resourceful". That's what my new book, The Emotional Machine, is about.

If an animal has only one way to do something, then it will die if it gets in the wrong environment. But people rarely get totally stuck. We never crash like computers do. If what you're trying to do doesn't work, then you find another way. If you're thinking about a telephone, you represent it inside your brain in perhaps a dozen different ways. I'll bet that some of those representational schemes are built into us genetically. For example, I suspect that we're born with generic ways to represent things geometrically—so that we can think of the telephone as a dumbbell-shaped thing. But we probably also have other brain-structures that represent those objects' functions instead of their shapes. This makes it easier to learn that you talk into one end of that dumbbell, and listen at the other end. We also have ways to represent things in terms of the goals that they serve—which makes it easier to learn that a telephone is good to use to talk to somebody far away. The ability to use a telephone really is immensely complicated; physically you must know those functional things such as how to put the microphone part close to your mouth and the earphone near your ear. This in turn requires you to have representations of the relations between your own body parts. Also, to converse with someone effectively you need ways to represent your listener's mind. In particular, you have to know which knowledge is private and which belongs to that great body of 'public knowledge' that we sometimes call "plain common sense." Everyone knows that you see, hear and speak with your eyes, ears, and mouth. Without that commonsense knowledge base, you could not understand any of those structural, functional, or social meanings of that telephone. How much does a telephone cost? Where do you find or get one? When I was a child there were no phones in stores. You rented your phones from AT&T. Now you buy them like groceries.
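
[A toy sketch, in Python, of the "many representations of one thing" idea just described: one object carried under several independent schemes (geometric, functional, goal-based, commonsense), so that when one scheme fails there are others to fall back on. The class and field names are illustrative assumptions, not anything from Minsky's own systems.]

    from dataclasses import dataclass, field

    @dataclass
    class MultiRepresentation:
        """One 'thing' described in several independent ways."""
        name: str
        geometric: dict = field(default_factory=dict)    # shape-based description
        functional: dict = field(default_factory=dict)   # what its parts are for
        goals: list = field(default_factory=list)        # what it helps you achieve
        commonsense: list = field(default_factory=list)  # shared background facts

        def ways_to_think_about(self):
            """Return every non-empty scheme, i.e. every available fallback."""
            schemes = {"geometric": self.geometric, "functional": self.functional,
                       "goals": self.goals, "commonsense": self.commonsense}
            return [name for name, rep in schemes.items() if rep]

    telephone = MultiRepresentation(
        name="telephone",
        geometric={"overall shape": "dumbbell"},
        functional={"one end": "speak into it", "other end": "listen at it"},
        goals=["talk to somebody far away"],
        commonsense=["you speak with your mouth and hear with your ears",
                     "you can buy one in a store"],
    )
    print(telephone.ways_to_think_about())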

A 'meaning' is not a simple thing. It is a complex collection of structures and processes, embedded in a huge network of other such structures and processes. The 'secret' of human resourcefulness lies in the wealth of those alternative representations. Consequently, the sorts of explanations that work so well in other areas of science and technology are not appropriate for psychology—because our minds rarely do things in only one way. Naturally, psychologists are envious of physicists, who have been so amazingly successful at using so few 'basic' laws to explain so much. So it was natural that psychologists, who could scarcely explain anything at all, became consumed with "Physics Envy." Most of them still seek that holy grail: to find some small set of basic laws (of perception, cognition, or memory) with which to explain almost everything.

I'm inclined to assume just the opposite. If the problem is to explain our resourcefulness, then we shouldn't expect to find this in any small set of concise principles. Indeed, whenever I see a 'theory of knowledge' that can be explained in a few concise statements, then I assume that it's almost sure to be wrong. Otherwise, our ancestors could have discovered Relativity, when they still were like worms or anemones.

For example, how does memory work? When I was a student I read some psychology books that attempted to explain such things, with rules that resembled Newton's laws. But now I presume that we use, instead, hundreds of different brain centers that use different schemes to represent things in different ways. Learning is no simple thing. Most likely, we use a variety of multilevel, cache-like schemes that store information temporarily. Then other systems can search other parts of the brain for neural networks that are suited for longer-term storage of that particular sort of knowledge. In other words, 'memory' is a suitcase word that we use to describe—or rather, to avoid describing—perhaps dozens of different phenomena.
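
[Read as an architecture, that suggestion might look like the following Python sketch: a small short-term cache plus several specialized long-term stores, with a consolidation step that moves each item to whichever store suits its kind. This is purely an illustration of the cache-plus-stores idea; none of the names correspond to actual brain structures or to any model of Minsky's.]

    from collections import deque

    class CacheLikeMemory:
        def __init__(self, short_term_capacity=7):
            # Small, fast buffer that holds only the most recent items.
            self.short_term = deque(maxlen=short_term_capacity)
            # Different long-term stores for different kinds of knowledge.
            self.long_term = {"episodes": [], "skills": [], "facts": []}

        def notice(self, kind, item):
            """Everything noticed lands first in the short-term cache."""
            self.short_term.append((kind, item))

        def consolidate(self):
            """Move cached items into whichever store suits that kind of item."""
            while self.short_term:
                kind, item = self.short_term.popleft()
                self.long_term.setdefault(kind, []).append(item)

    memory = CacheLikeMemory()
    memory.notice("facts", "a telephone lets you talk to someone far away")
    memory.notice("episodes", "I pressed the lever and food appeared")
    memory.consolidate()
    print(memory.long_term["episodes"])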

We use 'consciousness' in many ways to speak of many different things. Were you conscious that you just smiled? Are you conscious of being here in this room? Were you conscious about what you were saying, or of how you were moving your hands? Some philosophers speak about consciousness as though some single mysterious entity connects our minds with the rest of the world. But 'consciousness' is only a name for a suitcase of methods that we use for thinking about our own minds. Inside that suitcase are assortments of things whose distinctions and differences are confused by our giving them all the same name. I suspect that these include many different processes that we use to keep track of what we've been doing and thinking—which might be the reason why we use the same word for them all. Many of them exploit the information that's held in the cache-like systems that we call short-term memories. When I ask if you're conscious of what you just did, that's almost the same as asking whether you 'remember' doing that. If you answer "yes" it must be because 'you' have access to some record of having done that. If I ask about how you did what you did, you usually cannot answer that—because the models that you make of yourself don't have access to any such memories.

Accordingly, I don't regard consciousness as holding one great, big, wonderful mystery. Instead it's a large collection of useful schemes that enable our resourcefulness. Any machine that can think effectively will need access to descriptions of what it's done recently, and how these relate to its various goals. For example, you'd need these to keep from getting stuck in a loop whenever you fail to solve a problem. You have to remember what you did—first so you won't just repeat it again, and then so that you can figure out just what went wrong—and accordingly alter your next attempt.
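
[A minimal sketch of that use of self-records: a solver that logs each attempt and its outcome, and consults the log so it never re-tries a method that has already failed on the same problem. The solver and the two stand-in methods are hypothetical illustrations.]

    import random

    def solve(problem, methods, budget=10):
        """Try methods on the problem, keeping a record of what was done so
        that no method already known to have failed is ever repeated."""
        attempts = []   # trace of (method name, outcome): "what I just did"
        failed = set()  # methods already known not to work on this problem
        for _ in range(budget):
            usable = [m for m in methods if m.__name__ not in failed]
            if not usable:
                break   # everything has failed once; time to alter the approach
            method = random.choice(usable)
            result = method(problem)
            attempts.append((method.__name__,
                             "succeeded" if result is not None else "failed"))
            if result is not None:
                return result, attempts
            failed.add(method.__name__)  # remember the failure; don't loop on it
        return None, attempts

    # Two stand-in problem-solving methods, purely for illustration.
    def guess_one(n):
        return 1 if n == 1 else None

    def halve_if_even(n):
        return n // 2 if n % 2 == 0 else None

    print(solve(6, [guess_one, halve_if_even]))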

We also use 'consciousness' for all sorts of ideas about what we are. Most of these are based on old myths, superstitions, philosophies, and other acquired collections of memes. We use these in part to prevent ourselves from trying to understand how we work—and in older times that was useful because that would have been such a hopeless quest. For example, I see that lamp in this room. That perception seems utterly simple to me—so direct and immediate that the process seems quite irreducible. You just look at it and see what it is. But today we know much more about what actually happens when you see a lamp. It involves processes in many parts of the brain, and in many billions of neurons. Whatever traces those processes leave, they're not available to the rest of you. Thus, the parts of you that might try to explain why and how you do what you do, do not have good data for doing that job. When you ask yourself how you recognize things, or how you choose the words you say, you have no way to directly find out. It's as though your seeing and speaking machines were located in some unobservable place. You can only observe their external behaviors, but you have no access to their interior. This is why, I think, we so like the idea that thinking takes place in a mental world that is separate from the world that contains our bodies and similar 'real' things. That's why most people are 'dualists.' They've never been shown good alternatives.

Now all this is about to change. In another 20 or 50 years, you'll be able to put on your head a cap that will show what every neuron is doing. (This is Dan Dennett's 'autocerebroscope.') Of course, if this were presented in too much detail, we wouldn't be able to make sense of it. Such an instrument won't be of much use until we can also equip it with a Semantic Personalizer for translating its output into forms that are suited to your very own individual internal representations. Then, for the first time, we'll become capable of some 'genuine introspection.' For the first time we'll be really self-conscious. Only then will we be able to wean ourselves from dualism.

When nanotechnology starts to mature, then you'll be able to shop at the local mind store for new intellectual implant-accessories. We can't yet predict what forms they will have. Some might be pills that you swallow. Some might live in the back of your neck (as in the Puppet Masters), using billions of wires inside your brain to analyze your neural activities. Finally, those devices will transmit their summaries to the most appropriate other parts of your brain. Then for the first time, we could really become 'self-conscious.' For the first time, you'll really be able to know (sometimes for better, and sometimes for worse) what actually caused you to do what you did.

In this sense of access to how we work, people are not really conscious yet, because their 'insights' are still so inaccurate. Some computer programs already keep better records of what they've been doing. However, they're not nearly as smart as we are. Computers are not so resourceful, yet. This is because those programs don't yet have good enough ways to exploit that information. It's a popular myth that consciousness is almost the same thing as thinking. Having access to information is not the same as knowing how to use it.

JB: Let's talk some more about philosophers and epistemologists.

MINSKY: 'Philosopher' is a suitcase word. We use it both for those who make new theories and for those who teach the history of old theories. We use 'philosophy' for all sorts of theories about the natures of things and minds and values and kinds of arguments. I don't much like those words because their users too often emphasize pre-scientific theories of subjects that science has already further clarified. In fairness, though, philosophy suffers from the same "receding horizon" effect that plagues researchers in Artificial Intelligence. That is, whenever one of their problems gets solved, then it is absorbed by another, more practical profession, such as physics, psychology, engineering, or computer science. So philosophers are too often seen as impractical bumblers, because they are ahead of their time and get no credit for their previous accomplishments.

JB: Can you explain your theory of emotions?

MINSKY: People often use that word to express the idea that there is some deep and essential difference between thinking and feeling. My view is that this is a bad mistake, because emotions are not alternatives to thinking; they are simply different types of thinking. I regard each emotional state to be a different arrangement or disposition of mental resources. Each uses some different combination of techniques or strategies for thinking.

For example, when you are afraid, the parts of your mind that select your goals are biased in a particular way. They assign the highest priority to avoiding certain kinds of things. Similarly, when you're hungry, this means high priorities on food-finding goals. Also, other systems suppress some of your long-range planning mechanisms—and that might contribute to what we describe as a sense of panic or urgency. Being afraid, or being hungry, then, are particular methods of thinking. Similarly, the feeling of pain results from the engagement of certain special resources. If something happens to pinch your toe, then that part of your body gets highest priority and your paramount goal is to find ways to get rid of that activity. Presumably each common emotion involves arousing a variety of particular processes in different brain centers. These in turn will then affect how some other mental resources will be disposed.
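
[One way to make this concrete is as a re-weighting of goal priorities plus the suppression of some resources, as in the Python sketch below. The particular goals, weights, and state names are invented for illustration; this is not Minsky's actual model.]

    # Each "emotion" is modeled as a different bias over the same resources:
    # it raises the urgency of some goals and damps others outright.
    BASELINE_PRIORITIES = {
        "avoid danger": 1.0,
        "find food": 1.0,
        "long-range planning": 1.0,
        "socialize": 1.0,
    }

    EMOTIONAL_DISPOSITIONS = {
        "afraid":  {"avoid danger": 5.0, "long-range planning": 0.1},
        "hungry":  {"find food": 4.0, "long-range planning": 0.5},
        "in pain": {"remove the pinch": 10.0, "long-range planning": 0.05},
    }

    def dispose(state):
        """Return the top goal and the full priorities under the given state."""
        priorities = dict(BASELINE_PRIORITIES)
        for goal, weight in EMOTIONAL_DISPOSITIONS.get(state, {}).items():
            priorities[goal] = weight  # add or override that goal's urgency
        return max(priorities, key=priorities.get), priorities

    top_goal, weights = dispose("afraid")
    print(top_goal)  # "avoid danger" dominates while long-range planning is damped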

Especially, those emotions affect your active selections of goals and plans. When you're in pain you find it hard to work on problems that take a long time. When we try to describe how it feels to hurt, we find it hard to say anything specific about the 'sensation' itself—and that makes it seem inexpressible. However, it's all too easy to speak about how hurting alters how you think. It's easy to carry on endlessly about your frustration at being distracted from your other goals, your concern about not getting your work done, about how this will affect your dependencies and relationships, and your worries about its impact on your other future activities, and so on.

Now, a philosophical dualist might then complain: "You've described how hurting affects your mind—but you still can't express how hurting feels." This, I maintain, is a huge mistake—that attempt to reify 'feeling' as an independent entity, with an essence that's indescribable. As I see it, feelings are not strange alien things. It is precisely those cognitive changes themselves that constitute what 'hurting' is—and this also includes all those clumsy attempts to represent and summarize those changes. The big mistake comes from looking for some single, simple 'essence' of hurting, rather than recognizing that this is the word we use for a complex rearrangement of our disposition of resources.

Of course, this runs against the grain. Usually, when we see an object or hear a word, its 'meaning' seems simple and direct. So we usually expect to be able to describe things without having to construct and describe such complicated cognitive theories. This fictitious apparent simplicity of feelings is why, I think, most philosophers have been stuck for so long—except for a few folks like Aaron Sloman, John McCarthy and Daniel Dennett. When a mental condition seems hard to describe, this could be because the subject simply is more complicated than you thought. The way to get unstuck is to describe architectures with more details. Only then can we imagine how certain situations or stimuli could lead a brain into the activities that we recognize when we feel love or fear, or pain.

JB: Let's talk about the love machine.

MINSKY: One section of The Emotional Machine is about how people acquire new kinds of goals in the context of loving attachments. It seems to me very curious that this has not been a main concern of most theories about the structures of minds. The question of how people learn high-level goals is scarcely ever mentioned at all in most books about psychology.

How does a hungry animal learn new ways to achieve its food-finding goal? Obviously, when it doesn't know what to do, it has to explore—it has to try experiments. If it happens to press a certain lever, and then receives a bit of food, that makes some kind of impression on it. Later, when it is hungry again it will tend to press similar levers. We could summarize this by saying that our animal has learned a new way to achieve its original goal. It has learned that a good sub-goal for finding food is to find and press such a lever.

Most behaviorists studied how an animal with a goal could learn new sub-goals for that goal. But how do we acquire those original goals? In cases like hunger, the answer is clear: such goals can be built in genetically. But how do people acquire new goals that aren't sub-goals of other goals? What could make you adopt a new goal—if it's not to subserve some other old goal?

It seems to me that this could be based on combining these two older schemes: the "imprinting" studied by Konrad Lorenz and the Oedipus complex of Sigmund Freud. In the 1920s Lorenz demonstrated that many infant animals develop a special 'attachment' to a parent. Much earlier, Freud suggested that a human infant becomes attached to (or enamored of) one or more special persons—usually parents or caretakers—who then serve as models for that child's future values and high-level goals. Clearly Freud was basically right, but we still need to ask how that process might work. How do those values get represented or 'introjected'?

My conjecture is that this process employs an adaptation of the ancient imprinting mechanism, which first evolved mainly to promote the offspring's physical safety. The baby animal becomes disturbed when not in the presence of the parent, and this serves to make it quickly learn behavior that makes it stay close by. In humans though, it seems to me, this mechanism later became involved with two new types of learning, whose activities we recognize as emotions called pride and shame.

I maintain that the type of learning connected with pride is used to establish new high-level goals—or what we call positive values. The point is that pride is only evoked when a child is praised by a person to whom it's attached. So it's not quite the same as conventional "positive reinforcement"—which can only reinforce sub-goals. Similarly, if a child is scolded by an attachment person, then that child's current intentions acquire the negative character of a shameful taboo.
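
[A toy sketch of the contrast drawn here: ordinary reinforcement can only attach new sub-goals beneath goals the learner already has, while praise or scolding from an attachment figure installs or forbids top-level goals themselves. The learner class, the figures, and the goals are all hypothetical illustrations, not Minsky's actual architecture.]

    class Learner:
        def __init__(self, attachment_figures):
            self.attachment_figures = set(attachment_figures)
            self.goals = {"find food": []}  # top-level goal -> list of sub-goals
            self.taboos = set()

        def reinforce(self, goal, action):
            """Ordinary conditioning: the action becomes a sub-goal of an old goal."""
            if goal in self.goals:
                self.goals[goal].append(action)

        def praised_by(self, person, current_intention):
            """Praise from an attachment figure creates a new top-level goal (pride)."""
            if person in self.attachment_figures:
                self.goals.setdefault(current_intention, [])

        def scolded_by(self, person, current_intention):
            """Scolding from an attachment figure makes the intention a taboo (shame)."""
            if person in self.attachment_figures:
                self.taboos.add(current_intention)

    child = Learner(attachment_figures={"parent"})
    child.reinforce("find food", "press the lever")       # sub-goal of an old goal
    child.praised_by("parent", "share with your sister")  # brand-new top-level goal
    child.scolded_by("stranger", "draw on the wall")      # ignored: not an attachment
    print(child.goals, child.taboos)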

It would take too long to tell all the details—but I'll emphasize what is different here. I had started by thinking about how to design machines that could learn both goals and sub-goals. It took me some time to see that these might need several different architectures. The old idea of conditioning, working down from goal to sub-goal, needs only a way to recognize when one fails or succeeds to reach a goal. Values, however, need something else—some external source of selection. Then I noticed that this was just what Freud had addressed, in his various theories of infant attachment, and his models of aversion and censorship. My colleagues seem startled when I mention Freud—but I see him as one of the few psychologists who failed to fall prey to Physics Envy. Unlike most of the others, Freud was willing to suppose, when it seemed necessary, that the mind is composed of more than a few processes or compartments. Instead of making desperate and futile attempts to reduce the numbers of different assumptions, he was willing instead to imagine architectures with more structure—and then to face the difficulty of understanding the relations and interconnections. I see him as a pioneer of advanced computer science, very far ahead of his time, because of his many ideas about representations, aliases, censors, suppressors, and about types and structures of memory.

JB: How do you see the theories of emotions that you've pointed out in terms of people's lives?

MINSKY: When as a young child I first heard of psychologists—those people who know how human minds work—I found this somewhat worrisome. They must be awfully powerful; they could make you do whatever they want. Of course, that turned out to be false. Instead, that fearful power resides in our politicians and preachers. If anything, understanding how emotions work makes it harder to exploit them.

In any case, I hope that it will be a good thing when we understand how our minds are built, and how they support the modes of thought that we like to call emotions. Then we'll be better able to decide what we like about them, and what we don't—and bit by bit we'll rebuild ourselves. I don't think that most people will bother with this, because they like themselves just as they are. Perhaps they are not selfish enough, or imaginative or ambitious. Myself, I don't much like how people are now. We're too shallow, slow, and ignorant. I hope that our future will lead us to ideas that we can use to improve ourselves.


"The Third Culture"
by Kevin Kelly


Kevin Kelly wrote the following essay for Science Magazine's "Essays on Science and Society" series, in celebration of the 150th anniversary of that publication. The second essay in the series (following "The Great Asymmetry" by Stephen Jay Gould), it appeared in Science, Volume 279, Number 5353 (13 February 1998), pp. 992-993, and is also available on the Science Online website at http://www.sciencemag.org/cgi/content/full/279/5353/992. It is published here for the third culture mail list by permission of the author.

Kevin Kelly is the executive editor of Wired and author of Out of Control: The New Biology of Machines, Social Systems and the Economic World.


"The Third Culture"

"Science" is a lofty term. The word suggests a process of uncommon rationality, inspired observation, and near-saintly tolerance for failure. More often than not, that's what we get from science. The term "science" also entails people aiming high. Science has traditionally accepted the smartest students, the most committed and self-sacrificing researchers, and the cleanest money—that is, money with the fewest political strings attached. In both theory and practice, science in this century has been perceived as a noble endeavor.

Yet science has always been a bit outside society's inner circle. The cultural center of Western civilization has pivoted around the arts, with science orbiting at a safe distance. When we say "culture," we think of books, music, or painting. Since 1937 the United States has anointed a national poet laureate but never a scientist laureate. Popular opinion has held that our era will be remembered for great art, such as jazz. Therefore, musicians are esteemed. Novelists are hip. Film directors are cool. Scientists, on the other hand, are ...nerds.

How ironic, then, that while science sat in the cultural backseat, its steady output of wonderful products—radio, TV, and computer chips—furiously bred a pop culture based on the arts. The more science succeeded in creating an intensely mediated environment, the more it receded culturally.

The only reason to drag up this old rivalry between the two cultures is that recently something surprising happened: A third culture emerged. It's hard to pinpoint exactly when it happened, but it's clear that computers had a lot to do with it. What's not clear yet is what this new culture means to the original two.

This new third culture is an offspring of science. It's a pop culture based in technology, for technology. Call it nerd culture. For the last two decades, as technology supersaturated our cultural environment, the gravity of technology simply became too hard to ignore. For this current generation of Nintendo kids, their technology is their culture. When they reached the point (as every generation of youth does) of creating the current fads, the next funny thing happened: Nerds became cool.

Nerds now grace the cover of Time and Newsweek. They are heroes in movies and Man of the Year. Indeed, more people wanna be Bill Gates than wanna be Bill Clinton. Publishers have discovered that cool nerds and cool science can sell magazines to a jaded and weary audience. Sometimes it seems as if technology itself is the star, as it is in many special-effects movies. There's jargon, too. Cultural centers radiate new language; technology is a supernova of slang and idioms swelling the English language. Nerds have contributed so many new words—most originating in science—that dictionaries can't track them fast enough.

This cultural realignment is more than the wisp of fashion, and it is more than a mere celebration of engineering. How is it different? The purpose of science is to pursue the truth of the universe. Likewise, the aim of the arts is to express the human condition. (Yes, there's plenty of overlap.) Nerd culture strays from both of these. While nerd culture deeply honors the rigor of the scientific method, its thrust is not pursuing truth, but pursuing novelty. "New," "improved," "different" are key attributes for this technological culture. At the same time, while nerd culture acknowledges the starting point of the human condition, its hope is not expression, but experience. For the new culture, a trip into virtual reality is far more significant than remembering Proust.

Outlined in the same broad strokes, we can say that the purpose of nerdism, then, is to create novelties as a means to truth and experience. In the third culture, the way to settle the question of how the mind works is to build a working mind. Scientists would measure and test a mind; artists would contemplate and abstract it. Nerds would manufacture one. Creation, rather than creativity, is the preferred mode of action. One would expect to see frenzied, messianic attempts to make stuff, to have creation race ahead of understanding, and this we see already. In the emerging nerd culture a question is framed so that the answer will usually be a new technology.

The third culture creates new tools faster than new theories, because tools lead to novel discoveries quicker than theories do. The third culture has little respect for scientific credentials because while credentials may imply greater understanding, they don't imply greater innovation. The third culture will favor the irrational if it brings options and possibilities, because new experiences trump rational proof.

If this sounds like the worst of pop science, in many ways it is. But it is also worth noting how deeply traditional science swirls through this breed. A lot of first-class peer-reviewed science supports nerdism. The term "third culture" was first coined by science historian C. P. Snow. Snow originated the concept of dueling cultures in his famous book, The Two Cultures [1].

But in an overlooked second edition to the book published in 1964, he introduced the notion of a "third culture." Snow imagined a culture where literary intellectuals conversed directly with scientists. This never really happened. John Brockman, a literary agent to many bright scientists, resurrected and amended Snow's term. Brockman's third culture meant a streetwise science culture, one where working scientists communicated directly with lay people, and the lay challenged them back. This was a peerage culture, a peerage that network technology encouraged.

But the most striking aspect of this new culture was its immediacy. "Unlike previous intellectual pursuits," Brockman writes, "the achievements of the third culture are not the marginal disputes of a quarrelsome mandarin class: They will affect the lives of everybody on the planet." [2]

Technology is simply more relevant than footnotes.

There are other reasons why technology has seized control of the culture. First, the complexity of off-the-shelf discount computers has reached a point where we can ask interesting questions such as: What is reality? What is life? What is consciousness? and get answers we've never heard before. These questions, of course, are the same ones that natural philosophers and scientists of the first two cultures have been asking for centuries. Nerds get new answers to these ancient and compelling questions not by rehashing Plato or by carefully setting up controlled experiments but by trying to create an artificial reality, an artificial life, an artificial consciousness—and then plunging themselves into it. Despite the cartoon rendition I've just sketched, the nerd way is a third way of doing science.

Classical science is a conversation between theory and experiment. A scientist can start at either end—with theory or experiment—but progress usually demands the union of both a theory to make sense of the experiments and data to verify the theory. Technological novelties such as computer models are neither here nor there. A really good dynamic computer model—of the global atmosphere, for example—is like a theory that throws off data, or data with a built-in theory. It's easy to see why such technological worlds are regarded with such wariness by science—they seem corrupted coming and going. But in fact, these models yield a third kind of truth, an experiential synthesis—a parallel existence, so to speak. A few years ago when Tom Ray, a biologist turned nerd, created a digital habitat in a small computer and then loosed simple digital organisms in it to procreate, mutate, and evolve, he was no longer merely modeling evolution or collecting data. Instead, Ray had created a wholly new and novel example of real evolution.

That's nerd science. As models and networked simulations take on further complexity and presence, their role in science will likewise expand and the influence of their nerd creators increase.

Not the least because technological novelty is readily accessible to everyone. Any motivated 19-year-old can buy a PC that is fast enough to create something we have not seen before. The nerds who lovingly rendered the virtual dinosaurs in the movie Jurassic Park, by creating a complete muscle-clad skeleton moving beneath virtual skin, discovered a few things about dinosaur locomotion and visualized dinosaurs in motion in a way no paleontologist had done before. It is this easy, noncertified expertise and the unbelievably cheap access to increasingly powerful technology that is also driving nerd science. Thomas Edison, the founder of Science magazine, was a nerd if ever there was one. Edison—lacking any formal degree, hankering to make his own tools, and possessing a "just do it" attitude—fits the profile of a nerd. Edison held brave, if not cranky, theories, yet nothing was as valuable to him as a working "demo" of an invention. He commonly stayed up all night to hack together contraptions, powered by grand entrepreneurial visions (another hallmark of nerds), yet he didn't shirk from doing systematic scientific research. One feels certain that Edison would have been at home with computers and the Web and all the other techno-paraphernalia now crowding the labs of science.

Techno-culture is not just an American phenomenon, either. The third culture is as international as science. As large numbers of the world's population move into the global middle class, they share the ingredients needed for the third culture: science in schools; access to cheap, hi-tech goods; media saturation; and most important, familiarity with other nerds and nerd culture. I've met Polish nerds, Indian nerds, Norwegian nerds, and Brazilian nerds. Not one of them would have thought of themselves as "scientists." Yet each of them was actively engaged in the systematic discovery of our universe.

As nerds flourish, science may still not get the respect it deserves. But clearly, classical science will have to thrive in order for the third culture to thrive, since technology is so derivative of the scientific process. The question I would like to posit is: If the culture of technology should dominate our era, how do we pay attention to science? For although science may feed technology, technology is steadily changing how we do science, how we think of science, and what it means to be a scientist. Tools have always done this, but in the last few decades our tools have taken over. The status of the technologist is ascending because for now, and for the foreseeable future, we have more to learn from making new tools than we do from making new concepts or new measurements.

As the eminent physicist Freeman Dyson points out, "The effect of concept-driven revolution is to explain old things in new ways. The effect of tool-driven revolution is to discover new things that have to be explained" (p. 50) [3]. We are solidly in the tool-making era of endlessly creating new things to explain.

While science and art generate truth and beauty, technology generates opportunities: new things to explain; new ways of expression; new media of communications; and, if we are honest, new forms of destruction. Indeed, raw opportunity may be the only thing of lasting value that technology provides us.

It's not going to solve our social ills, or bring meaning to our lives. For those, we need the other two cultures. What it does bring us—and this is sufficient—are possibilities.

Technology now has its own culture, the third culture, the possibility culture, the culture of nerds—a culture that is starting to go global and mainstream simultaneously. The culture of science, so long in the shadow of the culture of art, now has another orientation to contend with, one grown from its own rib. It remains to be seen how the lofty, noble endeavor of science deals with the rogue vernacular of technology, but for the moment, the nerds of the third culture are rising.

The author is at Wired magazine, 520 3rd Street, San Francisco, CA 94107, USA. E-mail: [email protected]

NOTES

1. C. P. Snow, The Two Cultures and the Scientific Revolution (Cambridge Univ. Press, New York, 1959).

2. J. Brockman, The Third Culture (1996). Available at www.edge.org/3rd_culture/index.html.

3. F. Dyson, Imagined Worlds (Harvard Univ. Press, Cambridge, MA, 1997).


THE REALITY CLUB

J.C. Herz and Reuben Hersh on Verena Huber-Dyson


From: J.C. Herz
Submitted: 2.23.98

(Verena Huber-Dyson wrote in EDGE 34:) "That practice, familiarity, experience and experimentation are important prerequisites for successful mathematical activity goes without saying. But less obvious and just as important is a tendency to "day dream", an ability to immerse oneself in contemplation oblivious of all surroundings, the way a very small child will abandon himself to his blocks. Anecdotes bearing witness to the enhancement of creative concentration by total relaxation abound, ranging from Archimedes' inspiration in a bath tub to Alfred Tarski's tales of theorems proved in a dental chair."

This leads into a set of questions about immersion and suspension of disbelief, vis-à-vis media. And the questions are: 1) How active must you be in the construction of experience in order to reach that level of immersion? If you're painting, if you're probing a solution space, if you're exploring "the beautiful slack," daydreaming, you are the architect of that experience, based on very little in the way of outside stimulus—you construct the experience from scratch. With a book, it's slightly less so, but the same principle holds, because a series of squiggly shapes printed on a page is really a very abstract thing, and you have to construct the words, and from there, the concepts, the images, internally. Further along the continuum, you have something like instrumental music, which is a very richly textured sensory experience but still rather abstract in that its meaning isn't specified. And then you have music with lyrics, which is still uni-sensory. And then you have audiovisual media, starting with film, which subsumes your perception by dominating the context, and television, and finally the garden variety web page, which has no cerebral legroom at all.

I would argue that the Net in its text days was closer to a book—much further up the continuum of immersiveness than the current cruise-ship buffet of HTML offerings. And a videogame is somewhere between music and film.

And 2) If a higher level of construction (which is not to say focus, but rather a kind of zen mindfulness) is necessary to bring about that kind of immersion, then how many people are going to be either capable or willing to engage in it? It takes a certain amount of intelligence and, more importantly, a certain amount of trust—in yourself, in the situation—to put yourself in that state, to be receptive to that experience, especially on a frequent and/or regular basis. There are very few people who can do that, and most of them are under the age of ten, and even those are a minority—the daydreamers—and the ones who can hold onto it are fewer still.

Which may change, actually. I think that growing up with videogames, computers, etc. extends the limits of that group outward from the people who read voraciously, draw, etc. to the ones who can't or won't necessarily "make" things but are willing to explore other kinds of imaginary spaces because they're lured in by the eye candy and the hormonal rat buttons. Any way you slice it, the hours drift by, and they become comfortable with that kind of flow.

Which leads to a third question, about the daydreamers: are they born or made?

J.C. HERZ is the author of Joystick Nation: How Videogames Ate Our Quarters, Won Our Hearts, and Rewired Our Minds and Surfing on the Internet, which was described by William Gibson as "post-geographical travel writing."


From: Reuben Hersh
Submitted: 2.27.98

Huber-Dyson's posting is impressive in several ways.

I especially liked "The positive integers are mental constructs. They are tools shaped by the use they are intended for. And through that use they take on a patina of reality."

I found another remark provocative: "Conceptual visioning is an indispensable attendant to mathematical thinking."

The rational number line vs the real number line—how do you envision the difference? Does the rational line have a lot of little holes scattered everywhere? Isn't any vision bound to be wrong and misleading somehow?

How about some of the "monster" simple groups? If there is some geometry associated with them, isn't this understood only after the group has been understood, not in the process of understanding it?

I feel that Huber-Dyson's remark is correct, yet I am unable to pin down what it is really saying.

Perhaps "visualization" doesn't necessarily mean a visual picture, but just some concrete example or interpretation to which we know how to apply some intuitive thinking.

Reuben Hersh

REUBEN HERSH is professor emeritus at the University of New Mexico, Albuquerque. He is the recipient (with Martin Davis) of the Chauvenet Prize and (with Edgar Lorch) the Ford Prize. Hersh is the author (with Philip J. Davis) of The Mathematical Experience, winner of the National Book Award in 1983 and author of the recently published What is Mathematics, Really?



Copyright ©1998 by Edge Foundation, Inc.

