Chapter 9 "INFORMATION IS SURPRISES"

Chapter 9 "INFORMATION IS SURPRISES"

Roger Schank [5.7.96]

Marvin Minsky: Roger Schank has pioneered many important ideas about how knowledge might be represented in the human mind. In the early 1970s, he developed a concept of semantics that he called "conceptual dependency," which plays an important role in my book The Society of Mind. He's also developed other paradigms, involving representing knowledge in various types of networks, scripts, and storylike forms.

__________

ROGER SCHANK is a computer scientist and cognitive psychologist; director of the Institute for the Learning Sciences, at Northwestern University; John Evans Professor of Electrical Engineering and Computer Science, and professor of psychology and of education and social policy; author of fourteen books on creativity, learning, and artificial intelligence, including The Creative Attitude: Learning to Ask and Answer the Right Questions, with Peter Childers (1988), Dynamic Memory (1982), Tell Me A Story (1990), and The Connoisseur's Guide to the Mind (1991).

[Roger Schank:] My work is about trying to understand the nature of the human mind. In particular, I'm interested in building models of the human mind on the computer, and especially working on learning, memory, and natural-language processing. I'm interested in how people understand sentences, how they remember things, how they get reminded of one event by another, and how they learn from one experience and use it to help them in other events. Most people in the field associate me with the idea that there are mental structures called "scripts," which help you understand a sequence of events and allow you to make inferences from those events — inferences that essentially guide your plans or behavior through those events.

Information is surprises. We all expect the world to work out in certain ways, but when it does, we're bored. What makes something worth knowing is organized around the concept of expectation failure. Scripts are interesting not when they work but when they fail. When the waiter doesn't come over with the food, you have to figure out why; when the food is bad or the food is extraordinarily good, you want to figure out why. You learn something when things don't turn out the way you expected.
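
A minimal sketch of this idea in Python (not the actual script machinery; the restaurant events and the strict in-order matching rule are invented for illustration):

```python
# A minimal sketch, not Schank's actual representation: a script as an
# ordered list of expected events, with any deviation flagged as an
# expectation failure, the part of the experience worth learning from.

RESTAURANT_SCRIPT = ["enter", "be seated", "read menu", "order",
                     "food arrives", "eat", "pay", "leave"]

def expectation_failures(observed, script=RESTAURANT_SCRIPT):
    """Match observed events against the script in order, collecting
    the points where reality diverged from expectation."""
    failures = []
    position = 0
    for event in observed:
        if position < len(script) and event == script[position]:
            position += 1                       # expectation met: no news here
        else:
            failures.append((position, event))  # surprise: something to explain
    return failures

print(expectation_failures(["enter", "be seated", "read menu", "order",
                            "food arrives", "eat", "pay", "leave"]))  # []
print(expectation_failures(["enter", "be seated", "read menu", "order",
                            "waiter never returns"]))
# [(4, 'waiter never returns')]
```

The uneventful visit returns nothing worth remembering; the failed one returns exactly the point that demands an explanation.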

The most important thing to understand about the mind is that it's a learning device. We're constantly trying to learn things. When people say they're bored, what they mean is that there's nothing to learn. They get unbored fast when there's something to learn. The important thing about learning is that you can learn only at a level slightly above where you are. You have to be prepared.

My most interesting invention is probably my theory of MOPs and TOPs — memory-organization packets and theme-organization packets — which is basically about how human memory is organized: any experience you have in life is organized by some kind of conceptual index that's a characterization of the important points of the experience. What I've been trying to do is understand how memory constantly reorganizes, and I've been building things called dynamic memories. My most important work is the attempt to get computers to be reminded the way people are reminded.
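
A toy illustration of the indexing idea, with invented feature names ("goal", "plan", "outcome") standing in for whatever a real dynamic memory would abstract out of an experience:

```python
# A toy dynamic memory: every episode is filed under a conceptual index
# that characterizes its important points, and a new episode with the
# same index retrieves the old ones: a "reminding". The feature names
# here are invented stand-ins for illustration.

from collections import defaultdict

class DynamicMemory:
    def __init__(self):
        self.by_index = defaultdict(list)   # conceptual index -> episodes

    @staticmethod
    def index(episode):
        # Abstract away surface details; keep only the important points.
        return (episode["goal"], episode["plan"], episode["outcome"])

    def store(self, episode):
        self.by_index[self.index(episode)].append(episode)

    def remind(self, new_episode):
        """Past experiences filed under the same conceptual index."""
        return list(self.by_index[self.index(new_episode)])

memory = DynamicMemory()
memory.store({"goal": "arrive on time", "plan": "cut it close",
              "outcome": "arrived late", "surface": "missed flight, O'Hare"})
print(memory.remind({"goal": "arrive on time", "plan": "cut it close",
                     "outcome": "arrived late",
                     "surface": "missed this morning's train"}))
# The old flight episode comes back, triggered by the new train episode.
```

The hard, open problem is the index function itself: deciding which points of an experience are the important ones.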

I also made early contributions to the field of natural-language processing, where I went head to head with the linguists, who were working on essentially syntactical models of natural language. I was interested in conceptual models of natural language. I was interested in the question of how, when you understand a sentence, you extract meaning from that sentence independent of language.

I've gotten into lots of arguments with linguists who thought that the important question about language was its syntactic structure, its formal properties. I'm what is often referred to in the literature as "the scruffy"; I'm interested in all the complicated, un-neat phenomena there are in the human mind. I believe that the mind is a hodge-podge of a range of oddball things that cause us to be intelligent, rather than the opposing view, which is that somehow there are neat, formal principles of intelligence.

An example I used in my book Dynamic Memory is the case of the steak and the haircut. The story is that I was complaining to a friend that my wife didn't cook steak the way I liked it — she always overcooked it. My friend said, "Well, that reminds me of the time I couldn't get my hair cut as short as I wanted it, thirty years ago in England." The question I ask is, How does such reminding happen and why does it happen? Start with the "how": what are the connections between the steak and the haircut? If you look at it on a conceptual level, there's an identical index match: we each asked somebody who had agreed to be in a service position to perform that service, and they didn't do it the way we wanted it. There are a number of questions you can ask. First, how do we construct such indices? Obviously, my friend constructed such an index in order to find, in his own mind, the story that had the same label. Second, why do you construct them? And the answer is that you're trying to understand the universe and you need to match incoming events to past experiences. This is something I call "case-based reasoning." The idea that you would then make that match obviously has a purpose. It's not hard to understand what the purpose would be; the purpose is learning. How else would you learn from new experiences?

The case-based-reasoning model says you process a new experience by constructing some very abstract label for it, and that label is the index into memory. Many things in memory have been labeled that way; you find them and you make comparisons, almost like a scientist, between the old experience and the new experience, to see what you can learn from the old experience to help you understand the new experience. When you finish that process, you can go back into your mind and add something that will help fix things. For example, I can imagine my friend saying, "Well, I guess that experience I had in England wasn't so unusual; there really are a lot of times when people don't do things because they think it would be too extreme." Sure enough, I go back and check with my wife, and the reason she overcooks the steak is that she thinks I want it too rare.
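
A hedged sketch of that cycle, run on the steak-and-haircut case; the index features and the printed "lesson" are invented stand-ins for whatever a real comparison process would produce:

```python
# The case-based-reasoning loop described above: label the new
# experience, retrieve old cases stored under the same label, compare
# them for a lesson, then file the new case back into memory.
# The index features and the "lesson" text are illustrative only.

memory = {}   # conceptual index -> list of past cases

def conceptual_index(case):
    # A very abstract label: what was asked of whom, and how it failed.
    return (case["request"], case["provider"], case["failure"])

def process(new_case):
    key = conceptual_index(new_case)
    for old in memory.get(key, []):
        # Comparison step: the old experience illuminates the new one.
        print("Reminded of:", old["surface"])
        print("Candidate lesson: providers tone down requests",
              "they judge too extreme.")
    memory.setdefault(key, []).append(new_case)   # memory grows and reorganizes

process({"request": "service done to an extreme",
         "provider": "someone who agreed to serve",
         "failure": "result less extreme than asked",
         "surface": "English barber left the hair too long"})
process({"request": "service done to an extreme",
         "provider": "someone who agreed to serve",
         "failure": "result less extreme than asked",
         "surface": "wife overcooked the steak meant to be rare"})
# The second call prints the haircut reminding, as in the story above.
```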

One of the problems we've had in AI is that in the early years — in the sixties and seventies — you could build programs that seemed pretty exciting. You could get a program to understand a sentence, or translate a sentence. Twenty years later, it's not exciting any more. You've got to build something real, and in order to build real things you have to work with real problems. Understanding how learning might take place when people are telling stories to each other; understanding how somebody might produce a sentence, or how somebody might make an inference, or how somebody might make an explanation: those kinds of things have interested me, whereas in AI your average person was much more interested in the formal properties of vision, say, or building robotic systems, or proving theorems, or things that are more logically based.

What I've learned in twenty years of work on artificial intelligence is that artificial intelligence is very hard. This may sound like a strange thing to say, but there's a sense in which you have only so many years to live, and if we're going to build very intelligent machines it may take a lot longer to do than I personally have left in life. The issue is that machines have to have a tremendous amount of knowledge, a tremendous amount of memory; the software-engineering problems are phenomenal. I'm still interested in AI as a theoretic enterprise — I'm as interested in cognitive science and the mind as I've ever been — but since I'm a computer scientist, I like to build things that work.

One thing that's clear to me about artificial intelligence and that, curiously, a lot of the world doesn't understand is that if you're interested in intelligent entities there are no shortcuts. Everyone in the AI business, and everyone who is a viewer of AI, thinks there are going to be shortcuts. I call it the magic-bullet theory: somebody will invent a magic bullet in the garage and put it into a computer, and Presto! the computer's going to be intelligent. Journalists believe this. There are workers in AI who believe it, too; they're constantly looking for the magic bullet. But we became intelligent entities by painstakingly learning what we know, grinding it out over time. Learning about the world for a ten-year-old child is an arduous process. When you talk about how to get a machine to be intelligent, what it has to do is slowly accumulate information, and each new piece of information has to be lovingly handled in relation to the pieces already in there. Every step has to follow from every other step; everything has to be put in the right place, after the previous piece of information. If you want to get a machine to be smart, you're going to have to put into it all the facts it may need; this is the only way to give it the necessary information. It's not going to mysteriously acquire such information on its own.

You can build learning machines, and the learning machine could painstakingly try to learn, but how would it learn? It would have to read the New York Times every day. It would have to ask questions. Have conversations. The concept that machines will be intelligent without that is dead wrong. People are set up to be capable of endless information accumulation and indexing; finding information and connecting it to the next piece of information — that's all anyone is doing.

One of the most interesting issues to me today is education. I want to know how to rebuild the school system. One thing is to look at how people learn, right now, and how the schools work, right now, and see if there's any confluence. In schools today, students are made to read a lot of stuff, and they're lectured on it. Or maybe they see a movie. Then they do endless problems, then they get a multiple-choice test of a hundred questions. The schools are saying, "Memorize all this. We're going to teach you how to memorize. Practice it, we'll drill you on it, and then we're going to test you."

Imagine that this is how I'm going to teach you about food and wine. We're going to read about food and wine, and then I'll show you films about food and wine, and then I'll let you solve problems about the nature of food and wine, like how to decant a bottle of wine, what the optimal color is for a Bordeaux, and so forth. And then I'll give you a test.

Would you learn to appreciate food and wine this way? Would you learn anything about food and wine? The answer is no. Because what you have to do to learn about food and wine is eat and drink. Memorizing all the rules, or discussing the principles of cooking, isn't going to do any good if you don't eat and drink. In fact, it works the other way around. If you eat and drink a lot, I can get you interested in those subjects. Otherwise I can't.

Everything they teach in school is oriented so that they can test it to show that you know it, instead of taking note of the obvious, which is that people learn by doing what people want to do. The more they do, the more curious they get about how to do it better — if they're interested in doing it in the first place. You wouldn't teach a kid to drive by giving him the New York State test manual. If you want to learn how to drive, you have to drive a lot. Most schools do everything but allow kids to experience life. If kids want to learn about what goes on in the real world, they have to go out into the real world, play some role in it, and have that motivate learning. Errors in learning by doing bring out questions, and questions bring out answers.

What kids learn in high school or college is antilearning. By reading Dickens in ninth grade, I learned to hate Dickens. Ten years later, I picked up Dickens and it was interesting, because I was ready to read it. What I learned in high school was something useless — that Dickens is awful. A ninth-grade kid isn't ready for this. Why do they teach it? Because in the nineteenth century that was the literature of the time, and that's when they designed the curriculum still used in practically all schools today.

I don't think there should be a curriculum. What kids should do is follow the interests they have, with an educated advisor available to answer their questions and guide them to topics that follow from the original interest. Wherever you start, you can go somewhere else naturally. The problem is that schools want everyone to be in lockstep: everyone has to learn this on this day and that on that day. School is a wonderful baby-sitter. It lets the parents go to work and keeps the kids from killing each other.

Learning takes place outside of school, not in school, and kids who want to know something have to find out for themselves by asking questions, by finding sources of material, and by discounting anything they learned in school as being irrelevant.

Most teachers feel threatened by questions. Obviously, good teachers love to hear good questions, but the demographics don't allow them to answer all the questions anyway. This is where computers can come in. One-on-one teaching is what matters. In the old days, rich people hired tutors for their kids. The kids had one-on-one teaching, and it worked. Computers are the potential savior of the school system, because they allow one-on-one teaching. Unfortunately, every piece of educational software you see on the market today is stupid, because it was designed to follow the same old curriculum.

At the Institute for the Learning Sciences, at Northwestern, we designed a new computer program to teach biology, in which you get to design your own animal. The National Science Foundation said that this program wouldn't fit into the curriculum, because biology isn't taught in the sixth grade, which is the level at which the program works. Furthermore, since each kid would have a different conversation with the computer, how could tests be given on what was learned?

The real problem is the idea that knowledge is represented as a set of facts. It's not. You might want to know those facts, but it's not the knowing of the facts that's important. It's how you got that knowledge, the things you picked up on the way to getting that knowledge, what motivated the learning of that knowledge. Otherwise what you're learning is just an unrelated set of facts. Knowledge is an integrated phenomenon; every piece of knowledge depends on every other one. School has to be completely redesigned in order to be able to make this happen.

This is where the computer comes in, through computer programs that are knowledgeable and can have conversations with kids about whatever subject the kids want to talk about. Kids can begin to have conversations about biology or history or whatever, and have their interest sustained. What you need are computer programs that can do the kind of one-on-one teaching that a good teacher could do if he or she had the time to do it.

Not long ago, to prepare for a conference, I read Darwin. Doing this reaffirmed my belief in not reading, because if I had read Darwin at any other time in my life I wouldn't have understood him. I was only capable of understanding Darwin in a meaningful way by reading him this time, because I understood something about what his argument was with respect to arguments I was trying to make. I could internalize it. Darwin's very clever. He said all kinds of interesting things that I wouldn't have regarded as relevant twenty years ago.

The issue is reading when you're prepared to read something. For instance, at this moment I'm not thinking about consciousness, so if I read Dan Dennett, he would do one of two things to me. Either he would cause me to react to his thinking about consciousness, which means that I would forever think about consciousness in his metaphor. This is useless to me, if I want to be creative. Or I would reject his theories out of hand and find the book and the subject not worth thinking about. This also is bad. I don't see the point of reading his book unless at this moment I've thought about consciousness and am prepared to see what he thinks. That's my view of reading. The problem is that intellectuals say to each other, "Oh my God, haven't you read X?" It's academic one-upmanship.

The MIT linguist Noam Chomsky represents everything that's bad about academics. He was my serious enemy. It was such an emotional topic for me twenty years ago that at one point I couldn't even talk about it without getting angry. I'm not sure I'm over that. I don't like his intolerant attitude or what I consider tactics that are nothing less than intellectual dirty tricks. Chomsky was the great hero of linguistics. In his view of language, the core of the issue is syntax. Linguistics is all about the study of syntax. Language should be looked at in terms of Chomsky's notion of its "deep structure." Part of Chomsky's cleverness in referring to deep structure was to use these wonderful words in a way that everyone assumed meant something other than what he actually meant.

What Chomsky meant by "deep structure" was that you didn't have to look at the surface structure of a sentence — the nouns and the verbs, and so forth. But what any rational human being would have thought he meant by "deep structure," he emphatically did not mean. You would imagine that a deep structure would refer to the ideas behind the sentence, the meaning behind the sentence. But Chomsky stopped people from working on meaning.

I was sufficiently out of that world so that I could yell and scream and say that meaning is the core of language. I went through every point he ever made, and made fun of each one. He was always an easy target, but he had a cadre of religious academic zealots behind him who essentially would listen to no one else.

Here's an example of an argument I might have had with him in the late sixties. The sentence "John likes books" means that John likes to read. "Oh no," Chomsky might say, "John has a relationship of liking with respect to books, but he might not like to read."

Part of what linguistic understanding is about is understanding meaning: what you can assume to be absolutely true, and what you can assume to be true some of the time, or likely to be true. I call this inference. But Chomsky would say, "No, inference has nothing to do with language, it has to do with memory, and memory has nothing to do with language."

That comment is totally absurd. The psychology of language is what's at issue here. Meaning, inferences, and memory are a very deep part of language. Chomsky explicitly states in his most important book, Aspects of the Theory of Syntax, that memory is not a part of language and that language should be studied in the abstract. Language, for Chomsky, is a formal study, the study of the mathematics of language. I can see someone making arguments about language from a perspective of mathematical theory, but not if you are a founding member of the editorial board of Cognitive Psychology, and not if legions of psychologists are writing articles and conducting experiments based upon your work. Chomsky tried to have it both ways.

In Chomsky's view, the mind should behave according to certain organized principles; otherwise he wouldn't want to study it. I don't share that view. I'll study the mind, and whatever I get is O.K. Let it all be mud. Fine, if that's what it is. There are many scientists who'd like the mind to be scientific. If it isn't scientific — neat and mathematical — they don't want to have to deal with it. Chomsky has always adopted the physicist's philosophy of science, which is that you have hypotheses you check out, and that you could be wrong. This is absolutely antithetical to the AI philosophy of science, which is much more like the way a biologist looks at the world. The biologist's philosophy of science says that human beings are what they are, you find what you find, you try to understand it, categorize it, name it, and organize it. If you build a model and it doesn't work quite right, you have to fix it. It's much more of a "discovery" view of the world, and that's why the AI people and the linguistics people haven't gotten along. AI isn't physics.

Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright © 1995 by John Brockman. All rights reserved.