JB: Let's talk about what you are calling "resourcefulness."
MINSKY: Our old ideas about our minds have led us all to think about the wrong problems. We shouldn't be so involved with those old suitcase-ideas like consciousness and subjective experience. It seems to me that our first priority should be to understand "what makes human thought so resourceful." That's what my new book, The Emotion Machine, is about.
If an animal has only one way to do something, then it will die if it gets into the wrong environment. But people rarely get totally stuck. We never crash the way computers do. If what you're trying to do doesn't work, then you find another way. If you're thinking about a telephone, you represent it inside your brain in perhaps a dozen different ways. I'll bet that some of those representational schemes are built into us genetically. For example, I suspect that we're born with generic ways to represent things geometrically, so that we can think of the telephone as a dumbbell-shaped thing. But we probably also have other brain-structures that represent those objects' functions instead of their shapes. This makes it easier to learn that you talk into one end of that dumbbell and listen at the other end. We also have ways to represent things in terms of the goals that they serve, which makes it easier to learn that a telephone is good for talking to somebody far away. The ability to use a telephone really is immensely complicated; physically, you must know functional things such as how to put the microphone part close to your mouth and the earphone near your ear. This in turn requires you to have representations of the relations between your own body parts. Also, to converse with someone effectively you need ways to represent your listener's mind. In particular, you have to know which knowledge is private and which belongs to that great body of 'public knowledge' that we sometimes call "plain common sense." Everyone knows that you see, hear, and speak with your eyes, ears, and mouth. Without that commonsense knowledge base, you could not understand any of those structural, functional, or social meanings of that telephone. How much does a telephone cost? Where do you find or get one? When I was a child there were no phones in stores. You rented your phones from AT&T. Now you buy them like groceries.
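The idea of holding one object under several representational schemes, with fallback between them, can be sketched in a few lines. This is purely a toy illustration, not Minsky's actual proposal; the scheme names and the `ask` helper are invented for the example:

```python
# A telephone held under three different representational schemes.
# (Toy sketch: the schemes and keys are invented for illustration.)
telephone = {
    "geometric":  {"shape": "dumbbell"},
    "functional": {"talk_end": "microphone", "listen_end": "earphone"},
    "goal":       {"good_for": "talking to somebody far away"},
}

def ask(obj, question):
    """Try each representation in turn; give up only if all of them fail."""
    for scheme, rep in obj.items():
        if question in rep:
            return scheme, rep[question]
    return None  # stuck only when every scheme is exhausted
```

A question that stumps the geometric scheme, such as what the thing is good for, is still answered by the goal scheme; the system gets stuck only when every representation fails at once.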
A 'meaning' is not a simple thing. It is a complex collection of structures and processes, embedded in a huge network of other such structures and processes. The 'secret' of human resourcefulness lies in the wealth of those alternative representations. Consequently, the sorts of explanations that work so well in other areas of science and technology are not appropriate for psychology, because our minds rarely do things in only one way. Naturally, psychologists are envious of physicists, who have been so amazingly successful at using so few 'basic' laws to explain so much. So it was natural that psychologists, who could scarcely explain anything at all, became consumed with "Physics Envy." Most of them still seek that holy grail: to find some small set of basic laws (of perception, cognition, or memory) with which to explain almost everything.
I'm inclined to assume just the opposite. If the problem is to explain our resourcefulness, then we shouldn't expect to find it in any small set of concise principles. Indeed, whenever I see a 'theory of knowledge' that can be explained in a few concise statements, I assume that it's almost sure to be wrong. Otherwise, our ancestors could have discovered Relativity when they still were like worms or anemones.
For example, how does memory work? When I was a student, I read some psychology books that attempted to explain such things with rules that resembled Newton's laws. But now I presume that we use, instead, hundreds of different brain centers that use different schemes to represent things in different ways. Learning is no simple thing. Most likely, we use a variety of multilevel, cache-like schemes that store information temporarily. Then other systems can search in other parts of the brain for neural networks that are suited for longer-term storage of that particular sort of knowledge. In other words, 'memory' is a suitcase word that we use to describe, or rather to avoid describing, perhaps dozens of different phenomena.
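The cache-then-promote arrangement described here can be caricatured in code. This is purely a toy, not a claim about the brain; the class name, capacity, and promotion rule are all invented for the sketch:

```python
from collections import OrderedDict

class TwoLevelMemory:
    """Toy sketch of a cache-like short-term store that promotes
    repeatedly noticed items into a durable long-term store."""

    def __init__(self, short_term_capacity=3, promote_after=2):
        self.short_term = OrderedDict()  # item -> times noticed (recency-ordered)
        self.long_term = set()           # survives eviction
        self.capacity = short_term_capacity
        self.promote_after = promote_after

    def notice(self, item):
        if item in self.long_term:
            return "long-term"           # already durably stored
        count = self.short_term.pop(item, 0) + 1
        if count >= self.promote_after:
            self.long_term.add(item)     # promoted out of the cache
            return "promoted"
        self.short_term[item] = count    # most recent item goes to the end
        if len(self.short_term) > self.capacity:
            self.short_term.popitem(last=False)  # forget the oldest item
        return "short-term"
```

An item noticed only once is soon pushed out by newer items, while one noticed repeatedly gets copied into the long-term store before the cache forgets it.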
We use 'consciousness' in many ways to speak of many different things. Were you conscious that you just smiled? Are you conscious of being here in this room? Were you conscious of what you were saying, or of how you were moving your hands? Some philosophers speak about consciousness as though some single mysterious entity connects our minds with the rest of the world. But 'consciousness' is only a name for a suitcase of methods that we use for thinking about our own minds. Inside that suitcase are assortments of things whose distinctions and differences are confused by our giving them all the same name. I suspect that these include many different processes that we use to keep track of what we've been doing and thinking, which might be the reason why we use the same word for them all. Many of them exploit the information that's held in the cache-like systems that we call short-term memories. When I ask if you're conscious of what you just did, that's almost the same as asking whether you 'remember' doing it. If you answer "yes," it must be because 'you' have access to some record of having done it. But if I ask how you did what you did, you usually cannot answer, because the models that you make of yourself don't have access to any such memories.
Accordingly, I don't see consciousness as holding one great, big, wonderful mystery. Instead, it's a large collection of useful schemes that enable our resourcefulness. Any machine that can think effectively will need access to descriptions of what it's done recently, and of how these relate to its various goals. For example, you'd need these to keep from getting stuck in a loop whenever you fail to solve a problem. You have to remember what you did: first so you won't just repeat it, and then so that you can figure out just what went wrong and accordingly alter your next attempt.
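That loop-avoidance discipline, remembering each failed attempt so it is never repeated and keeping a record of what went wrong, is easy to sketch. The function and strategy names here are invented for illustration; it is a toy, not a cognitive model:

```python
def try_until_solved(problem, strategies):
    """Work through alternative strategies, recording each failure so
    no attempt is repeated and the record can explain what went wrong."""
    failures = {}                       # strategy name -> what went wrong
    for name, attempt in strategies:
        if name in failures:
            continue                    # remembered: don't just repeat it
        try:
            return attempt(problem), failures
        except ValueError as err:       # this attempt didn't work
            failures[name] = str(err)   # keep the record for the next try
    return None, failures               # exhausted, but not stuck in a loop
```

Without the `failures` record the same broken strategy could be tried forever; with it, each failure is tried once, and the record is available afterward to diagnose what went wrong.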
We also use 'consciousness' for all sorts of ideas about what we are. Most of these are based on old myths, superstitions, philosophies, and other acquired collections of memes. We use these in part to prevent ourselves from trying to understand how we work, and in older times that was useful, because such a quest would have been hopeless. For example, I see that lamp in this room. That perception seems utterly simple to me, so direct and immediate that the process seems quite irreducible. You just look at it and see what it is. But today we know much more about what actually happens when you see a lamp. It involves processes in many parts of the brain, and in many billions of neurons. Whatever traces those processes leave, they're not available to the rest of you. Thus the parts of you that might try to explain why and how you do what you do don't have good data for doing that job. When you ask yourself how you recognize things, or how you choose the words you say, you have no way to find out directly. It's as though your seeing and speaking machines were located in some unobservable place. You can only observe their external behaviors; you have no access to their interior. This, I think, is why we so like the idea that thinking takes place in a mental world that is separate from the world that contains our bodies and similar 'real' things. That's why most people are 'dualists.' They've never been shown good alternatives.
Now all this is about to change. In another 20 or 50 years, you'll be able to put on your head a cap that will show what every neuron is doing. (This is Dan Dennett's 'autocerebroscope.') Of course, if this were presented in too much detail, we wouldn't be able to make sense of it. Such an instrument won't be of much use until we can also equip it with a Semantic Personalizer for translating its output into forms that are suited to your very own individual internal representations. Then, for the first time, we'll become capable of some 'genuine introspection.' For the first time we'll be really self-conscious. Only then will we be able to wean ourselves from dualism.
When nanotechnology starts to mature, you'll be able to shop at the local mind-ware store for intellectual implant-accessories. We can't yet predict what forms they will have. Some might be pills that you swallow. Some might live in the back of your neck (as in The Puppet Masters), using billions of wires inside your brain to analyze your neural activities. Finally, those devices will transmit their summaries to the most appropriate other parts of your brain. Then, for the first time, we could really become 'self-conscious.' For the first time, you'll really be able to know (sometimes for better, and sometimes for worse) what actually caused you to do what you did.
In this sense of having access to how we work, people are not really conscious yet, because their 'insights' are still so inaccurate. Some computer programs already keep better records of what they've been doing; however, they're not nearly as smart as we are. Computers are not so resourceful yet, because those programs don't have good enough ways to exploit that information. It's a popular myth that consciousness is almost the same thing as thinking. Having access to information is not the same as knowing how to use it.