Meta-thinking

By any reasonable definition of "thinking," I suspect that computers do indeed think. But if computers think, then thinking isn't the unique province of human beings. Is there something else about humans that makes us unique?

Some people would say that what makes human beings unique is the fact that they partake in some sort of divine essence. That may be true, but it's not terribly informative. If we met an intelligent alien species, how would we decide whether they also have this je ne sais quoi that makes a person? Can we say something more informative about the unique features of persons?

What sets human beings apart from the current generation of thinking machines is that humans are capable of thinking about thinking, and of rejecting their current way of thinking if it isn't working for them.

The most striking example of humans thinking about their own thinking was the discovery of logic by the Stoics and Aristotle. These Greek philosophers asked: What are the rules that we're supposed to follow when we are thinking well? It's no accident that 20th-century developments in symbolic logic led to the invention of thinking machines, i.e. computers. Once we became aware of the rules of thinking, it was only a matter of time before we figured out how to make pieces of inanimate matter follow these rules.

Can we take these developments a step further? Can we construct machines that not only think, but that engage in "meta-thought," i.e. thinking about thinking? One intriguing possibility is that for a machine to think about thinking, it will need to have something like free will. And another intriguing possibility is that we are on the verge of constructing machines with free will, namely quantum computers.

What exactly is involved in meta-thought? I'll illustrate the idea from the point of view of symbolic logic. In symbolic logic, a "theory" consists of a language L and some rules R that stipulate which sentences can be deduced from which others. There are then two completely distinct activities that one can engage in. On the one hand, one can reason "within the system," e.g. by writing proofs in the language L, using the rules R. (Existing computers do precisely this: they think within a system.) On the other hand, one can reason "about the system," e.g. by asking whether there are enough rules to deduce all logical consequences of the theory. This latter activity is typically called meta-logic, and is a paradigm instance of meta-thought. It is thinking about the system as opposed to within the system.
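
To make the distinction concrete, here is a minimal sketch in Python, assuming nothing beyond the standard library: a toy "theory" whose language is a set of atomic sentences and whose only deduction rule is a modus-ponens-style rule over stored implications. Everything the derive method does is reasoning within the system; asking whether derive eventually finds every logical consequence of the axioms would be meta-logic, i.e. reasoning about the system. The names Theory and derive are illustrative, not drawn from any existing library.

```python
# A toy deduction system: "thinking within a system."
# The language L is a set of atomic sentence names; the rules R are
# implications of the form (p, q), read "from p you may deduce q."

from dataclasses import dataclass, field


@dataclass
class Theory:
    axioms: set[str] = field(default_factory=set)                     # sentences taken as given
    implications: set[tuple[str, str]] = field(default_factory=set)   # deduction rules (p, q)

    def derive(self) -> set[str]:
        """Reason *within* the system: close the axioms under the rules."""
        known = set(self.axioms)
        changed = True
        while changed:
            changed = False
            for p, q in self.implications:
                if p in known and q not in known:
                    known.add(q)
                    changed = True
        return known


# Example: the machine mechanically deduces a consequence of its axioms.
t = Theory(
    axioms={"socrates_is_human"},
    implications={("socrates_is_human", "socrates_is_mortal")},
)
print(t.derive())  # {'socrates_is_human', 'socrates_is_mortal'}
```

Nothing in this sketch steps outside the system: the program applies the rules it was given, but it cannot ask whether those rules are the right ones, or whether a different theory would serve better.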

But I'm interested in yet another instance of meta-thought: if you've adopted a theory, then you've adopted a language and some deduction rules. But you're free to abandon that language or those rules, if you think that a different theory would suit your purposes better. We haven't yet built a machine that can do this sort of thing, i.e. evaluate and choose among systems. Why not? Perhaps choosing between systems requires free will, emotions, goals, or other things that aren't intrinsic to intelligence per se. Perhaps these further abilities are something that we don't have the power to confer on inanimate matter.