The Language of Mind

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you've just got an AI system that's modeling the world without bringing itself into the equation, it may still need the language of mind to talk about other people, modeling them, and perhaps itself, purely from the third-person perspective. If we're working toward artificial general intelligence, though, it's natural to build AIs with models of themselves, particularly introspective self-models, where they can know in some sense what's going on from the first-person perspective.

Say you do something that negatively affects an AI, something that in an ordinary human would correspond to damage and pain. Your AI is going to say, "Please don't do that. That's very bad." Introspectively, it models someone as having caused one of those states it calls pain. Is it going to be an inevitable consequence of introspective self-models in AI that they start to model themselves as having something like consciousness? My own suspicion is that there is something about the mechanisms of self-modeling and introspection that will naturally lead to these intuitions, so that an AI will model itself as being conscious. The next step is whether an AI of this kind will naturally experience consciousness as somehow puzzling, as something that is potentially hard to square with basic underlying mechanisms and hard to explain.

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness.