The more we learn about cognition, the stronger becomes the case for understanding human thinking as the nexus of several factors, as the emergent property of the interaction of the human body, human emotions, culture, and the specialized capacities of the entire brain. One of the greatest errors of Western philosophy was to buy into the Cartesian dualism of the famous statement, "I think, therefore I am." It is no less true to say "I burn calories, therefore I am." Even better would be to say "I have a human evolutionary history, therefore I can think about the fact that I am."
The mind is never more than a placeholder for things we do not understand about how we think. The more we use the solitary term "mind" to refer to human thinking, the more we underscore our lack of understanding. At least this is an emerging view among researchers in fields as varied as neuroanthropology, emotion research, embodied cognition, radical embodied cognition, dual-inheritance theory, epigenetics, neurophilosophy, and the theory of culture.
For example, in the laboratory of Professor Martin Fischer at the University of Potsdam, extremely interesting research is being done on the connection between the body and mathematical reasoning. Stephen Levinson's group at the Max Planck Institute for Psycholinguistics in Nijmegen has shown how culture can affect navigational abilities—a vital cognitive function in most species. In my own research, I am looking at the influence of culture on the formation of what I refer to as the "dark matter of the mind," a set of knowledges, orientations, biases, and patterns of thought that affect our cognition profoundly and pervasively.
If human cognition is indeed a property that emerges from the intersection of our physical, social, emotional, and data-processing abilities, then intelligence as we know it in humans is almost entirely unrelated to "intelligence" devoid of these properties.
I believe in "Artificial Intelligence" so long as we realize it is artificial. Comparing computational problem-solving, chess-playing, "reasoning," and so on to their human counterparts is like comparing the flight of an Airbus A320 to an eagle's. It is true that they both temporarily defy the pull of gravity, that they are both subject to the physics of the world in which they operate, and so on, but the similarities end there. Bird flight and airplane flight should not be confused.
The reasons that artificial intelligence is not real intelligence are many. First, there is meaning. Some have claimed to have solved this problem, but they haven't really. This "semantics problem" is, as John Searle pointed out years ago, why a computer running a translation program converting English into Mandarin speaks neither English nor Mandarin. There is no computer that can learn a human language—only bits and combinatorics for special purposes. Second, there is the problem of what Searle called "the background," what I refer to as "dark matter," and what some philosophers intend by "tacit knowledge."
We learn to reason in a cultural context, where by culture I mean a system of violable, ranked values, hierarchically structured knowledges, and social roles. We are able to do this not only because we have an amazing ability to perform what appears to be Bayesian inferencing across our experiences, but because of our emotions, our sensations, our proprioception, and our strong social ties. There is no computer with cousins and opinions about them.
Computers may be able to solve a lot of problems. But they cannot love. They cannot urinate. They cannot form social bonds because they are emotionally driven to do so. They have no romance. The popular idea that we may some day be able to upload our memories to the Internet and live forever is silly—we would need to upload our bodies as well. The idea, common in discussions of Artificial Intelligence, that we should fear that machines will control us is but a continuation of the idea of the religious "soul," cloaked in scientific jargon. It detracts from real understanding.
Of course, one ought never to say what science cannot do. Artificial Intelligence may one day become less artificial by recreating bodies, emotions, social roles, values, and so on. But until it does, it will still be useful for vacuum cleaners, calculators, and cute little robots that talk in limited, trivial ways.