The Space of Possible Minds

Murray Shanahan [5.18.18]

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level intelligence or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans, but can occur independently, or some subset of these can occur on its own in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world, where we very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

MURRAY SHANAHAN is a professor of cognitive robotics at Imperial College London and a senior research scientist at DeepMind.

THE SPACE OF POSSIBLE MINDS

It's interesting to think about the state of artificial intelligence research today, especially in the context of many decades of cycles of hype and disappointment. Of course, it's natural to ask ourselves whether we're in one of these phases of hype that's going to turn into an AI winter, or whether there's something a little bit more going on here.

If we think about what's generating all this hype at the moment, it's machine learning, and in particular, it's machine learning in the context of neural networks. I work in that area of neural networks and machine learning nowadays, but that's not really where my background in artificial intelligence lies.

I grew up in the tradition of good old-fashioned AI, symbolic artificial intelligence. My mentors were people like John McCarthy, whom I knew very well, and Bob Kowalski at Imperial College. The tradition there was symbolic artificial intelligence, so the vision was to create intelligence by constructing machines that acquired representations. These representations were language-like and had a logic-like structure, and what you did with them was inference; you reasoned using these representations. This whole approach was very much critiqued by people like Rodney Brooks, who started to emphasize the issue of embodiment and whole-agent architectures. They felt that the representations were coming out of the heads of the engineers and weren't grounded in an agent that interacted with the real world and acquired data through that means.

I felt very much at the time, in the early '90s, that Brooks had a point, so I moved sideways somewhat and started working more in neuroscience, throwing away the reputation I'd built in symbolic AI. I started getting interested in neuroscience and building computer models of the brain and neural dynamics.

What's happened now is there's been this resurgence of interest in neural networks thanks to the availability of computing power and, in particular, large amounts of data. Of course, you need computing power to process the data. Some of those techniques that previously were only modestly successful, like backpropagation, turned out to work well on certain applications.

Now, we're in a position where we've got this resurgence of interest in AI, this huge amount of commercial interest. We've got this peak of hype again, and people are starting to talk about the big vision of artificial general intelligence, as people often call it now, and human-level AI. People are starting to wonder again if we are going to get there. My angle on that is that when you look at the current generation of deep-learning approaches, there certainly are a number of shortcomings.

When I first started getting into AI again, I got DeepMind's DQN system—their Atari-playing system—up and running in the lab at Imperial. It's an absolutely amazing piece of software because it can learn any of these games out of the box from scratch and get to a human level, or superhuman level in many of them. It's an amazing piece of software, but when you watch it learning to play Space Invaders, you're struck by how slow it is at learning. I was sitting in front of the screen watching the little gunship at the bottom of the screen slowly flickering around for hours and hours being utterly useless. Eventually, it gets better. Then, of course, it gets really good. But it's slow. What strikes you is that the computer hasn't got any real understanding of what's going on in the game. It doesn't have the kind of understanding that we have when we play one of these games. We think of this game in terms of objects, movements, and little rules about what happens when, what follows what, what events happen, and so on.
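
Roughly speaking, the objective DQN optimizes, as described in DeepMind's Atari work, is a single temporal-difference error: the network's value estimate for the action it took is pulled toward the reward it received plus the discounted value of the best next action, with transitions sampled from a replay memory and compared against a slowly updated target network. The sketch below is a rough statement of that objective, not a full specification, but it makes the point: everything the agent comes to know about the game has to be squeezed through that one scalar error signal, which is one way of seeing why the learning is so slow.

```latex
% A rough sketch of the DQN training objective: minimize the squared
% temporal-difference error against a target network with parameters
% \theta^-, over transitions (s, a, r, s') drawn from the replay memory D.
L(\theta) \;=\; \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}
  \Big[ \big( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \big)^{2} \Big]
```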

That got me to thinking about my dark past in symbolic artificial intelligence. It got me thinking about how important many of those principles were that people like John McCarthy had drummed into our heads back in the day; for example, the idea that these sorts of language-like representations, propositional representations, have a compositional structure, so you can break them down into parts that represent different aspects of the scene and they can be recombined in many different ways. So, you've got reusable units of knowledge. That's what you need to be able to build up an actual understanding of the world around us.
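
As a toy illustration of what reusable units of knowledge buys you, with predicate and entity names invented purely for this example: a handful of predicates and entities recombine into many distinct propositions, and a rule stated once over the parts applies to every matching combination.

```python
# A toy illustration of compositional, language-like representations:
# a few reusable predicates and entities recombine into many propositions.
# Predicate and entity names are invented for illustration only.

from itertools import permutations

entities = ["ship", "invader", "bullet"]
predicates = ["left_of", "above", "moving_toward"]

# A proposition is just a structured combination of reusable parts.
propositions = {
    (pred, a, b)
    for pred in predicates
    for a, b in permutations(entities, 2)
}
print(len(propositions), "propositions from",
      len(predicates), "predicates and", len(entities), "entities")

# A rule stated once over the parts applies to every matching combination:
# if a bullet is moving toward something, predict that it will hit it.
facts = {("moving_toward", "bullet", "invader"),
         ("left_of", "ship", "invader")}
predictions = {
    ("will_hit", x, y)
    for (pred, x, y) in facts
    if pred == "moving_toward" and x == "bullet"
}
print(predictions)
```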

The current generation of deep-learning systems, at least at that point, was nowhere near being able to do that sort of thing. I got very interested in how you could rehabilitate some of these ideas from good old-fashioned symbolic AI in a contemporary neural network context. That's the kind of thing that I've been interested in. How can we build artificial intelligence and hopefully move towards increasingly sophisticated AI by combining elements of classical symbolic AI and neural network deep learning? That's one technical question I'm very interested in.

Then there are philosophical questions that have exercised me since I was a child; the technical questions have exercised me since I was a teenager, although I didn't have the means to address them properly. The philosophical questions are all philosophy of mind questions. I'm particularly interested in consciousness. I have to preface what I'm about to say with something very important, which is that I don't think anybody is about to create human-level artificial intelligence, or anything to which the question of consciousness is yet applicable. We're a long way from being able to do that. Of course, philosophy speculates about what's possible in principle, and I'm deeply interested in the question of whether we could ever build AI that had consciousness. If we could, what would it be like?

Aaron Sloman, the British philosopher, has this great phrase: the space of possible minds. The idea is that the space of possible minds encompasses not only the biological minds that have arisen on this earth, but also extraterrestrial intelligence, and whatever forms of biological or evolved intelligence are possible but have never occurred, and artificial intelligence in the whole range of possible ways we might build AI.

I love this idea of the space of possible minds, trying to understand the structure of the space of possible minds in some kind of principled way. How is consciousness distributed through this space of possible minds? Is something that has a sufficiently high level of intelligence necessarily conscious? Is consciousness a prerequisite for human-level intelligence or general intelligence? I tend to think the answer to that is no, but it needs to be fleshed out a little bit. We need to break down the concept of consciousness into different aspects, all of which tend to occur together in humans, but can occur independently, or some subset of these can occur on its own in an artificial intelligence. Maybe we can build an AI that clearly has an awareness and understanding of the world, where we very much want to say, "It's conscious of its surroundings, but it doesn't experience any emotion and is not capable of suffering." We can imagine building something that has some aspects of consciousness and lacks others.

I'm very interested in some of these deep questions like Chalmers' so-called hard problem of consciousness: "How is it possible for something that has experience to arise out of pure matter at all?" I'm interested in classical questions, essentially the mind-body problem, in a slightly contemporary guise. How does that question arise and play out in the context of artificial intelligence? I have a very Wittgensteinian outlook on this. Intuitively, we feel that there always has to be an answer to this question of this artifact, this creature, this thing: Is it like something to be that thing or not? Is it conscious or not? Intuitively, we feel that there must be a yes or no answer to that; it's not just something that we decide. Compare that with looking at a painting and asking, "Is it beautiful or not?" Somebody can say, "Beauty is in the eye of the beholder. It's all relative, depending on your culture and where you're coming from." One person can decide one way and another person can decide another way. Consciousness, though, the question of whether it's like something to be a thing, doesn't seem to be in that kind of space. It seems to be the kind of thing about which there must be a fact of the matter. Either it is capable of suffering or it's not.

A Wittgensteinian perspective challenges that and tries to make us rethink the very idea of consciousness, rethink it in terms of the way we use consciousness language, and undermines the idea that there has to be a fact of the matter.

Rod Brooks launched his critique of the then-prevailing methodology in artificial intelligence back in the late '80s and early '90s, and many of the points he made then are now very much mainstream and orthodox. For example, the idea that we need to deal with whole agents interacting with complex environments if we ever want to build sophisticated AI has become more or less an orthodoxy.

There were other aspects of his critique that are much less accepted. The more radical end of it was that he rejected the idea of representations altogether. The two schools of AI at the time were the symbolic approach, which obviously had representation at the heart of it, and the neural network approach, which also treated representation as an important part of its thinking, though representation of a very different kind: distributed representation. Both of those schools of thinking thought that representation was important.

Today, with neural networks being so important, people in the neural network community talk about representations all the time and don't feel embarrassed by that. That aspect of Brooks' critique is no longer very potent today. He himself would have backtracked from that a bit.

I had the privilege of having breakfast with Daniel Kahneman at a conference. We were the first two to turn up to breakfast, so we sat down together and I had a chance to chat with him about his work, where he talks about system one and system two. He doesn't talk about consciousness there; he perhaps still prefers to avoid this philosophically difficult concept, and there's a lot of wisdom in avoiding that term and the concept. I had this conversation with him comparing his ideas with Bernie Baars' ideas. Through my longstanding interest in consciousness, I had become particularly interested in Bernie Baars' global workspace theory, which is one of the leading contenders for the basis of a scientific theory of consciousness. In global workspace theory, you have this clear division between conscious processing and unconscious processing. The idea is that conscious processing is mediated by this global workspace, which activates the whole brain, if we're talking in a neural context.

The idea is that when you consciously perceive something, the influence of that stimulus pervades the whole brain via the global workspace, via some kind of broadcast mechanism. That's the essence of it. Whereas, if there's unconscious processing of a stimulus by the brain, then it's just localized processing. Then you can draw up a little table of the properties of conscious processing versus unconscious processing, things like conscious processing is slow and flexible, whereas unconscious processing is fast but stereotypical, and so on.
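
As a cartoon of the architecture being described here, an illustration of the broadcast idea rather than Baars' own formulation, and with class and function names invented for the purpose: lots of specialist processes do fast, local, private work, and whatever wins the competition for the workspace gets broadcast back to all of them.

```python
# A cartoon of a global-workspace-style architecture: specialist processes
# compute locally and privately; the winning content is broadcast to all of
# them. (An illustration of the broadcast idea, not Baars' own formulation.)

class Specialist:
    def __init__(self, name):
        self.name = name
        self.received = []          # what reaches this process via broadcast

    def propose(self, stimulus):
        # Local, "unconscious" processing: fast, stereotyped, private.
        salience = len(stimulus) if stimulus.startswith(self.name) else 0
        return salience, f"{self.name} reports {stimulus!r}"

    def receive(self, message):
        self.received.append(message)

def workspace_cycle(specialists, stimulus):
    # Competition for access to the workspace...
    _, winning_content = max(s.propose(stimulus) for s in specialists)
    # ...then broadcast: the winning content pervades the whole system.
    for s in specialists:
        s.receive(winning_content)
    return winning_content

modules = [Specialist("vision"), Specialist("hearing"), Specialist("touch")]
print(workspace_cycle(modules, "vision: red square"))
print(modules[1].received)          # even the "hearing" module gets the broadcast
```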

This little collection of properties matches very closely with system one and system two in Daniel Kahneman's work, which I mentioned to him. He just assented immediately, saying, "Yes, they're very close ideas." He obviously knew about Bernie Baars' work and didn't have a problem with matching those two theories. I hope that I'm not misrepresenting him there.

I'm a big fan of Dan Dennett's thinking. Probably of all the philosophers around today who are working on consciousness, his views are closest to mine, I'd say. He's also very influenced by Wittgenstein. He's not a fan of the hard problem-easy problem distinction, and neither am I. It very much comes back to Wittgenstein and the idea that this hard problem-easy problem distinction is an artifact of our language; it's a manufactured philosophical problem that isn't real if you think about the things that are in front of us, which are human beings behaving in complex ways. It's only when we sit down and start to use language in a peculiar way that we start to think that there's some kind of issue here, and that there's some kind of metaphysical division between inner and outer, between subjective and objective, between hard problem and easy problem. It's very difficult territory, so a few trite sentences like that don't help very much.

When we're building artificial intelligence, it's natural to ask what intelligence is. If we want to think about human-level artificial intelligence, what do we mean by intelligence? There's this phrase, artificial general intelligence, or AGI, which is current now and wasn't current twenty years ago. That little word, "general," is the critical thing. Generality is key to real intelligence. I'm happy to venture a definition of intelligence: Intelligence is the ability to solve problems and attain goals in a wide variety of environments. The key there is the variety of environments. If you have an agent that is able to deal with a completely novel, unseen type of environment and adapt itself to be able to deal with that, then that is a sign of intelligence. You can almost quantify this mathematically. Indeed, this is very much Shane Legg's definition of intelligence.
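
Roughly stated, Legg and Hutter's formalization scores an agent by its expected performance summed over all computable environments, weighting each environment by its simplicity, so an agent only scores highly if it can do well across a huge variety of them. A sketch of that measure:

```latex
% A rough statement of Legg and Hutter's universal intelligence measure:
% the expected value V achieved by agent \pi in environment \mu, summed over
% all computable environments E and weighted by simplicity, 2^{-K(\mu)},
% where K is Kolmogorov complexity.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```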

What I mean by an agent is a computer program or a robot. It could be an embodied computer program, or it might not be embodied, but it has to interact with an environment, so there has to be sensory input, and there has to be action or motor output. The agent is the bit in the middle that's deciding how to act on the basis of what it perceives.
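
A minimal sketch of that loop, with class and method names invented just to pin the terms down: the environment supplies sensory input, the agent is the bit in the middle mapping percepts to actions, and each action feeds back into the environment.

```python
# A minimal sketch of the agent-environment loop: sensory input in, action
# out, with the agent as the bit in the middle. Names invented for illustration.

import random

class Environment:
    def reset(self):
        return 0.0                          # initial sensory input (percept)

    def step(self, action):
        percept = random.random()           # next sensory input
        reward = 1.0 if action == "act" else 0.0
        return percept, reward

class Agent:
    def policy(self, percept):
        # Decide how to act on the basis of what is perceived.
        return "act" if percept > 0.5 else "wait"

env, agent = Environment(), Agent()
percept = env.reset()
for _ in range(5):
    action = agent.policy(percept)
    percept, reward = env.step(action)
    print(action, round(percept, 2), reward)
```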

~ ~ ~ ~

As a postdoc I went back to Imperial College and worked with Bob Kowalski. I was very interested in how you use logic to formalize aspects of common sense: how to represent actions, events, space, and things like that using logical formalisms. I worked in that area for many years as a postdoc before I got a faculty position at Imperial College again. I was still working in that tradition for a little while, but then gradually became disillusioned with where the logic-based approach was going. That's when I started to move sideways a little bit.

At that point, I was in the Department of Electrical Engineering at Imperial College. One of the prominent people there was Igor Aleksander. Igor Aleksander's area was much more in neural networks. He was also at a sufficiently senior stage in his career that he could start thinking about things that at the time weren't really allowable as academic topics, like consciousness. I spent a lot of time talking to Igor about neural networks, about how the brain works, and about consciousness.

I started to shift my interests at that point and threw away the reputation I'd built up in this knowledge-based representation stuff. If I'd had any sense, I'd have just plowed that furrow for the rest of my career. I got interested in the brain and then started building neural network models that embodied various ideas. I started building spiking network models to illustrate certain dynamical phenomena which I thought were important for intelligence.

It's important not to forget the history here. There are all kinds of buried treasures in history. Not knowing the history is bad. For instance, I recently got interested in some of Roger Schank's ideas, this idea of scripts, for example. He used to talk about the restaurant script, which was his classic example. Minsky had frames. Some of those concepts themselves got subsumed by the logic-based approach to knowledge representation, in a way, because people started to say, "Well, there's all this mess of different pseudo-formalisms. We need to have a more theoretical approach. If you look into it, semantic nets, and frames, and scripts can all be subsumed by logic, so we ought to start with logic."
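
To make that subsumption point concrete, here is a toy fragment, with slot names I have invented rather than Schank's own, of the sort of thing a restaurant script contains: first as a frame-like structure, then restated as flat logic-style facts of the kind the logic-based school preferred.

```python
# A toy fragment of a restaurant-style script, first as a frame-like structure
# and then restated as flat logical facts. (Invented slots, for illustration.)

restaurant_script = {
    "roles":  ["customer", "waiter", "cook"],
    "scenes": ["enter", "order", "eat", "pay", "leave"],
}

# The same content as a set of logic-style assertions.
facts = set()
for role in restaurant_script["roles"]:
    facts.add(("role_in", role, "restaurant_script"))
for position, scene in enumerate(restaurant_script["scenes"]):
    facts.add(("scene", scene, position))

# A rule over those facts: one scene precedes another if it comes earlier.
def precedes(a, b):
    order = {name: pos for (kind, name, pos) in facts if kind == "scene"}
    return order[a] < order[b]

print(precedes("order", "eat"), precedes("pay", "enter"))   # True False
```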

There are all kinds of buried treasures in the thinking that those people did, and the thinking that John McCarthy did, and Pat Hayes. Pat Hayes wrote a famous paper with John McCarthy in 1969, "Some Philosophical Problems from the Standpoint of Artificial Intelligence." Pat Hayes' "Naïve Physics Manifesto," which he wrote in the early '80s, is just one of the classic papers of artificial intelligence. He set out a research agenda, which was all about trying to formalize the fundamental concepts of common sense, things like liquids, and pouring, and containers, and space and time. That was just such a classic paper. Maybe the logic-based approach to that wasn't the right one, but many of his insights can be translated into terms that are relevant today. It's important to know your history of the field.

Those early cyberneticists were interesting: Norbert Wiener, and his British counterparts, people like Ross Ashby and Gordon Pask, whom I heard give an utterly baffling lecture at Imperial College when I was an undergraduate. There were all these British cyberneticists. The Ratio Club was this intellectual club that met in London in the '50s, where Alan Turing, Jack Good, and all these people from Bletchley who were thinking about artificial intelligence were in the same room as people like Ross Ashby. It was all much more mixed together in those days. Cybernetics disappeared and the whole digital approach to computing came to the fore a little bit after that. In one breath, von Neumann would have been talking about the nascence of digital-type architectures, and in another breath, he would have been talking about feedback loops. It was all part and parcel of a large intellectual pool of ideas at the time.

Of course, Rod Brooks brought some of those ideas back into fashion a bit. He was very interested in feedback loops, and in general, the feedback loop with the environment, where you have low-level sensors, some kind of simple processing, and then you have your motor output, and you can devise very simple feedback loops that can give rise to very complex emergent behavior.
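
In that spirit, here is a Braitenberg-style toy of my own devising, not anything of Brooks' in particular: two light sensors coupled almost directly to two wheels are enough to steer a simulated vehicle toward a light, with no representation or planning anywhere in the loop.

```python
# A Braitenberg-style toy in the spirit of those sensor-to-motor feedback
# loops (my own illustration, not any system of Brooks'): two light sensors
# coupled to two wheels steer a simulated vehicle toward a light.

import math

LIGHT = (5.0, 5.0)

def intensity(px, py):
    # Light intensity at a point, falling off with squared distance.
    return 1.0 / (1.0 + (px - LIGHT[0]) ** 2 + (py - LIGHT[1]) ** 2)

def distance_to_light(x, y):
    return math.hypot(x - LIGHT[0], y - LIGHT[1])

x, y, heading = 0.0, 0.0, 0.0
print("start distance:", round(distance_to_light(x, y), 2))

for _ in range(50):
    # Two sensors, offset to the left and right of the current heading.
    left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    # Crossed wiring: the brighter side drives the opposite wheel harder,
    # turning the vehicle toward the light (turn rate clamped for stability).
    heading += max(-0.3, min(0.3, 50.0 * (left - right)))
    x += 0.2 * math.cos(heading)
    y += 0.2 * math.sin(heading)

print("end distance:", round(distance_to_light(x, y), 2))
```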

William Grey Walter, of course, was one of the famous British cyberneticists who devised these early robots, the little tortoises, one of which is still exhibited in the Science Museum. Rod Brooks rehabilitated many of those ideas in the early '90s, and they were then taken on board a little bit by the artificial life movement, which grew up thinking about those kinds of ideas as well.

I first met John McCarthy in 1988 or roundabout then. We were going to all the same little workshops about logic-based AI and common-sense reasoning. I started meeting him at all these different places around the world, as you do. I got to know him quite well. In fact, he invited me to Stanford, at his expense, or the expense of one of his grants. I had a couple of visits to Stanford, which lasted a few weeks, where I was John's guest. I knew him pretty well. He even came to my house in the UK. Certainly, he has had a big influence on me, as a role model of somebody who thinks outside the box.

What people don't appreciate is that the kinds of ideas that John had were crazy at the time. Later in his career, it was easy to see John as representing the conservative end of AI, but you have to remember that when he first put these ideas out there, he would have seemed like the crazy person. I've always felt that by abandoning John's kind of AI, which I did, I was being more John-like than the people who rigorously followed his style of AI because, in a sense, the spirit of John McCarthy is to think outside the box.

If you knew him well and you spoke to him a lot, he would always come up with a crazy example that would just shoot down your whole theory, and that you would never have thought of, some crazy example of human behavior, or cognition, or thinking, or something. It got you into this habit of thinking of the out-of-the-box examples and counterexamples and ways of thinking. John was a great inspiration for me.