The Mind Bleeds Into The World
I’ve been thinking a lot about the impact of technology on philosophy, and how technology can illuminate or sometimes even transform philosophical questions. These are exciting times in technology, with massive advances over the last few years in artificial intelligence and virtual reality that have put both into wider use than ever before. Both of these technologies raise very deep philosophical issues. What’s artificial intelligence? That’s an artificial mind. What’s virtual reality? That’s an artificial world. This is great for a philosopher, because philosophy, as I see it, is all about thinking about the nature of the mind, the nature of the world, and the connection between them. Thinking about artificial minds and artificial worlds can shed a lot of light on the mind and the world more generally.
I’ve thought a lot about the mind and consciousness, and when you’re doing that it becomes very natural to think about artificial intelligence. Could a computer have a mind? Could it be conscious? What kind of mind would it have? I’ve also thought a lot about technology augmenting the mind—like smartphones as extensions of the mind. Thinking about those questions about technology has helped philosophers get clearer on traditional questions about just what it is to have a mind.
Lately, I’ve been getting especially interested in questions about the world and about artificial worlds. It turns out that thinking about artificial worlds can help us think about many of the central questions in philosophy: the nature of reality, our knowledge of the external world, the existence of God, the mind-body problem, even the meaningfulness of life.
Virtual reality is basically a technology for creating and interacting with artificial worlds. As I understand it, it’s a computer-generated environment that you experience through a headset, giving you an immersive interaction with an artificial world. A lot of virtual reality devices have been released lately. I’ve set up a room in my house with an Oculus Rift and an HTC Vive, as well as Google Cardboard and Samsung Gear VR, and I’ve been having fun playing around with them.
The expression “virtual reality” was made widespread by Jaron Lanier and others back in the 1980s as this technology started to be developed. The term was first put forward in the 1930s by the French theater theorist Antonin Artaud, who talked about the theater as a kind of virtual reality, réalité virtuelle. My apologies for my French accent. He saw the theater as a way of generating an alternative kind of reality, with human actors on a stage giving an immersive experience to the people watching.
These days we use the term “virtual reality” more specifically for an environment generated by a computer. A virtual reality environment is computer generated and immersive: it’s as if we’re in the middle of a three-dimensional world. And it’s interactive: we can interact with that environment; it makes a difference to us and we make a difference to it.
Full-scale virtual reality is an immersive, interactive, computer-generated environment. Ordinary physical reality meets two of those three conditions: it’s immersive—it feels like I’m in the middle of it—and it’s interactive. I’m interacting with it, but it’s presumably not computer generated, so it only meets two out of three. An ordinary movie meets none of the conditions, a computer-generated movie meets one of them, an interactive videogame on a desktop computer meets two of them, and virtual reality meets all three. Virtual reality takes the immersiveness and interactiveness of everyday reality and brings in the role of the computer in generating this reality artificially. So think of it, if you like, as an artificial world.
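To make the bookkeeping explicit, here is a minimal sketch (my tabulation, not anything from the original discussion) of which of the three conditions each medium meets; the names and the truth values simply restate the comparison above.

```python
# A toy tabulation of the three-condition taxonomy described above:
# full-scale virtual reality is immersive, interactive, and computer generated.
MEDIA = {
    # name: (immersive, interactive, computer_generated)
    "ordinary movie":           (False, False, False),
    "computer-generated movie": (False, False, True),
    "desktop videogame":        (False, True,  True),
    "physical reality":         (True,  True,  False),
    "virtual reality":          (True,  True,  True),
}

for name, conditions in MEDIA.items():
    label = "full-scale VR" if all(conditions) else f"{sum(conditions)}/3 conditions"
    print(f"{name:26} -> {label}")
```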
All this raises any number of questions, prime among them: is the virtual reality you’re interacting with a genuine reality, or is it some kind of illusion, some kind of fiction?
There’s a long history of people saying that virtual reality is a kind of second-class reality. William Gibson, in Neuromancer, said, “Cyberspace,” meaning virtual reality, “is a consensual hallucination.” It's a hallucinated reality, something that doesn’t even exist. You watch a movie like The Matrix and it’s portrayed as if it’s all an illusion, as if it's not real at all.
I think this is the wrong way to think about virtual reality. A virtual world is just as real as a physical world. When one inhabits a virtual reality, when one puts on an Oculus Rift or even perhaps when one is playing a non-immersive video game like World of Warcraft, one is interacting with a genuine digital reality. When one sees one’s avatar in virtual reality, that’s a real body; it’s just a digital body. When you’re interacting with other people in an environment like Second Life, for example, you and the other people in that world are inhabiting a genuine digital world, and there’s nothing fictional about it. It exists perfectly objectively in the circuitry of a computer system. It’s a digital reality, to be sure. It may be a different kind of reality from the physical reality that we’re used to, but it’s perfectly real. There needn’t be any illusion involved when we’re using virtual reality. We perceive virtual objects, and those objects really exist.
This is interesting for a philosopher because it goes back to a thought experiment put forward by René Descartes back in the 17th century, in his Meditations on First Philosophy. This is the thought experiment of the evil demon. How do you know that an evil demon isn’t fooling you into thinking that there’s a world out there of objects, and things, and trees, and people when none of it exists? And we’re supposed to think that everything we’re seeing could be an illusion.
The modern version of that is: how do you know you’re not in a simulation right now? How do you know you’re not in a simulated reality where none of this exists? How do you know you’re not in a virtual reality? Maybe I’ve been living in a virtual reality my whole life, in which case we’re supposed to conclude that all this is illusory. None of it exists at all.
I’m inclined to think this is, again, the wrong way to think about virtual reality or simulations. Simulated worlds are perfectly real worlds, they’re just digital worlds—worlds fundamentally grounded in information. If it turns out, for example, we’re living in a digital virtual reality, we shouldn’t say that none of the stuff around us exists; we should say that we’re living in a world where everything is grounded in information.
It’s a little bit like the physicist John Wheeler’s famous “It from bit” hypothesis: all of reality is grounded in information. Wheeler put that forward as a hypothesis about physics, the idea that physics might be grounded in the interplay of information. But this was meant to be a hypothesis about reality. It wasn’t one that made reality some kind of illusion. Physics is still real; it’s grounded in information. In my view, that’s the right way to think about virtual reality. If we are in a virtual reality, we’re living in an “It from bit” universe where all this is grounded in information. Of course that information might itself be grounded in processes in a computer in the next universe up. We’d have many levels of reality, but all of it would be real.
I think one can use these ideas to build at least a partial answer to Descartes’ challenge about our knowledge of the external world. Even if we are in a virtual reality, the objects around us are still real. There are still tables and chairs and so on. They’re just computational entities at a lower level, just as they’re quantum mechanical at another level. Even if our impressions of the world are produced by an evil demon, the evil demon will have a complicated model of the world in his mind, processing a lot of information. In that case, the tables and chairs in our environment are still real; it’s just that at another level they’re made of information in the evil demon’s mind.
More fundamentally, patterns in our experiences give us good reason to believe that a certain sort of abstract information structure is present in the external world. Experience doesn’t tell us whether that structure is embodied in a computer, in an evil demon’s mind, or in a regular nonvirtual physical world. But that doesn’t matter. If I’m right, that structure alone is enough to ensure that the entities we perceive in the external world exist. Whether the world is virtual or nonvirtual, there are really tables and chairs, trees and mountains, particles and people. That’s enough to at least partially answer Descartes.
You might think that a virtual reality would be too insubstantial to be a genuine reality. If we’re in a virtual reality, objects are not solid the way things seem to be. But we know from physics that objects are mostly empty space. What makes some of them count as solid is the way they interact with each other. And that pattern of interaction can be present in a virtual reality.
You might also worry about space. Some people think that if we're in a virtual reality, objects aren't spread out in space the way they seem to be. But relativity, quantum mechanics, and other more recent theories increasingly suggest that nothing fully satisfies our intuitive conception of space as a sort of primitive container of matter. Like solidity, space is grounded in the way things interact with each other.
A slogan I like here is “Distance is what there’s no action at.” Or better, “Distance is what there’s less action at.” What makes certain objects count as being spatially close is that they have certain possibilities for interacting. That way of thinking about space helps us to see how space can be present in a quantum-mechanical world or a string-theoretic world or an it-from-bit world. It also helps us to see how space can be present in a virtual world. Virtual objects have certain possibilities for motion and interaction, and this grounds a sort of space that they inhabit. It’s not exactly the same as nonvirtual space, but it’s still a genuine sort of space.
I think of this view as a sort of functionalism or structuralism about space, solidity, and other aspects of the physical world. What matters for a physical world is the pattern of interactions. This view helps to make sense of how quantum mechanics and other physical theories support the physical world we experience. It also helps to make sense of virtual reality.
Now some people, of course, have speculated that we ourselves may be living in a virtual reality, that our own environment may be virtual. This is the hypothesis that we are living in a computer simulation and have been since the beginning. That’s the hypothesis made famous by movies like The Matrix. More recently, the philosopher Nick Bostrom has given some reasons that maybe we ought to take this hypothesis seriously.
Simulation technology is getting better and better. We’re presumably going to generate more and more complex simulated worlds as our technology develops, and you might well expect that any intelligent civilization will end up producing multiple simulated universes. There are probably going to be many more simulated universes than unsimulated universes, many more simulated beings than unsimulated beings, in which case, Bostrom asks, what are the odds that we are one of the lucky ones, the few unsimulated beings at level zero? If unsimulated beings are a minority, it’s more likely that we’re one of the simulated beings way down there. Now there are various ways that reasoning can go wrong, but at the very least it gives us some reason to take seriously the hypothesis that we could be inhabiting a virtual reality and to think about what follows.
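To see why the counting matters, here is a toy calculation of my own, with entirely made-up numbers, of the fraction of observers who would be unsimulated under the kind of assumptions the argument trades on: each unsimulated civilization eventually runs many simulations, each populated about as densely as the unsimulated world.

```python
# Toy version of the counting argument; all numbers are illustrative assumptions.
unsimulated_civilizations = 1        # "level zero" worlds (assumed)
simulations_per_civilization = 1000  # assumed, purely for illustration
observers_per_world = 10**10         # assumed roughly equal across worlds

simulated = unsimulated_civilizations * simulations_per_civilization * observers_per_world
unsimulated = unsimulated_civilizations * observers_per_world

share = unsimulated / (simulated + unsimulated)
print(f"Chance of being at level zero: {share:.4%}")  # about 0.1% with these numbers
```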
You can see the simulation hypothesis, the hypothesis that we’re in a simulation, as a version of the multiverse idea, the idea that there are multiple universes coexisting. Our universe would be a local universe that is a simulation, and presumably whoever created this simulation created other simulations, and maybe they created a million simulations overnight with slightly different parameters to watch them run and see what happens. So there’s a multiverse—a million simulated universes right there, all contained within another universe at the next level up. And who’s to say, of course, that that universe is not simulated, too? This gives you a branching multiverse structure of universes within universes within universes. Maybe there’s a top-level universe that is not simulated, but every level beneath it will be simulations within simulations within simulations.
Maybe there’s a theology here. Whoever created our universe—the simulator—in a way, that’s our god, our creator. This being might be all-powerful, able to control our universe, all-knowing. Some people have even proposed that we should be forming religions around the idea that our simulator is our god.
I’m not a religious person myself, but there’s something in this idea. This being would count as a creator; it would be all-powerful, all-knowing. But at the same time this would all be naturally explicable in terms of, say, the laws of physics of the next universe up. There’d be nothing supernatural about it. To be a religion, there’s got to be something like worship or some deep spirituality involved. We might have respect, maybe even a kind of awe, for a being who creates our universe by a simulation, in virtue of knowing everything and being all-powerful. But I’m not going to worship that being. Who knows? Maybe it’s just a teenage hacker in the next universe up who’s created this universe for fun. Great, I’m glad that I exist, so thank you, but I refuse to worship you.
I’m an atheist for a number of reasons, but the most fundamental reason I have for being an atheist is I cannot imagine a being that would be worthy of worship. It just seems an inappropriate attitude to have towards any being. So even if there is a locally all-powerful creator, wonderful, but I’m not going to erect a religion around you. I’d be inclined to say, at best, these beings—our simulators—might be gods with a lower case g, not Gods with a capital G.
Of course, the question arises at the next level up: Who created them? Did someone create the whole system of universes? And I see no particular reason to think so. Maybe if you wanted the genuine traditional God, you’d need someone who created the whole multiverse rather than just a local god who created this local universe. As far as I can tell, there’s no particular reason to think there has to be some god responsible for creating the whole multiverse. The chances are the god that you’re going to get from this simulation theology is an extremely watered-down god that would probably seem somewhat blasphemous from the perspective of traditional religion.
People will make religions out of anything. There is a strong religious imperative that you find in many people, which I don’t share myself. It may well be that some people are going to want to make religion out of the idea of a simulation and that within decades we’re going to have simulation theology and simulation religion all around us, people praying to their great god, The Simulator, The Creator. That would be a terrible mistake. I don’t think anyone ought to be making a religion out of this. I don’t think anyone ought to be forming their moral code based on the idea that we’re in a simulation.
At the same time, the simulation idea is extremely interesting because it offers even a watered-down version of the traditional theistic hypothesis of God in a framework that seems compatible with science. I don’t want to say this is a scientific idea in the sense that it’s one that science has demonstrated or even might demonstrate. It could well be that we’re in a simulation and we’ll never get proof of this; or, if we’re not, maybe we can’t get proof of that either. So I don’t want to say we’re exactly in the territory of a scientific hypothesis. This is more of a philosophical hypothesis.
Interestingly, it is a version of god that is consistent with science. You might have thought that any other version of god requires positing something that goes beyond our general scientific picture of the world: spirits, or forces, or all goodness and love. Well, the simulation hypothesis just doesn’t do that. All we need is physics in the next universe up. All we need is technology. We already understand the technology of creating artificial worlds, and there are probably going to be artificial worlds; then we just change perspective and say maybe we’re in one of them, and this could be our situation: we were created in a scientifically explicable way.
Again, don’t make a religion of that, but you get some components of the traditional religious view: a creator with huge power and huge knowledge of our world. Some people might speculate on the possibilities of an afterlife: If we are ultimately code, then that code could be uploaded into a different environment, maybe into the environment of the next universe up. Think of it as a science-compatible version of ordinary religion.
Some people do see virtual reality as a way of living forever, at least in a virtual reality. The wonderful TV show Black Mirror had an episode recently where the main characters get to upload themselves into a virtual reality environment and live forever in a kind of a heaven. It was all very idyllic. They live on a beach with this gauzy music and drive around in their sports car. I’ve got a feeling that this heaven is going to get boring after a year or two and they’re going to wish for a somewhat more substantial reality to live in.
It may well be that in the future we develop virtual realities for us to hang out in that are as substantive and as interesting as genuine reality, and many people may choose to spend their lives inhabiting those virtual realities.
What is missing, if anything, in a simulated world? You could recreate all the causal structure of a world in a simulated world and connect it up to a conscious human being who interacts with it. Is there something fundamental that’s missing? Some people think life in that kind of virtual simulated reality would be meaningless, would be valueless.
In fact, the philosopher Robert Nozick had a thought experiment about this: the case of the experience machine, where you have the option of entering a machine for the rest of your life that would simulate your world and produce all kinds of wonderful experiences in which you were successful, had wonderful friends, and became the world champion of whatever, but it was all preprogrammed and simulated. Nozick said, “You would not choose to enter that world. I would not choose to enter that world.” And many people have taken that to be a reason to think that virtual reality is somehow second class, at least in the sense of being valueless, meaningless, not a way in which you could live a fulfilling life.
I think Nozick was wrong about that, or at least his reasoning does not support the conclusion that virtual reality is meaningless. One thing you do get out of Nozick’s thought experiment is that living in a preprogrammed reality, in which everything that happens to you is foreordained by certain creators, is, if not meaningless, at least lacking much of what we find meaningful in our lives: the challenge of living our own lives, overcoming obstacles, and creating our own destiny. A preprogrammed environment won’t give us that.
But importantly, there’s no reason why a virtual reality has to be preprogrammed. In fact, part of the very definition of virtual reality that I gave before is it’s interactive. You, the person at the center inhabiting a virtual reality, get control. You still get to think. You get to control your body. You get to interact with others. You have as much free will in a virtual reality, in principle at least, as you do in a non-virtual reality.
So as far as I can tell, preprogramming doesn’t count against virtual reality. Nozick also worried that virtual reality is illusory, and that we don’t really have a body and so on. I’ve already suggested that that’s wrong. In virtual reality, we have a real virtual body, which isn’t an illusion. In principle a virtual body could play the role of a nonvirtual body. Nozick’s third worry was that virtual reality is artificial. Maybe there’s something to that for people who value the natural over the artificial. But this worry applies equally to cities, and plenty of people live meaningful lives in cities.
I, myself, don’t see why you shouldn’t be able to live a life in a virtual environment that is as meaningful, as fulfilling, at least in principle, as life in a non-virtual environment. Of course, for now the virtual environments are very stripped down and sparse, and the entities and interactions within them are not as rich as in ordinary physical reality, but give it a few years.
Within a decade or two I’m sure we’re going to have virtual reality that begins to be, visually and auditorily at least, indistinguishable from worlds like ours. Some things are going to be harder: the role of the body, hunger, thirst, sex, birth, death. Some things are going to take a while to build into virtual reality, but I suspect within a century there will be Matrix-style virtual reality, which is more or less indistinguishable from our kind of reality.
Would there be something fundamentally deficient about that Matrix-style virtual reality, one indistinguishable from our reality and perhaps better in some ways (it may contain features that we can’t even imagine now), such that if you chose to spend your life in it you’d be missing something vital that gives life meaning? Some people take that attitude to virtual reality, but I’m inclined to think that no, nothing would be missing in principle. One’s life there could be just as meaningful. And it may well turn out that if we treat our world badly, then living in a virtual environment is going to be an attractive option; for some people a virtual world may be much more habitable. I think one finds one’s own meaning in life, and in principle one can find just as much meaning in a virtual world as in a nonvirtual world.
I’m sure we’re going to face a debate, maybe even a political debate, about the division between the virtual and the non-virtual. There’s probably a lot to be said on either side, but I don’t see any fundamental philosophical obstacle to living a full meaningful life in virtual reality. Virtual reality need not be a second-class reality.
~ ~ ~
I’ve always been fascinated by technology. I was twelve in 1978 when the first microcomputers came out. I remember going to the store and playing with a TRS-80. Before long, when I was fourteen, I got my first Apple II computer; my parents gave it to me as a Christmas present. I loved playing with these computers. The very first virtual reality I inhabited, which was not an immersive virtual reality, was an old text adventure game that took place in Colossal Cave. You interacted with dwarves and treasures, throwing axes at the dwarves, all through a text interface, but nonetheless you got the sense of a virtual world. This system of caves seemed very real to me. And it was implemented on a computer. Later on, virtual reality got much, much better: graphics came along and eventually went 3D, but some of the basic ideas were present back then. What is the nature of this reality I was interacting with in Colossal Cave?
I studied math, and then I went on to do my PhD in Doug Hofstadter’s artificial intelligence lab in Indiana, where I was surrounded by people working on artificial minds, or at least artificial cognition: thinking about how you can get the processes of the mind running on computers. I did my own PhD in philosophy, concentrating on consciousness, but at the same time I was doing a lot with computers.
I’ve basically always been a computer junkie. It seems to me that computers are especially interesting to a philosopher and to anyone else because they give ways of artificially recreating almost anything you like. They recreate the underlying structure. Anything you like can be at least simulated and partially recreated on a computer.
I thought for a long time about that in the case of the brain. What happens, for example, if we recreate the structure of the brain on a computer, with interacting silicon circuits where we had interacting neurons? Would it be missing anything that’s present in the mind? I’d like to think that a computer could be conscious. I don’t think consciousness is reducible to a pattern of interactions in the brain, but I do think that if one reproduces that pattern of interactions in fine enough detail, one will reproduce consciousness. And those patterns of interaction can in principle be replicated in a computer.
On this way of seeing things, computers are remarkable devices that can recreate almost any pattern of interactions. In principle there can be circuits on a computer that interact in the same patterns as neurons do, or as physical particles do. Given that this pattern of interaction is what matters for a mind, or a world, a computer can recreate what matters for a mind or a world.
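As a very small illustration of that substrate-neutrality point (my sketch, not an argument from the text), the same abstract update rule can be read as describing idealized neurons or as describing logic circuitry; nothing in the pattern of interactions depends on what the units are made of.

```python
# Three binary threshold units wired so that unit 2 fires only when
# units 0 and 1 both fire. Read the units as idealized neurons or as
# silicon gates; the pattern of interaction is the same either way.
import itertools

weights = {(0, 2): 1.0, (1, 2): 1.0}  # connections into unit 2
threshold = 1.5

def update_unit_2(state):
    """Compute unit 2's next state from the current binary state."""
    total_input = sum(w * state[src] for (src, dst), w in weights.items() if dst == 2)
    return 1 if total_input >= threshold else 0

for a, b in itertools.product([0, 1], repeat=2):
    print((a, b), "->", update_unit_2((a, b, 0)))
```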
There’s a long tradition of thinking about technology in philosophy, and especially about information technology, ever since the computer age got going in the middle of the 20th century. Alan Turing himself published his original article on the Turing test in the philosophy journal Mind in 1950, and philosophers rapidly began paying attention.
Hilary Putnam, the great philosopher, started publishing articles in the 1960s on minds as machines and the computer model of the mind, which led to what came to be called functionalism, basically a computer-inspired philosophical theory of the mind. And it never looked back, at least for some decades. Dan Dennett did all kinds of extremely important work that brought computational thinking into thinking about the mind and about AI. Jerry Fodor, another philosopher of mind, was not quite so keen on AI, but he was nonetheless giving a computationally grounded theory of mind from the start. So these ideas have been very important in philosophy.
When I was a graduate student in the late ‘80s and early ‘90s, many people in the philosophy of mind were thinking hard about artificial intelligence. Then the big thing was neural network theories of the mind and where these were going. I worked a lot on those and even built a few neural network systems that I published papers on. Interestingly, the bottom dropped out for a while in the mid ‘90s when the neural network movement hit a brick wall. There was an “AI winter” and AI dropped off the wider intellectual scene for a while except in certain small parts.
For ten or fifteen years, philosophers weren’t paying that much attention. A lot of people weren’t paying that much attention to AI. Now suddenly, in the last two or three years, all that has really changed again, in part because of the resurgence of AI. Interestingly, the very same neural network technology that was big when I was a student, now rebranded as deep learning and powered by much greater computational power, bigger data sets, and bigger networks, has proved able to do things far beyond what it could do back in the ‘90s. That’s driving some of the interest.
Someone once said, “If a mind were so simple that we could understand it, we’d be too simple to understand the mind.” So maybe there is something essentially complex and inexplicable about the mind. Maybe part of the charm of deep learning is that it’s a machine-learning system that develops things we don’t fully understand and couldn’t have predicted in advance.
The technology of deep learning is not something new. It’s the same technology of neural networks and backpropagation that was big twenty-five years ago. It has crossed a threshold, though, into being useful in a way that it just wasn’t before. Image recognition, speech recognition, and so on have gotten to the point where they’re now useful in a widespread way. Companies like Google and Facebook that have all that data have made things possible that simply weren’t possible before.
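For the curious, here is a minimal sketch of the machinery being described: a tiny two-layer network trained by backpropagation on XOR. The architecture, learning rate, and number of steps are arbitrary choices of mine; the point is only that this is the same basic neural-network-plus-backpropagation recipe, now scaled up with vastly more data and compute.

```python
# Minimal neural network with backpropagation (illustrative hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should end up near [[0], [1], [1], [0]]
```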
I suspect there’s not going to be another AI winter. The technology is just too useful and now it’s going to have massive amounts of funding poured into it through the industry, whether it’s for autonomous vehicles or machine recognition. But it does leave the question open of whether there is going to be a fundamental intellectual breakthrough. We’ve had small breakthroughs in the last few years, but not yet a major breakthrough.
One thing I’m interested to see over the next decade is what these deep learning systems can do and what will turn out to be the fundamental obstacles. It wouldn’t surprise me if we come to the view that we still need three fundamental breakthroughs to get to human level AI and that ends up taking another fifty to 100 years.
There are also a lot of ethical questions. I always swore that I was never going to write about ethics. Even though ethics is part of philosophy and philosophers are supposed to be experts on ethics, I thought I didn’t have any special ethical insight. That’s not where my talent lies, so why should anyone listen to me? But lately I’ve been getting drawn a bit into ethical questions. I got an invitation from the Presidential Commission on Bioethics to talk about the Obama BRAIN Initiative, where they’ll potentially be recording brain patterns. That starts to bring in ethical questions about uploading your mind onto a computer, about privacy, and about identity.
Artificial intelligence, in particular, raises some pretty deep ethical questions because of the huge impact of AI on the world. It’s coming up, even in the near term, with debates about autonomous weapons, autonomous vehicles, and how we can control those so that they don’t kill too many people. More subtly, with things like machine classification technology, how to avoid machine learning systems basically giving you racist or sexist outcomes when they, for example, classify people of certain races or from certain zip codes as being ineligible to receive loans or as more likely to commit criminal offences.
We had a conference at NYU recently on the ethics of AI focusing partly on those issues, including the long-term issues. What happens when machines are as intelligent and as powerful as humans, and beyond? Everything that’s going to happen then is going to be, at least in part, a function of what those machines want, what they have as their goals, what they have as their values.
We need to think very seriously about the values that go into those machines and how machines will go about following them. This is a place where philosophers have had something to contribute over the years, about what the fundamental values are. There’s an interesting project of thinking about what kinds of values you can instill in a machine.
Who controls the values and the goals that go into the AIs? Is it Google and Facebook—the industry—that gets to program these things? Is it the government? Is it somehow a collective? Is it just whoever is lucky enough to invent the first greater-than-human-level AI? There’s a debate to have about that. Whose values? We don’t have a single set of collective human values. Whose society? What’s going to happen? It’s already happening: people thinking about the future of AI are having this debate. In a way it’s recapitulating the whole history of political philosophy, when we think about whose values are going to run our society and how we collectively determine what those values are.
~ ~ ~
I started out as a mathematician. I was always a math geek. Growing up I was doing math competitions, and math this, and math that, and I got through my undergraduate degree in Australia, and I went to Oxford. I was always going to study math, partly because it seemed to me that math was the most fundamental level of explaining things in reality. But then I came to think that philosophy was even more fundamental, so I switched fields.
We’re used to the idea that there’s a great chain of the sciences: chemistry explains biology and physics explains chemistry. It always seemed to me that mathematics was at the base even of physics and somehow was more fundamental, and that was why I was drawn to mathematics. I never knew about philosophy, but eventually it came to me that if you want to understand all of those things—mathematics, physics, chemistry, biology—you have to ask some philosophical questions about the nature of reality, about explanation, about how everything fits together.
In particular, I was drawn to the fundamental questions about reality. By the late 1980s, so much of the world seemed so well understood. Math—pretty well understood. Physics—not perfectly understood, but pretty well understood. What is it that we fundamentally do not understand? And the thing that stuck out to me like a sore thumb was consciousness. The human mind, in general, but particularly, consciousness. Subjective experience. We just did not have a clue as to how subjective experience fits into our standard scientific picture of reality. That’s something that many people will still concede today. So I said, “Okay, well, that’s what I want to think about. There’s a topic here that is absolutely fundamental, absolutely familiar, but still absolutely ill-understood. It’s a complete mystery. So how can we study consciousness?”
And to me, the best way into the subject was through philosophy. I could have gone into psychology or neuroscience and sat at the bench or done experiments, but it seemed to me those were always going to be fairly limited and piecemeal approaches, especially at the beginning as a young scientist before tenure.
To take the big picture on consciousness, the best way to do that was through philosophy where you can exploit the work that scientists are doing, but try to integrate it all together. People like Dan Dennett and many others have shown that that’s possible. I got drawn into thinking about consciousness while still thinking about computers and everything else. Eventually it seemed to me there was a huge problem that we didn’t understand, and I ended up writing a book all about the hard problem of consciousness. I spent a lot of my life working on the topic of the place of the mind and consciousness in the physical world.
But at the same time, this fascination with computers has never gone away. As a philosopher you get to think about the nature of the mind and the nature of the world, and my focus has lately been going more and more to the technology and to the world side of the equation. An intermediate point was when I started doing work with Andy Clark on the extended mind, which is the idea that the technology that we use somehow becomes part of our mind.
My smartphone, for example, has become part of my memory system. We used to use our biological brains to remember phone numbers. Who uses their brain to remember a phone number anymore? Your smartphone does it instead. Andy and I argued that our technology was literally becoming extensions of our minds. This is extending the mind out into the world. And this is something which digital technology does for us. Virtual reality technology is taking this just another step further with these artificial worlds.
Coming very soon is augmented reality technology, where you see not only the physical world but also virtual objects and entities that you perceive in the midst of it. We’ll put on augmented reality glasses and we’ll have augmented entities out there. My face recognition is not so great, but my augmented reality glasses will tell me, “Ah, that’s John Brockman.” A bit of AI inside my augmented reality glasses will recognize people for me.
My firm prediction is that smartphones are going to disappear the way that pagers did. Everything is going to be in our glasses. Who’s going to need a smartphone once you’re wearing glasses or contact lenses that just project screens into the world in front of you, far bigger, more readable, more engaging than anything you can get on a smartphone or even a desktop computer, and much more interactive? That’s in development. In two or three years that stuff is going to be out there more. In ten or fifteen years, my guess is that’s going to become our fundamental way of interacting with computers. There’s not going to be such a need for screens in everyday life.
I’ve been talking here as if AI is one topic and virtual reality is another topic, but the day is coming when they are integrated: when, for example, we use glasses to project an augmented reality, a computationally generated environment with all kinds of information, partly driven by AI. It will recognize people for us. It will give us recommended routes.
At that level, artificial intelligence will start to become an extension of my mind. I suspect before long we’re all going to become very reliant on it. I’m already very reliant on my smartphone and my computers. These things are going to become more and more ubiquitous parts of our lives. The mind starts bleeding into the world. So many parts of the world are becoming parts of our mind, and eventually we start moving towards this increasingly digital reality. And this raises the question I started with: How real is all of this?
It’s ultimately going to be an interaction of artificial or augmented minds with artificially augmented worlds. It’s this augmentation, this mixing of the natural, the physical, and the artificial both on the side of the mind and on the side of the world that’s in our future. AI augmenting the mind, virtual reality augmenting the world, and the two of them all in interaction. That’s my vision of the future.