Computation All the Way Down

Stephen Wolfram [6.19.20]

We're now in this situation where people just assume that science can compute everything, that if we have all the right input data and we have the right models, science will figure it out. If we learn that our universe is fundamentally computational, that throws us right into the idea that computation is a paradigm you have to care about. The big transition was from using equations to describe how everything works to using programs and computation to describe how things work. And that's a transition that has happened after 300 years of equations. The transition time to using programs has been remarkably quick, a decade or two. One area that was a holdout, despite the transition of many fields of science into the computational models direction, was fundamental physics.

If we can firmly establish this fundamental theory of physics, we know it's computation all the way down. Once we know it's computation all the way down, we're forced to think about it computationally. One of the consequences of thinking about things computationally is this phenomenon of computational irreducibility. You can't get around it. That means we have always had the point of view that science will eventually figure out everything, but computational irreducibility says that can't work. It says that even if we know the rules for the system, it may be the case that we can't work out what that system will do any more efficiently than basically just running the system and seeing what happens, just doing the experiment so to speak. We can't have a predictive theoretical science of what's going to happen.
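To make computational irreducibility concrete, here is a minimal sketch in Python (an illustration, not anything from the physics project itself): Wolfram's rule 30 cellular automaton, for which no known shortcut predicts a later row faster than simply computing every row in between.

```python
def rule30_step(cells):
    """One step of the rule 30 cellular automaton: each new cell is
    left XOR (center OR right), on a cyclic row of 0s and 1s."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single "on" cell and just run it: to know what row t
# looks like, you effectively have to compute rows 1..t in turn.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Running this prints the familiar nested-yet-irregular rule 30 triangle; the point is that the program *is* the fastest known description of its own behavior.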

STEPHEN WOLFRAM is a scientist, inventor, and the founder and CEO of Wolfram Research. He is the creator of the symbolic computation program Mathematica and its programming language, Wolfram Language, as well as the knowledge engine Wolfram|Alpha. His most recent endeavor is The Wolfram Physics Project. He is also the author, most recently, of A Project to Find the Fundamental Theory of Physics.


The question that I'm asking myself is how does the universe work? What is the lowest level machine code for how our universe works? The big surprise to me is that over the last six months or so, I think we've figured out a path to be able to answer that question.

There's a lot of detail in how the path we've figured out relates to what's already known in physics. Once we know this is the low-level machine code for the universe, what can we then ask ourselves about why we have this universe and not another? Can we ask questions like why does this universe exist? Why does any universe exist? Some of those are questions that people asked a couple of thousand years ago.

Lots of Greek philosophers had their theories for how the universe fundamentally works. We've gotten many layers of physics and mathematics sophistication since then, but what I'm doing goes back to these core questions of how things fundamentally work underneath. For us, it's this simple structure that involves elements and relations that build into hypergraphs that evolve in certain ways, and then these hypergraphs build into multiway graphs and multiway causal graphs. From pieces of the way those work, we see what relativity is, what quantum mechanics is, and so on.

One of the questions that comes about when you imagine that you might hold in your hand a rule that will generate our whole universe, how do you then think about that? What's the way of understanding what's going on? One of the most obvious questions is why did we get this universe and not another? In particular, if the rule that we find is a comparatively simple rule, how did we get this simple-rule universe?

The lesson since the time of Copernicus has been that our Earth isn't the center of the universe. We're not special in this or that way. If it turns out that the rule that we find for our universe is this rule that, at least to us, seems simple, we get to ask ourselves why we lucked out and got this universe with a simple rule. I have to say, I wasn't expecting that there would be a good scientific answer to that question. One of the surprises from this project to try to find the fundamental theory of physics has been that we have an understanding of how that works.

There are three levels of understanding of how the universe works in this model of ours. It starts from what one can think of as atoms of space, these elements that are knitted together by connectivity to form what ends up behaving like the physical space in which we move. The first level of what's going on involves these elements and rules that describe how elements connected in a particular way should be transformed to elements connected in some other way. This connectivity of the elements is what makes up space when we look at, say, 10^100 or 10^400 of these elements. That's what behaves like the space we're familiar with, and not only space: all of the things that are in space, all the matter and particles, are just features of this underlying structure and its detailed way of connecting these elements together.

We've got this set of transformation rules that apply to those underlying elements. In this setup, space is a very different thing from time. One of the wrong turns of 20th-century physics was the idea that space and time should always be packaged together into a four-dimensional spacetime continuum. That's wrong. Time is different from space. Time is the inexorable operation of computation in figuring out what the next state will be from previous states, whereas space is the specific extent of, in this particular case, the hypergraph that knits together these different elements.

From the idea of this hypergraph being rewritten through time, when you are an observer embedded within that hypergraph, the only thing you are ultimately sensitive to is the question of which events that happen inside this hypergraph affect which other ones. What are the causal relationships between different events in this process of time evolution? From that, you get what we call a causal graph of what events affect what other events. It turns out that special relativity and then general relativity emerge basically from properties of that causal graph.

In our way of thinking about fundamental physics, there are three levels of description that end up corresponding to general relativity—the theory of space and time and gravity—quantum mechanics, and then the third level, which is something different.

In the lowest level of these models that we're constructing, the only thing we know about all of these elements is that they're just things. We know which things are related to which other things; for example, if we say that there are relations that involve pairs of things—binary relations—then we can say we've got these things and there are pairs that are related. We can draw that as a mathematical graph or a network, where we're just putting down points and joining them by a line. We happen to need a slight generalization of that, usually called a hypergraph in mathematics, where instead of just having relations between pairs of things, you can have relations between triples or quadruples of things.

You can't represent that with just a line between two things. It's like a bag of things that corresponds to each hyperedge. But that's a detail not really important to the big picture. The thing that is relevant is that the underlying rules just say that some collection of elements that are related in a certain way are transformed to some other collection of elements related in some other way.
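As a purely illustrative sketch (not the project's actual code), here is a toy rule of that kind in Python: the rule {{x, y}} → {{x, z}, {z, y}}, which replaces each binary relation with two relations through a freshly created element.

```python
def rewrite_step(hypergraph, next_id):
    """Apply the toy rule {{x, y}} -> {{x, z}, {z, y}} to every edge:
    each relation is split by a freshly created element z.
    `hypergraph` is a list of tuples of element ids (binary relations
    here; the real models also allow triples, quadruples, ...)."""
    new_edges = []
    for (x, y) in hypergraph:
        z = next_id          # a brand-new "atom of space"
        next_id += 1
        new_edges += [(x, z), (z, y)]
    return new_edges, next_id

# Start with a single self-relation and watch "space" grow:
graph, counter = [(0, 0)], 1
for _ in range(4):
    graph, counter = rewrite_step(graph, counter)
print(len(graph))  # the number of relations doubles each step
```

With this particular rule the element count grows exponentially; different rules give hypergraphs with very different emergent geometry.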

The whole operation of the universe consists of just rerunning that particular rule a gazillion times. Maybe the gazillion is about 10^400 for our universe; I'm not sure about that. It's based on one estimate of how this might work.

The first level is to understand, as you apply these rules, what are the causal relationships between applying a rule in one place, then that rule produces certain output, and that output gets used when the rule is applied again in the same place or in a nearby place. You can draw this network, this graph, of the causal relationships of what output is needed to feed the input to another updating event. That causal graph turns out to be our representation of space and time.
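The bookkeeping behind such a causal graph can be sketched with a toy string-substitution system (a hypothetical stand-in for the hypergraph case): track which event produced each piece of the state, and record an edge whenever a later event consumes an earlier event's output.

```python
def causal_graph(s, lhs="BA", rhs="AB", max_events=10):
    """Toy causal graph: each event rewrites the leftmost 'BA' to 'AB'.
    producers[i] is the event that last wrote character i (None for
    the initial condition); an edge (e1, e2) means event e2 consumed
    output produced by event e1."""
    producers = [None] * len(s)
    edges, event = set(), 0
    while event < max_events:
        i = s.find(lhs)
        if i < 0:
            break  # no more matches: the evolution has finished
        for j in range(i, i + len(lhs)):
            if producers[j] is not None:
                edges.add((producers[j], event))
        s = s[:i] + rhs + s[i + len(lhs):]
        for j in range(i, i + len(rhs)):
            producers[j] = event
        event += 1
    return s, sorted(edges)

final, deps = causal_graph("BBAA")
print(final, deps)  # -> AABB [(0, 1), (0, 2), (1, 3), (2, 3)]
```

The returned edge list is the causal graph: it records only which update events fed which others, which is exactly the structure an embedded observer is sensitive to.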

That causal graph has properties that reproduce special relativity and then general relativity, the theory of gravity. That's a feature of these models, that in the limit of a very large number of these little update rules, with certain assumptions—like the assumption that the limiting space of our universe is finite dimensional—it follows that what happens satisfies Einstein's equations for general relativity. Then the next level of this is to apply these transformations to this hypergraph, to this collection of relations. But there might be many possible places where a particular transformation might apply, which one should I run? Which one should I do? The next piece of these models is to do all of them, and what you'll build is what we call a multiway graph, which represents all possible updates that you can have done.

If you do one update, it might allow you to do another update; if you don't do that update, that other update might not be possible. It's not saying just do everything. There's still a lot of structural information in what could happen after what, and what can happen at the same time as what. So, this multiway graph turns out to be a representation of what in quantum mechanics people have thought of as the path integral. In classical mechanics, say you throw a ball: the ball moves along a particular definite trajectory. In quantum mechanics, the ball has many possible trajectories it follows, which are all weighted in a certain way, and what we observe corresponds to some weighting or some combination of those trajectories.
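A multiway graph is easy to sketch for a toy string-substitution system (the rules below are invented purely for illustration): at each step, apply every rule at every position where it matches, and keep both the states reached and the update edges between them.

```python
def multiway(initial, rules, steps):
    """All possible applications of all rules: the multiway graph.
    States are strings; an edge (s, t) means one update takes s to t."""
    frontier, states, edges = {initial}, {initial}, set()
    for _ in range(steps):
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i >= 0:  # every match position, not just the first
                    t = s[:i] + rhs + s[i + len(lhs):]
                    edges.add((s, t))
                    nxt.add(t)
                    i = s.find(lhs, i + 1)
        frontier = nxt - states
        states |= nxt
    return states, edges

# Invented toy rules: A -> AB and B -> A, run for two steps.
states, edges = multiway("AB", [("A", "AB"), ("B", "A")], 2)
print(sorted(states))
```

Note that the state "ABA" is reached along two different branches; such merging of branches is what causal invariance is about.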

In our models, that corresponds to what happens in this multiway graph: there are many possible paths that can be followed in the multiway graph, yet in quantum mechanics we believe we measure definite things. It turns out, very elegantly, that just as in relativity we're used to the idea of reference frames, of observers thinking about the universe in terms of their reference frame. Are they at rest? Are they traveling at a certain velocity? Are they accelerating? What is their state of motion? In quantum mechanics, we have an analog of reference frames, which we call quantum observation frames (QOFs), that represent the way we're choosing to experience this multiway system of possibilities.

In any case, one can reproduce the various results of quantum mechanics. We're busily going through and trying to reproduce all the different things that show up in quantum mechanics. One of the things we can do is take, for example, quantum computers and compile all that formalism into these multiway graphs. If you've got a quantum computer described in the standard formalism of quantum computing, then you just run this program and you'll get a multiway graph that implements the same thing. So that's proof that these multiway graphs reproduce the physics of quantum computing.

In spacetime, a big result is Einstein's equations, which say that the curvature of space depends on the presence of matter. If you have a thing that is following a straight line, let's say you shoot a laser in some direction. Normally, you think the light from a laser just goes in a straight line. But when there's a massive object, like a star or a black hole, the path of that laser light will be turned by the presence of that mass. Einstein's equations describe how that turning works. They say that the curvature of space, the amount of turning, depends on the amount of energy momentum that exists in space.

In our multiway graph, we also think about paths through the multiway graph. We can also think about the presence of energy momentum in the multiway graph, the presence of energy momentum in the quantum system that is described by this multiway graph. Something really amazing happens: Einstein's equations in the classical picture of space and time turn out to be exactly Feynman's path integral in quantum mechanics.

These various paths that are representing the possibilities in quantum mechanics are effectively being turned in this multiway space by the presence of energy momentum, or more specifically, by the presence of the Lagrangian density, which is a relativistically invariant analog of energy momentum. In other words, the core of quantum mechanics, which is the way that the phases work in the path integral, is the exact same phenomenon as the core of classical general relativity, the way that trajectories are turned by the presence of energy momentum in spacetime. That's a pretty cool thing that I'm excited about.

When we think about this multiway system, we're saying that this particular rule can apply in different places and in different ways—just do all possible applications of that rule. Now, go one level up from that. Let's say that not only are you applying a particular rule in all possible ways, you're also applying all possible rules. At every moment, you're looking at all possible rules that could be applied to update this piece of this thing that represents our universe. You might ask how you could ever conclude anything if you apply all possible rules at every possible point. It turns out that there's a thing called causal invariance, which is what makes it possible to say definite things; just as in the spacetime case and the quantum mechanics case, it applies again here.

The main point is that at every event, you can have an update event that corresponds to every possible rule you might apply. You, as an observer of the universe, could choose a frame in which you're only considering the path through this ultra-multiway system of all possible applications of rules. You're only considering the path that corresponds to the application of one particular rule. It's like saying, I've got my way of describing the universe and I'm only going to consider that one; the fact that all these other possible paths are being followed, well, yes, but I'm not interested in those aspects of the universe—I'm just interested in the aspects of the universe that correspond to the particular rule that I've identified as being my reference frame for thinking about the universe. You might say, but there are all these other universes, and they're all doing different things. Because of this property of causal invariance, in the end it doesn't matter because they're all in some sense doing the same thing.

We're looking at essentially all these possible quantum states and arranging them not in physical space, but in a different kind of space—in this branchial space. In quantum mechanics, we have this notion of entanglement of some sort of relationship between two states. Branchial space is essentially a map of entanglement space. When you look at the extent of branchial space, you're saying, how entangled are these two quantum states? If they're very entangled, they'll be close in branchial space. If they're not very entangled, they'll be far apart in branchial space. In a sense, when we start talking about measurement in quantum mechanics, we're talking about looking at particular regions of branchial space in various ways. When we're applying a particular rule in different ways, we're representing those different possible results of different applications by this branchial space of possible results.
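Under the assumption that "shares an immediate common ancestor in the multiway graph" is the notion of nearness being described, a branchial graph can be sketched like this (a toy illustration, not the project's code): on one time slice, connect two states when some earlier state branched into both of them.

```python
from itertools import combinations

def branchial_graph(multiway_edges, slice_states):
    """Toy branchial graph on one time slice: connect two states when
    they share an immediate common ancestor in the multiway evolution.
    Nearness here is a rough stand-in for entanglement: the more
    entangled two states are, the closer they sit in branchial space."""
    parents = {}
    for a, b in multiway_edges:
        parents.setdefault(b, set()).add(a)
    return sorted(
        (s, t)
        for s, t in combinations(sorted(slice_states), 2)
        if parents.get(s, set()) & parents.get(t, set())
    )

# Hypothetical one-step multiway edges: the state "AB" branches two ways.
edges = {("AB", "AAB"), ("AB", "ABB")}
print(branchial_graph(edges, {"AAB", "ABB"}))  # -> [('AAB', 'ABB')]
```

On larger multiway systems the same construction yields a graph whose large-scale geometry is what "branchial space" refers to.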

Let's come back to the case for applying all possible rules. We have this thing called rulial space, which is the space of outcomes from all these different possible rules. There's a lot we can say about these different reference frames with which we study rulial space. The main point here is that there is this ultra-multiway system that corresponds to the application of all possible underlying rules. The question of why we are looking at this universe and not another universe never has to be asked, because in this ultra-multiway system, every conceivable universe is in that. But because of this property of causal invariance and so on, it turns out that in some sense they all do the same thing. This sounds very bizarre. Let me give an indication of why we already know this has to be true.

One of the fundamental results of computation is this idea of universal computation. Go back to, say, 1900, and you would say to somebody, make me a machine that will compute square roots. The person might construct this machine, which has all kinds of cogs in it, or maybe electrical switches, and it computes square roots. Now, I want a machine that figures out whether words are palindromes. Well, we've got to get a completely different machine off the shelf, with a completely different arrangement of cogs, gears, and switches.

That was the view of computation that existed before basically the 1930s. Then, as a result of Gödel and then later Turing, there was the emerging understanding that you didn't need to do that. You could have a single universal machine with a particular configuration of cogs and switches, and just by changing the way you set the machine up at the beginning—the programming of the machine—you could get it to compute anything you wanted to compute. People didn't know how universal that universal machine was. Gödel thought maybe it was universal with respect to mathematical things, but maybe human minds work differently. People in physics thought maybe it's universal with respect to things that you could make out of the digital computer, but it isn't the way physics works. Up until probably the 1980s, or even beyond, people thought that.

My own work on the relationship between physics and computation probably was an important piece in having people take the idea that universal computation is universal, even with respect to things in physics, more seriously. We still don't know that for sure, but within this domain of things that we can represent with a universal computer, one might say, well, I'm representing my model of the universe as something that can be computed with a universal computer—with a Turing machine, with my computer on my desk, or an infinite version of my computer on my desk.

One might say, why is the computer on my desk any better than a computer on your desk? Why is it better than a Turing machine? It turns out that they're all universal. You can program any one of them to do anything you can program any of the others to do, or you can even program one of them to emulate one of the other ones.
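That emulation point can be sketched with a single interpreter in Python: the machine to run is supplied purely as data, so one fixed program runs any of them (the bit-flipping machine below is an invented example, not any canonical machine).

```python
def run_turing(rules, tape, state="q0", head=0, max_steps=1000):
    """A universal-style interpreter: the machine is just data.
    `rules` maps (state, symbol) -> (written_symbol, move, next_state);
    the machine halts when no rule matches.  Blank cells read as 0."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        key = (state, cells.get(head, 0))
        if key not in rules:
            break  # no applicable rule: halt
        sym, move, state = rules[key]
        cells[head] = sym
        head += move
    return [cells.get(i, 0) for i in range(max(cells) + 1)]

# An invented machine: flip bits left to right, halt at the marker 2.
flipper = {("q0", 0): (1, 1, "q0"), ("q0", 1): (0, 1, "q0")}
print(run_turing(flipper, [1, 0, 1, 2]))  # -> [0, 1, 0, 2]
```

The interpreter never changes; only the rule table does. That separation of fixed interpreter from machine-as-data is the essence of universality.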

This idea of universal computation already tells one that once you say the universe is something that can be generated by a universal computer, that fact is telling you that there's some sort of singleness to the way of describing the universe. In this rulial space of all possible rules, everything that's there is representable by a universal computer. You can translate between reference frames by essentially having what amounts to one universal computer description emulate another universal computer description by giving it the appropriate programming to emulate that other thing.

The one ultimate fact would be that the universe is computational, that the universe can be represented by a universal computer. That fact is not self-evident. It might not be true. It might be that there are hypercomputers that go beyond the computers we can build with Turing machines, that our universe might be a hypercomputer. Though, I don't think it is. What we're learning from this adventure in studying fundamental physics is that there is a description of the universe in terms of ordinary universal computation, and we're finding the details of how that works.

When we find the rule for the universe, how come it's this rule and not another? How do we think about that? Within this space of all possible rules, we're finding a reference frame that is our way of understanding the universe. What this leads to is a realization that we have a certain way of describing the universe that is based on our senses, the way our physics has developed. It's useful for us to think about things that are fixed time but everywhere in space. We look around us, the speed of light is very fast compared to our sensory processing, so for us it looks like we're seeing everywhere in space at a particular time. That's part of our way of describing the universe.

If our primary means of sensory input was olfaction, then we probably wouldn't think about things that way because smells travel very slowly. They travel by diffusion of molecules through air. Our notion of simultaneity, our idea that it's worthwhile to describe the universe by a series of successive times where everything in space happens at that time, probably would be different. But we can imagine even vastly more extreme differences in descriptions of the universe. What we're ultimately doing in finding a fundamental theory for physics is finding a description of the physical world in the reference frame that connects with the things we're used to dealing with, in both our sensory input and the mathematics and physics we built.

I spent much of my life as a computational language designer, which presents a similar problem (though a bit easier). In computational language, you're trying to make a bridge between the way humans think and the kinds of things that happen computationally. What one is doing in finding this fundamental theory of physics is a three-legged version of the language design problem. We have humans over here, we have the physical universe over here, and we have computers over here. One way to describe the universe is just to say, look at the universe—here it is. It does what it does. We don't think that will be a satisfactory model of the universe. That's not something that pulls the universe into something that we humans can say we understand.

What we're trying to do is create, through the medium of computers, something that can be made to represent all the complexity we see in the universe. And then we have actual physics over here. We're doing this three-legged language design problem of trying to find this description language that will knit these three things together. I view that as being the purpose of our project. That's why it will be unsurprising if the description that we come up with for the universe is as much a reflection of us and our way of thinking as it is a reflection of something fundamental about the universe.

One of the consequences of this is that it gives one the idea that there are very different planes of description or even planes of existence in the universe. We view the universe in terms of these simultaneous moments in time, this idea of material objects that work in this way or that way. This is something that is much more specific to us than we imagined. Imagine an extraterrestrial intelligence and the problem of communicating. If these are entities within our universe, at least they have the same physics, right? But I think that's wrong, because what we realize is that there are forms of description of the universe that are utterly incoherent with the ones we have, where the things that are being identified as being knitted together and significant are just utterly different from the ones that we choose to do that way.

This notion that we knit together all these different points in space at a given time is specific to our experience of the universe, which is an experience where our speed of sensory processing is much slower than the speed of light. If we were a completely different size, say the size of a galaxy, that would not be our experience, depending on the speed of our processing. But it could be the case that the speed of our processing is very fast compared to the speed of light over the distance scales that we're interested in. Then our description of the universe would be very different.

That breaks the conundrum that I've long wondered about: if we have the rule for the universe, how do we understand why it's this rule and not another? Having got some understanding of that, and of why there's only one universe, it no longer makes sense to ask whether there could be a copy of our universe, or an incoherent universe that got a different rule while ours got this one. The answer is no. Could there be entities within the universe that understand the universe in utterly different ways? Yes, but it's still the same universe.

One of the things I've been thinking about recently is whether we have any hope of explaining why the universe exists. Normally in doing science, we're doing inductive inference of some kind. We know certain phenomena, so there must be a scientific law and, therefore, this thing happens. We always imagine that our induction about nature is approximate. We say that we observe this and that, and we think it's going to do this, but we're not ultimately sure that's the way nature works. This project of ours will essentially reduce physics to mathematics. What we're saying is we can find an underlying rule for the physical universe in which the operation of the universe is as inexorable as generating the digits of pi, or as inexorable as multiplying two numbers together.

What our universe does is this inexorable computation that comes about through the consequences of this underlying rule. There's no, oh, we've made an approximation here, or, oh, there's going to be some more fundamental theory. No, this is it. Although we can't predict what the universe is going to do, because it would require as much computation as the universe itself goes through to figure out what it's doing, and since, among other things, we're embedded within the universe, we're never going to be able to outrun the universe. We just have to watch the universe unfold and see what it does.

Once we have this underlying rule, the rest of the universe is inexorable, and we can ask: in what sense does that rule exist? We can write it down because it's some piece of mathematics. Why does the thing get actualized? Why is it not like the digits of pi, which could be generated but aren't actually being generated? It's not as if there are little critters living in the digits of pi and admiring their various configurations of digits; there's just this abstract generation of the digits of pi. But there's something about the universe that's different from its pure abstract representation. Somehow, the universe has been actualized. Could we know why that happened, how that happened? What could we know about that?

I have a speculation, but I don't know if it's going to turn out to be correct. It has to do with a version of the same approach that Gödel used to figure out where mathematics can't reach. Gödel's most famous result is his first incompleteness theorem, which says that within the axiomatic system of arithmetic there are statements you can write down that are statements about arithmetic, yet the axiomatic system itself, that finite collection of axioms called the Peano axioms, can never tell you whether those statements are true.

There's also his second incompleteness theorem, which says that from within arithmetic you can't prove that arithmetic is a consistent axiomatic system, that is, that it can never establish both a statement and its negation. Why is that relevant to what we're talking about? Gödel took what seemed like a purely logical, philosophical statement, "This statement is unprovable," and showed that there was a way of representing that statement in terms of arithmetic. What he did was show that the idea of provability could be represented, using Gödel numbering, as statements about sequences of numbers and equations.

He essentially made a compiler: he went from "this statement is unprovable" down to a machine code that was a bunch of equations involving integers and arithmetic. He thereby showed that that statement was an arithmetic statement, and then used the paradoxical structure of that statement to show that there exist statements that, while they can be represented in terms of arithmetic, can never be reached by a finite proof from the axioms of arithmetic.
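The arithmetization step (just the numbering trick, not Gödel's full construction) can be sketched like this: encode a sequence of symbol codes as a single integer using prime-power exponents, so that statements about statements become statements about ordinary integers.

```python
def godel_number(codes):
    """Encode a sequence of positive symbol codes as one integer: the
    i-th prime raised to the i-th code.  Unique factorization makes
    the encoding reversible."""
    n, p = 1, 2
    for c in codes:
        n *= p ** c
        p += 1
        while any(p % q == 0 for q in range(2, p)):  # next prime
            p += 1
    return n

def godel_decode(n):
    """Invert godel_number by factoring out successive primes."""
    codes, p = [], 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        codes.append(e)
        p += 1
        while any(p % q == 0 for q in range(2, p)):  # next prime
            p += 1
    return codes

print(godel_number([1, 2, 3]))  # 2^1 * 3^2 * 5^3 = 2250
print(godel_decode(2250))       # -> [1, 2, 3]
```

Once sequences of symbols are integers, "X proves Y" becomes an arithmetic relation between two integers, which is what lets the paradoxical statement be expressed inside arithmetic itself.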

Why is that relevant to our case? I have a suspicion that it might be possible to essentially compile such a statement about the existence of the universe into something that can be essentially executed in the low-level machine code of physics. And if one can show that a metaphysical statement, a statement about the existence of physics, or, particularly, a paradoxical version of that statement is actually a statement that can be stated in terms of physics, then one has the potential to show that, in the end, for entities within our universe, there will simply be no way to establish finite proof of the existence of the universe. This is my speculation of how this might work.

How do you use methods that we know from mathematics, logic, and computation to address a question like that? The proof that exists is a proof that we cannot generate. Why are the abstract laws of the universe actualized? One would think that would be a question that would have been thought about in a time when theological thought was the leading edge of how people thought about the universe.

Spinoza is famous for having said that the universe is the thoughts of God actualized, which is to say that when we think about our underlying rule for the universe, God has no choice about how the universe operates because there's just this rule that's applied. It's kind of like these thoughts of God are the results of applying this rule and our universe is the actualization of that. The question is why is there an actualization?

As I said, I have this potentially mathematical-logic way of trying to understand that. Gödel himself had an attempt at a proof of the existence of God, a proof in mathematical logic. Proofs of things like that are fraught with issues. The question is, can you cut through all of that once one has an understanding of how our universe works? I don't know the answer, but that's the thing I'm curious about.

Of the things that we do today, what will look as primitive in their description as to say that there's an immortal soul? We might now say this is an abstract computation that is immortal in the sense that that abstraction has nothing to do with the specifics of brain tissue; it's just an abstract, almost mathematical computation. What is it that we talk about today that will seem similarly naive? One of the main things that I see is this idea that so much of the universe isn't worth describing. What do I mean by that?

For example, we just say lots of stuff is just random heat. We say the configuration of air molecules in this room, we don't need to describe it. It doesn't matter. Nothing that we do is affected by the detailed configuration of all of those individual molecules. Let's just not talk about that. Let's just say there's a certain temperature and pressure of air in this room, and that's all we care about. I suspect that will eventually seem very naive. Perhaps the reason we say that is we exist at a certain scale. We are not sensitive to these individual molecules and what they do. We're only sensitive to the aggregate effect of the pressure of these trillion, trillion molecules that might be in this region of air.

Even if we operated at a much smaller scale, we might be much more acutely aware that this particular molecule did this. When we operate at that smaller scale, we're operating in this multiway graph of quantum mechanics. That's a whole other level of complexity of description.

When it comes to quantum measurement, we've got the same thing. We want to say something definitely happened. Don't just tell me it's a superposition of quantum states. I want to know what happened. We insist on having this notion of definite things happen at a scale of relevance to our sensors. I suspect that's a place where in the end there will be forms of description of the world that are sensitive to many more aspects of what happens than the ones that we're currently dealing with.

It's like saying, there's sensory data that we have, but the true sensory data might be this giant list of all of these different detailed things about the particular configuration of molecules that form this elaborate ring of stuff that represented this computation that did this or that thing. I don't know how to describe it. If I did, we would be in the future now, so to speak. That's an example of something that adds humility to our view of science, where even though we're proud of the level of description that we're able to get, there are many more levels that we're not able to get.

When we think about physics, there are these different reference frames in rulial space that correspond to essentially utterly incoherent ways to describe the universe, possibly even in the way that human thinking has evolved. We have our particular logical scientific way to describe what happens in the universe. There are other ones. When people tell me about some Eastern philosophy approach to ways of thinking about the universe, I don't know what they're talking about. It's not something I've ever been able to wrap my brain around.

Quite possibly, there's a version of that kind of thinking that has the same relationship as our understanding of how computation works to the theological understanding of souls. There may be a similar prong that needs to be built in that direction to make more detailed results and ideas from that underlying different approach to thinking about the universe.

How can you talk about the universe as a thing when the universe is supposed to be everything? Why does that make sense? Our physical universe is a specific thing. There are things that one can imagine that are other than our physical universe. To say that we have a fundamental theory of physics is to say that we know the thing that corresponds to our physical universe, and it's not this thing over here that we can imagine but that is not our physical universe. What do I mean by that?

For example, in computation there are limits to what something like a Turing machine can do. Say to a Turing machine, give me a systematic way to predict the infinite time result of running a Turing machine. You could just run the Turing machine, but that's not going to work because you're asking what's the infinite time result from running the Turing machine. Unless you have a way to speed up the running of the Turing machine, it could take you an infinite time to answer that question.

Let's just imagine that you have an oracle for the Turing machine. That's something which says that you don't have to run it for an infinite amount of time; it can just tell you the answer. There are all these Turing machines that compute, and within the set of all those Turing machines, they're never going to be able to systematically answer this infinite time question. That question is going to look undecidable to all of those Turing machines.

Yet, you have this hypercomputer that says it can tell you the answer is seventeen. That's an example of something that is a type of thing that you can imagine. You can even make certain mathematical statements about it. You can imagine it, but the claim is that it's not actualized in our universe. In other words, it is not a tautological statement to say something about the universe.

To have a fundamental theory of physics is to say that this is the stuff that happens in our universe and this is stuff that never happens in our universe. For example, in our fundamental theory of physics, our universe simply doesn't do hypercomputation. You could ask what our universe will do after an infinite time, but you can't get the answer to that in our universe. You can imagine a more sophisticated universe that can always answer that question. What are the set of things that are actualized, and what are those set of things that are imaginable but not actualized? That's the sense in which it is meaningful to talk about the universe as a thing.
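The Turing-machine point can be made concrete with a minimal simulator. The encoding below is a hypothetical toy of mine, not a standard one: within ordinary computation all we can ever do is run for some finite budget of steps, and if the machine hasn't halted by then, the infinite-time question simply remains open; only an oracle outside the system could settle it.

```python
# A minimal Turing-machine runner. rules maps (state, symbol) to
# (new_state, new_symbol, move) with move in {-1, +1}; a missing rule
# means the machine halts. We can only run for a finite number of steps.
def run_tm(rules, tape, steps):
    state, head = 'A', 0
    tape = dict(enumerate(tape))  # sparse tape, blank symbol is 0
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in rules:
            return 'halted', tape
        state, tape[head], move = rules[key]
        head += move
    return 'still running', tape  # budget exhausted: question undecided

# A machine that never halts: it writes 1 and moves right forever.
forever = {('A', 0): ('A', 1, +1)}
status, _ = run_tm(forever, [0], steps=1000)
# status is 'still running' -- and no finite budget will ever settle it
```

A hypercomputer, in the sense above, would be precisely the thing that returns the infinite-time answer for `forever` without running it; the claim is that nothing in our universe does that.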

As entities within our universe, how can we say anything about what our universe does? There's this idea of computational irreducibility that I invented in the 1980s, which is a finer version of things like Gödel's theorem. If you have a computation, is it an irreducible computation or is it a computation where you can readily jump out and say what the answer is going to be? Let's say my computation just went 101010, and just kept on doing that. I say, what's going to happen after a million steps?

You can immediately say, well, a million mod 2 is zero and therefore you'll get a zero at that step. Are there computations which have the property that, within the class of computations that can be done by Turing machines, or, as we now believe, our universe, there is no way to jump ahead? They are irreducible. The only way to get the answer to the computation is just to run the computation.
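The contrast can be sketched in a few lines. This is a toy illustration: the alternating sequence has a closed-form shortcut, while rule 30, the cellular automaton Wolfram often cites as an apparently irreducible example, has no known shortcut, so we seemingly have to run every step.

```python
# Reducible: step n of the sequence 1,0,1,0,... (counting steps from 1)
# has a closed-form shortcut -- no simulation needed.
def alternating(n):
    return n % 2  # jump straight to step n

# Apparently irreducible: the center cell of rule 30 at step n. The update
# for rule 30 is new = left XOR (center OR right). No shortcut is known;
# we run all n steps on a wide enough (circular) tape.
def rule30_center(n):
    width = 2 * n + 3
    cells = [0] * width
    cells[width // 2] = 1  # single black cell initial condition
    for _ in range(n):
        cells = [
            cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
            for i in range(width)
        ]
    return cells[width // 2]
```

For `alternating`, step one million costs one arithmetic operation; for `rule30_center`, step one million appears to cost a million updates of an ever-wider row, which is the practical face of irreducibility.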

My claim of what I call the Principle of Computational Equivalence is that computational irreducibility is ubiquitous among computational systems. In particular, it's what, for example, makes nature seem complex to us, because the computations it's running are of the same sophistication as the computations that are running in our brains. It's something where we can't readily predict what's going to happen because it's an irreducible computation. It's something where we just have to follow each step to see what happens. If the universe follows some computational rule, why can we say anything about what it does? Why isn't it all mired in computational irreducibility?

I began this project to try and figure out fundamental physics thirty years ago. I've stopped many times. And I had stopped for a longer period of time for reasons that might be interesting to talk about. When I restarted it, I expected that we might be able to say something about 10^-1000 seconds after the beginning of the universe, but that after that we would be so mired in computational irreducibility that we wouldn't be able to make big statements about the universe. It turns out I was wrong.

What we learned is that there is a layer of computational reducibility. We already knew that within any computationally irreducible system, there are always pockets of computational reducibility. What we realized is that basically most of physics, as we know it, lives in a layer of computational reducibility that sits on top of the computational irreducibility that corresponds to the underlying stuff of the universe.

All of these statements about relativity and reference frames, and all of these equations that are global statements about spacetime sit in this layer of computational reducibility. And that's why we can conclude something from our models. It's also in a sense why we humans have the impression that we can say something about how things work in the world. It might be the case that everything we see in the world is just so incredibly irreducibly complex to understand that we can never say anything about what's going to happen. There are plenty of things where we can't say what's going to happen in the world, but there are plenty of things where we can, and those are the pieces of computational reducibility, and those are the things that physics has tied into.

We have models of physics, we can compute all kinds of things from them, why does it matter what the underlying low-level machine code of the universe is like? Some of it doesn't really matter. We can keep doing our engineering, keep doing our physics. I doubt there will be immediate technological consequences. In biology, if somebody says that we now know how life was first created on Earth, okay, that's nice, but it doesn't affect your average biomedical researcher at all to know that low-level fact. Same with physics. But here's something interesting.

One of the important moments in the history of physics was Copernicus' efforts in the 1500s. People like Ptolemy had all these schemes for computing positions of planets based on epicycles, with the assumption that Earth was the center of the universe and you could compute all these positions of planets. It was a pretty accurate way of doing predictions. In fact, the humorous thing is that people say epicycles were bad news, but if you look at how we compute positions of things in the modern world, we are mathematically using the equivalent of 10,000 epicycles.
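The remark about the "equivalent of 10,000 epicycles" can be read as a statement about Fourier series: a sum of uniformly rotating circles is mathematically a trigonometric series, and enough terms can approximate any periodic orbit. A minimal sketch follows; the coefficients are made up for illustration and are not any real planetary ephemeris.

```python
import cmath

# An "epicycle" position: each term is a circle of complex amplitude c
# rotating at integer frequency k -- exactly a Fourier series term.
def epicycle_position(t, coeffs):
    """coeffs maps frequency k to complex amplitude; t runs over [0, 1)."""
    return sum(c * cmath.exp(2j * cmath.pi * k * t) for k, c in coeffs.items())

# One big circle (the "deferent") plus one smaller epicycle:
orbit = {1: 1.0 + 0j, 3: 0.2 + 0j}
z0 = epicycle_position(0.0, orbit)  # both circles start at angle 0: radii add
z_half = epicycle_position(0.5, orbit)
```

Adding more frequency terms refines the path, which is why an epicycle construction can match observations to any accuracy; Copernicus changed the foundation of the description, not this computational machinery.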

Back in the 1500s, Copernicus was saying that we can redo the foundations of this. The actual computations work something like epicycles, but we can say the Earth is going around the sun. I don't know how many people cared about the technical mathematical details of what Copernicus did at the time, but it probably wasn't a huge number. It didn't make much difference to how you compute things. Still doesn't. What did make a difference was the philosophy of a different foundation. Copernicus made people start thinking that it might be possible to have something which we can deduce from science that is at odds with our common sense. That was a fundamentally important observation that led eventually to a lot of modern science.

We're now in this situation where people just assume that science can compute everything, that if we have all the right input data and we have the right models, science will figure it out. If we learn that our universe is fundamentally computational, that throws us right into the idea that computation is a paradigm you have to care about. The big transition was from using equations to describe how everything works to using programs and computation to describe how things work. And that's a transition that has happened after 300 years of equations. The transition time to using programs has been remarkably quick, a decade or two. One area that was a holdout, despite the transition of many fields of science into the computational models direction, was fundamental physics.

If we can firmly establish this fundamental theory of physics, we know it's computation all the way down. Once we know it's computation all the way down, we're forced to think about it computationally. One of the consequences of thinking about things computationally is this phenomenon of computational irreducibility. You can't get around it. That means we have always had the point of view that science will eventually figure out everything, but computational irreducibility says that can't work. It says that even if we know the rules for the system, it may be the case that we can't work out what that system will do any more efficiently than basically just running the system and seeing what happens, just doing the experiment so to speak. We can't have a predictive theoretical science of what's going to happen.

Computational irreducibility is a very "from within science" kind of thing. It's as if science is explaining its own limitations from within the science itself, that science is the thing that makes immediate predictions about what's going to happen. But that isn't the true story of what science can do. If I'm to speculate on the longer-term consequences of success in finding the fundamental theory of physics and showing that it's computational, this realization that computation really is the fundamental thing in our world, we therefore have to take it seriously through and through. That's a potential consequence there, more so than saying, are we going to build warp drive by using the technology that exists now that we know how space really works.


Let's just talk a little bit about this project of mine to find the fundamental theory of physics. This project emerged from a paradigm view of the world that I've developed over the last forty years or so from thinking about things computationally. I thought that what we would build is a prong that is separate from the traditional thinking about physics. The big surprise of the last few months is that a lot of what we figured out dovetails beautifully with a lot of interesting things that people have figured out in mathematical approaches to physics. It's a big surprise, and it's a wonderful unification.

As I look back, there were basically three big mistakes in the history of physics that made it more difficult to see what we're now seeing. One mistake is a Euclid mistake, which is the idea that space is continuous, a point is indivisible, that there's a continuum of space, that space isn't made from discrete atoms. Another wrong turn was in the early 1900s, the idea of spacetime. Einstein didn't really talk about that; he talked about space and he talked about time. Then Minkowski came along and said, mathematically it's convenient to package space and time together and think of them as the same thing. That was, in the end, a mistake. The same results emerge, but you think about them differently. The third one, more recently realized, is the description of how things work quantum mechanically, that quantum amplitudes are complex numbers. That's a mistake. In mathematical terms, a complex number can be described as a magnitude and a phase. Those really have to be separated as they come from different places.

I thought that what we were doing was going to be a prong that was completely separate from the traditions of modern mathematical physics. It's not the case. The machine code is different. The low-level motivations are different, but the mathematics that shows up has many commonalities. Take string theory, for example. In the original formulation of string theory in the 1960s, string theory was formulated as a theory of the strong interactions. It turns out that didn't work out. It was resurrected, so to speak, in the 1980s as a theory of supergravity. My guess is that the stuff we've done will provide another resurrection of the same mathematical structures, but with a different low-level machine code.

I'm observing a high degree of interest among lots of kinds of people in our project. We're seeing a bunch of people who don't right now work on fundamental physics who are starting to work on it. It turns out one of the things that comes out of our models is that the core problems of distributed computing are very similar to some of the core problems of understanding spacetime and quantum mechanics. There's a trade route that has been opened up. Similarly, with some areas of mathematics, there are areas of mathematics where we didn't know they would have any relevance to physics, but we now see that they have. Now when it comes to the core of the fundamental physics world, it's like everything, there's a diversity of responses.

One very common response is, what about this simple physics system, how does that fit in? One of the problems there is that our model is very complete. If you are doing some weird idealization that imagines that you can set up an experiment like this or that, well maybe you just can't do that in our model because our model is representing the real world. Another thing people say is, what can you predict from your theory? Usually when theories start, sometimes you're lucky and there's some immediate dramatic prediction. A lot of the time, it is a theoretical prediction that's the most important. And that's a lot of what's going on right now.

The theoretical prediction is we know a bunch of stuff about physics as physics is understood today. Can we reproduce those known things? Can we make the prediction that, as we go exploring our models, we're going to find that they correspond to existing physics? They may say some things that are different from existing physics, but they will successfully reproduce existing physics. Or is it going to be the case that as we poke in this direction and discover that our model doesn't work there, we'll have to add all kinds of extra stuff to reproduce existing physics?

We have a whole class of experimental predictions. There is one problem, which is that we don't know a scale factor. We don't know things like the maximum entanglement speed in quantum mechanics, which might be maybe 10^5 solar masses per second, but it might be 10^-18 solar masses per second, or it might be 10^50 solar masses per second. We know there's some value, we just don't know what that value is. If you're going to do experiments, it's not so easy to design those experiments without knowing that.

The other thing to realize about theories is that experiments are hard. Newton, for example, in his Principia, made a prediction about the position of the moon. His prediction was wrong by a factor of two. Actually, it wasn't a prediction because he already knew the answer, and he already knew it was wrong by a factor of two. But he didn't give up on his theory because of that. He just said that the computation is hard, and the theory is still fine, but we need to work harder on the computation. It took another hundred years before that computation was done reasonably accurately. Bad things happen both at the level of experiments that are done and at the level of the computations necessary to see how the experiments will come out. It's not the right approach, particularly at this stage in a very big theory like this, to be saying, show me a particular experimental result, because it's usually quite a long tower to get to measurable things.

There are phenomena where you have to know the scale factor. You have to know the value of Planck's constant. And then it's very hard to tell people to go out and try to observe these things. There are predictions about models that say, for example, there will be a maximum entanglement speed in quantum mechanics; we just don't know how big it is. And that's a different idea that hasn't existed in other parts of physics.

It's exciting because it's this period when a lot can get discovered fairly quickly. Fundamental physics hasn't been in this state for a long time. It's been in this state before, a hundred years ago, roughly in the early days of quantum mechanics. It was a period of rapid discovery. But physics has not been in a rapid discovery state for a long time. And when a field is not in rapid discovery state, it develops a certain rhythm and a certain culture that's different from what it should have and can have in a rapid discovery phase.

In a field where one is seven academic generations away from the time when there was rapid discovery, the field develops a certain institutional character and cultural expectation. I have to say, I am pleasantly surprised at the level of absorption of what we're doing. It's an interesting dynamic because we do these live streams where we're showing the actual science being done, and we'll routinely get hundreds, thousands of people watching these things. That's a new and different way of experiencing science and the progress of science than has ever existed before.