**THE CUL-DE-SAC OF THE COMPUTATIONAL METAPHOR**

**RODNEY BROOKS:** I’m going to go over a wide range of things that everyone will likely find something to disagree with. I want to start out by saying that I’m a materialist reductionist. As I talk, some people might get a little worried that I’m going off like Chalmers or something, but I’m not. I’m a materialist reductionist.

I’m worried that the crack cocaine of Moore’s law, which has given us more and more computation, has lulled us into thinking that that’s all there is. When you look at Claus Pias’s introduction to the Macy Conferences book, he writes, "The common precondition of the three foundational concepts of cybernetics—switching (Boolean) algebra, information theory and feedback—is digitality." They go straight into digitality in this conference. He says, "We considered Turing’s universal machine as a 'model' for brains, employing Pitts' and McCulloch’s calculus for activity in neural nets." Anyone who has looked at the Pitts and McCulloch papers knows it's a very primitive view of what is happening in neurons. But they adopted Turing’s universal machine.

How did Turing come up with Turing computation? In his 1936 paper, he talks about a human computer. Interestingly, he uses the male pronoun, whereas most of them were women. A human computer had a piece of paper, wrote things down, and followed rules—that was his model of computation, which we have come to accept.

We’re talking about cybernetics, but in AI, in John McCarthy’s 1955 proposal for the 1956 AI Workshop at Dartmouth, the very first sentence is, "We propose a study of artificial intelligence." He never defines artificial intelligence beyond that first sentence. That’s the first place it’s ever been used. But the second sentence is, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." As a materialist reductionist, I agree with that.

The second paragraph is, "If a machine can do a job, then an automatic calculator can be programmed to simulate the machine." That’s a jump from *any* sort of machine to an automatic calculator. And that’s in the air, that’s what we all think. Neuroscience uses computation as a metaphor, and I question whether that’s the right set of metaphors. We know computation is not enough for everything. Classical computation cannot handle quantum information processing. Is that right, Seth?

**SETH LLOYD:** Apparently it can’t. I agree.

**FRANK WILCZEK:** Sure it can; it's just slower.

**NEIL GERSHENFELD:** It’s expensive.

**BROOKS:** It’s a very different sort of thing.

**LLOYD:** Apparently it can’t do it efficiently.

**BROOKS:** My point is that I don't think that classical computation is the right mechanism to think about quantum mechanics. There are other metaphors.

**STEPHEN WOLFRAM:** The formalism of quantum mechanics, like the formalism of current classical mechanics, is about real numbers and is not similar to the way computation works.

**BROOKS:** Who is familiar with Lakoff and Johnson’s arguments in *Metaphors We Live By*? They talk about how we think in metaphors, which are based in the physical world in which we operate. That’s how we think and reason. In Turing’s computation, we use metaphors of place, and state, and change of state at place, and that’s the way we think about computation. We think of it as these little places where we put stuff and we move it around. That’s our vision of computation.

I went back to Marvin Minsky’s book, *Computation: Finite and Infinite Machines*. It’s just a beautiful book. It was written when Marvin was at his peak mathematical powers. In the introduction, he defines computation as something that a machine with a finite number of simple parts can do. That’s not all that physics is. Physics is something more complex than that. So, if we’re pushing things into that information metaphor, are we missing things?

The Mathematica website says, "The Church-Turing thesis says that any real-world computation can be translated into an equivalent computation involving a Turing machine." But what does "real-world computation" mean? What is the translation from the real-world phenomenon to the machine? In these metaphors we think by, not only is it a world of place, it’s a countable world. Infinite precision is not there. It fails in quantum mechanics, et cetera.

I’m going to give you some examples of where computation is not a good metaphor at all for thinking about things. I'll start with polyclad flatworms. If you’ve ever been diving on a coral reef, you’ve seen polyclad flatworms. They’re tiny creatures, frilly around the edges, that wander over the coral. They’ve got 2,000 neurons, so they're very simple. They can learn a little bit, but not much. In the late ‘50s and early ‘60s, people started to do experiments on them. They did brain transplants between these polyclad flatworms to see whether knowledge from one would transfer to another after the transplant. But I suspect a grad student made a mistake one day, because suddenly there’s a whole literature about what happens if you put the brain in the wrong way.

These flatworms are pretty primitive. They’ve got an eyespot, and this little frilly stuff that they use to walk with is also used to push the food into their feeding hole. Not much else. Their brain has 2,000 neurons at one end of their body, and there are four parallel ganglia going down the body. So, if you cut out the brain, you cut across these four ganglia, and you plop it into the other animal. By the way, when the creature doesn’t have a brain, it continues to live. It’s really bad at feeding, it can’t right itself, it's bad at walking, but it continues to live without a brain if it's in a nutrient-rich environment. When you plop the brain into the other one, if you put it in at a 90-degree angle, nothing good ever happens because the connectors are in the wrong place. But if you put it in backwards, well, the creatures walk backwards for a while and then they get better at walking and adapt.

As it turns out, there are three ways you can put the brain in: backwards, backwards and flipped, or just flipped. If you study the different versions of that, you see different behaviors come back at different speeds, though some behaviors never come back. It's very different thinking about that as a computational thing; it seems to be a developmental thing. When we’re going from a genome to the creature, a lot of it is building and developing, which is harder to think about computationally. That’s clearly what’s going on here. Maybe computation isn’t the right principal metaphor for explaining this. It’s some sort of adaptation, and our computation is not locally adaptive; at best it adapts globally. But this is adaptation at every local level.

Here’s another example: Where did neurons come from? If you go back to very primitive creatures, there was electrical transmission across surfaces of cells, and then some things managed to transmit internally in the axons. If you look at jellyfish, sometimes they have totally separate neural networks of different neurons and completely separate networks for different behaviors.

For instance, one thing that neurons work out well for jellyfish is how to synchronize their swimming. They have a central clock generator and the signal gets distributed over the neurons, but there are different transmission times from the central clock to the different parts of the creature. So, how do they handle that? Different species handle it in different ways. Some use amazingly fast propagation. In others, the spikes attenuate as they travel, and the response latency depends on the signal strength: the weaker the arriving signal, the quicker the response, so the longer conduction time is canceled out and the whole thing synchronizes.
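
That compensation scheme can be sketched in a few lines. This is a toy model with made-up numbers (conduction speed, attenuation rate, and latency constant are all illustrative), assuming attenuation is roughly linear over the animal's body:

```python
# Toy model of jellyfish swim synchronization. Illustrative numbers only:
# spikes from a central pacemaker travel at a fixed speed and attenuate
# roughly linearly with distance; the muscle's response latency tracks the
# arriving signal strength, so weaker (more distant) signals act faster.

distances = [1.0, 2.0, 3.0, 4.0, 5.0]   # cm from the pacemaker
speed = 50.0                            # cm/s conduction velocity (made up)

def arrival(d, compensate):
    conduction = d / speed              # farther parts hear the clock later
    strength = 1.0 - 0.1 * d            # linear attenuation (approximation)
    latency = 0.2 * strength if compensate else 0.2
    return conduction + latency         # when this part's muscle contracts

naive = [arrival(d, False) for d in distances]
tuned = [arrival(d, True) for d in distances]

print("spread without compensation: %.3f s" % (max(naive) - min(naive)))
print("spread with compensation:    %.17f s" % (max(tuned) - min(tuned)))
```

With the latency term tracking signal strength, every part of the bell fires at essentially the same moment; without it, near and far parts are tens of milliseconds apart.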

Is information processing the right metaphor there? Or are control theory and resonance and synchronization the right metaphors? We need different metaphors at different times, rather than just computation. The physical intuition about stuff and place that underlies how we think about computation has served physicists well, until you get to the quantum world. When you get to the quantum world, that intuition gets in the way.

There are a few books out right now trying to explain quantum mechanics. There’s one by Anil Ananthaswamy, who has a whole book on the double-slit experiment. I don't know if anyone knows Steve Jurvetson. He's a venture capitalist who has funded lots of interesting companies, including quantum computation companies. He read the book, and it convinced him that the only possible interpretation of quantum mechanics was the many-worlds interpretation: because the particle has to go through one of those two slits, it must go through both slits, which means there must be two universes at every instant. That level of explanation is getting so stuck in the metaphor that it drives how you think about things. He’s thinking about the particle as a thing instead of thinking of it as abstract algebra. What does a particle look like inside if it’s a thing? A lot of what we do in computation and in physics and in neuroscience is getting stuck in these metaphors.

By the way, the metaphors aren’t even real for computation. Danny, how many instructions do you think are running in parallel in a single x86 architecture, single core?

**W. DANIEL HILLIS:** A modern one? A dozen.

**BROOKS:** One hundred and eighty instructions are in flight at once. The metaphor of computation—this is where the number is, this is where the control is—is a fiction built on top of something much more complex. We use the computational metaphor in a false way. Where the information is and how it’s used is smeared out in time and space in some complex way, which is why the Spectre bug popped up: the machine that simulates that metaphor for us is so complex that the metaphor breaks down.

I suspect that we are using this metaphor and getting things wrong as we think about neuroscience, as we think about how things operate in the world. It’s possible that there are other metaphors we should be using and maybe concentrating on, because with our current computational thinking we tend to end up doing our experiments and our simulations in unrealistic regimes where it’s convenient for computation. When we’re doing a simulation, we ramp up the probability of events so that something happens; in the real world there are so many more instances of stuff happening that the probabilities can be much lower for the interesting stuff to occur. Maybe we’re operating in the wrong regimes, focusing on local optimization in our computational experiments instead of global diversity. We have fairly simple dynamics in our computational spaces because that’s what we can generate with computation.

We failed to see commonalities across many different things. I heard you talking about genetic algorithms and the way that they couple together and ratchet up in reality as distinct from our simulations. There may be all sorts of meta-behaviors that we’re not seeing that come together in some interesting way.

Over time, in physical reality, Turing came up with computation. It wasn’t radical, particularly. Any good late-19th-century mathematician could be taught the basis of computation fairly quickly and wouldn't say it's crazy. Whereas, if you take a 19th-century physicist and try to teach them either relativity or quantum theory, they’re going to say, "Oh, wait a minute, this is weird stuff." Computation wasn’t weird stuff, mathematically. It was pretty logical.

In a sense, calculus wasn’t weird stuff. It was hard to come up with, but it wasn’t weird stuff. Maybe there are other ways of thinking that we haven’t pulled together yet that will let us think about neuroscience and behavior in different ways, give us a different set of tools than we currently have.

In a note to John [Brockman], I pointed out a recent paper titled "Could a Neuroscientist Understand a Microprocessor?" I talked about this many years ago. I speculated that if you applied the way neuroscientists work on brains—with probes, looking at correlations between signals—to a microprocessor, without a model of the microprocessor and how it works, it would be very hard to figure out how it works.

There’s a great paper in *PLOS Computational Biology* last year where they took a 6502 microprocessor that was running Donkey Kong and a few other games, did lesion studies on it, and put probes in. They found the Donkey Kong transistors: if you lesioned out 98 of the 4,000 transistors, Donkey Kong failed, whereas different games didn’t fail with those same transistors. So, that was localizing Donkey Kong-ness in the 6502.
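
The logic of that lesion experiment is easy to reproduce on a circuit small enough to inspect by hand. The sketch below uses a hypothetical three-gate circuit (not the 6502) running two "games"; we knock out one gate at a time and record which tasks break:

```python
# A lesion study on a circuit small enough to inspect by hand. This is a
# hypothetical three-gate chip (not the 6502) that supports two "games";
# we knock out one gate at a time and record which tasks break.
def run(lesioned, a, b):
    and1 = (a & b) if "and1" not in lesioned else 0
    or1 = (a | b) if "or1" not in lesioned else 0
    xor1 = (and1 ^ or1) if "xor1" not in lesioned else 0
    return xor1, or1                 # ("game A" output, "game B" output)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
baseline = [run(set(), a, b) for a, b in inputs]

effects = {}
for gate in ["and1", "or1", "xor1"]:
    out = [run({gate}, a, b) for a, b in inputs]
    breaks_a = any(o[0] != t[0] for o, t in zip(out, baseline))
    breaks_b = any(o[1] != t[1] for o, t in zip(out, baseline))
    effects[gate] = (breaks_a, breaks_b)
    print("%s lesioned -> breaks A: %s, breaks B: %s" % (gate, breaks_a, breaks_b))
```

Lesioning `and1` breaks only game A, which tempts the experimenter to call it a "game A gate," even though nothing game-specific is stored there. That is exactly the kind of conclusion the lesion methodology invites.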

They ran many experiments, similar to those run in neuroscience. Without an underlying model of what was going on internally, they came up with pretty much garbage that no computer scientist would think relevant to anything. It’s breaking abstraction. That’s why I’m wondering about where we can find new abstractions—not necessarily as different as quantum mechanics or relativity is from normal physics, but are there different ways of thinking, not extremely mind-breaking, that will enable us to do new things in the way that computation and calculus enable us to do new things?

When I look back at the early days of the Macy Conferences, when I look back at the early days of computation, of AI, there was a jump to classical computation based on this very simple version of the physical world. It’s not clear to me that that is serving us well. For a long time, we got stuck because Moore’s law was happening so quickly, no one could afford to shift into different ways of thinking.

Danny, I don't know whether you agree with me or not, but I think your “Connection Machine” suffered from that. Moore’s law was happening so quickly that when you came up with a new way of thinking about computation, you were swamped by Moore’s law. Even if you had a good idea, it didn’t matter because you didn’t have the resources of the million people working on Moore’s law in classical computers, so you couldn't compete.

Today is the golden age of computing—you should go back to it, because everyone is now looking for something new, even in classical computation, now that Moore’s law has stopped driving that craziness. The reason we got stuck in this cul-de-sac for so long is that Moore’s law just kept feeding us, and we kept thinking, "Oh, we’re making progress, we’re making progress, we're making progress." But maybe we haven’t been.

* * * *

**JOHN BROCKMAN:** Have we just listened to the first talk of a pronouncement of the death of computer science by the former chairman of both MIT’s Computer Science Department and AI Lab? Is this a watershed?

**BROOKS:** No, I don't think it’s a watershed. I said this in a 2001 paper in *Nature*, which didn’t make a ripple.

**WOLFRAM:** When you talk about computation, there are two ideas that became prevalent. One is the digital idea and the other is the idea of universality. What wasn’t clear at the time of Turing was how universal the idea was. That probably wasn’t clear until the 1980s.

**BROOKS:** I’m not sure it’s still clear.

**WOLFRAM:** Physicists don’t necessarily believe that it’s universal. That depends on what the ultimate model of physics is. If the ultimate model of physics is something that can be run on a Turing machine, then it is universal in our universe. If it isn't, then it isn't.

**WILCZEK:** We have a pretty good model for the physical world for practical purposes. The ultimate model might be quite different. For practical purposes, anything you want to do in computation, we have the equation.

**BROOKS:** Are you willing to give up calculus for computation?

**WILCZEK:** No. You don’t have to.

**BROOKS:** Part of that is because the complexity of computation is very different from other physical processes.

**WOLFRAM:** One of the issues is that for discrete computation there’s this notion of universality, and there is no similar notion that seems to be robust for continuous computation, for continuous processes. That is, the Turing machine turned out to be equivalent to lambda calculus, combinators, all these other things. If you try to do the same thing with systems of continuous variables, there is no robust notion of universality.

**LLOYD:** Well, there’s a good one from Shannon, who came up with it around the time of the early Macy Conferences. One of his less well-known but still great papers is about universal analog computers. It’s basically a proof that the analog computers Vannevar Bush built back in the 1920s—with op amps, and tunable inductors, and resistors, and capacitors—could simulate any linear or nonlinear ordinary differential equation. So, there is some notion of universality for analog computation.
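
A software caricature of such a machine: each state variable is an "integrator," and the wiring between them encodes the equation. Here the machine is patched to solve the van der Pol oscillator (the equation, parameters, and step size are all illustrative choices, not anything from Shannon's paper):

```python
# A Bush-style differential analyzer in software: each state variable is an
# "integrator," and the wiring between them encodes the equation. Here the
# machine is patched to solve the van der Pol oscillator
#     x'' = mu * (1 - x^2) * x' - x
# (mu, dt, and the run length are illustrative).
mu, dt = 1.0, 0.001
x, v = 2.0, 0.0                      # integrator outputs: position, velocity
for _ in range(20000):               # 20 seconds of "machine time"
    a = mu * (1.0 - x * x) * v - x   # the patch cords: feedback into v
    v += a * dt                      # integrator 1: v is the integral of a
    x += v * dt                      # integrator 2: x is the integral of v
print("x = %.3f, v = %.3f" % (x, v))
```

Chaining integrators with feedback loops is precisely the patch-cord style of programming the differential analyzer used; the state settles onto the oscillator's bounded limit cycle rather than blowing up or dying out.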

**BROOKS:** By the way, I didn’t realize until I was reading up for this meeting that Shannon was at the AI conference in Dartmouth in '56.

**GERSHENFELD:** Rod, I want to push further. You’ve thought about this for so many years. I think we all agree on everything you presented, but you didn’t talk about the step after.

**BROOKS:** No, I didn’t give any answer.

**GERSHENFELD:** So, now that you’ve given the talk, make an attempt. You’ve thought about this so long.

**BROOKS:** This is a mixture of continuous stuff. It’s a wide world of lots of stuff happening simultaneously with local dynamics. When you look at a particular process—and this happens in genetic algorithms as well as in the artificial life field; you talk about a bunch of these in "Cellular Automata"—you see a ratcheting process in which things ratchet up to order from disorder. It's something that looks like mush, but out of it, because of some local rules, comes order. It’s limited order, but when you put different pieces together, each of which locally results in a little piece of order, you sometimes get much more order from the coupling of them. What calculus of that could you develop? I’m thinking there may be something around that: a language for explaining how tiny pieces of local order, cross-coupled across different places, combine to produce more order.

**GERSHENFELD:** Is your picture H-theorem, like maximizing entropy? In stat mech, there’s a messy, interesting, complex history about how local interactions end up maximizing entropy.

**WOLFRAM:** When you have something that's flapping around all over the place and you want to organize it into a limited set of possibilities, that means there’s irreversibility going on—the number of final states is fewer than the number of initial states. I don't think that phenomenon, as such, is that profound.

**BROCKMAN:** Danny, I’m interested in your response to what Rod was saying about the advent of massive parallelism.

**HILLIS:** Well, I don't think that was terribly profound. That was an engineering thing that was inevitable in the world. That was a shift in the way that we build things. I don't think it was the profound shift in thinking that Rod was talking about.

**BROOKS:** I was just saying it got buried. Even if it was a good idea, it got buried by that other one.

**BROCKMAN:** So, put yourself back at MIT. Do you have a Computer Science Department now? What do you have? How does this change?

**BROOKS:** Well, it hasn’t changed.

**BROCKMAN:** It speaks to what was going on with the Macy Conferences, where things were coming together, and they were trying to figure out metaprograms.

**BROOKS:** It should have more influence on neuroscience in the sense that neuroscientists have got so stuck on information theory as their metaphor that they’re probably not seeing stuff that’s going on. I’m worried about my colleagues in brain and cognitive science.

**TOM GRIFFITHS: **One question I was going to ask is the extent to which you think there are fundamental human cognitive limitations that are playing into that. You’ve made this distinction between weird stuff and not weird stuff. The example that you gave of Steve Jurvetson reaching that conclusion makes a lot of sense based on what we know about human intuitions about causality, which are that people expect causal relationships to be deterministic. If you go in with that premise, then that’s the interpretation you have to end up with.

There’s an interesting question about what the consequences are of human intuition trying to grapple with systems that defy human intuition, and what tools you can use to get past that. For something like quantum mechanics, the tools are math. The mathematical system tells you how to do it; you don’t trust your intuition, you run the math, and it tells you what the answer is. I’m not sure the new ways of thinking we need won’t also be weird stuff.

**BROOKS:** Yes. All of us here would be terribly surprised if we were at the beach and we saw a robot dolphin come out of the water that had been built by dolphins. We just don’t expect dolphins to have the cognitive capability to do what we’re trying to do in artificial intelligence. We don’t think they have it, nor the dexterity.

**LLOYD:** We expect them to have better sense than to do such a thing.

**BROOKS:** Yes. On the other hand, neuroscientists and artificial intelligence people think that we’re going to be smart enough to overcome whatever limitations we have in the way we think about things in order to figure this stuff out. The pessimistic view is that maybe we’re stuck.

**GRIFFITHS:** In some ways, you can view deep learning as an example of a way that human intuition failed. At the moment, a lot of the advances that people are making in solving problems are the consequences of using these end-to-end systems, where instead of having a human engineer design the features and the first stage of processing and then pass it off to a machine-learning algorithm, you just build a system that goes straight from raw input to whatever you want as output, and then the system, given enough data, can do a better job of figuring out the right way of representing things to solve the problem. Yes, in some ways that’s a bit of a rebuke to our abilities as humans to intuit the right way of approaching certain kinds of problems.

**WOLFRAM:** When you talk about computer science, the question becomes, is there a science to computer science? You have this neuron, which is doing its thing and you can see that it works, can you talk about it in a way that sciences like to talk about things? That’s not yet clear.

**CAROLINE JONES:** Well, maybe it’s a kind of alchemy of binary production.

**GEORGE DYSON:** The Macy Conferences, just to remind everybody, started with Julian Bigelow in 1943. They [Bigelow, Rosenblueth, and Wiener] wrote this paper, "Behavior, Purpose and Teleology," and that was the paper that convened the first meeting. It was exactly the same question that John opened up with here.

**BROCKMAN:** We’re stuck.

**ALISON GOPNIK:** I want to push against the idea that we’re stuck. In some sense, the very idea of computation itself is an example of a bunch of human beings with human brains overriding earlier sets of intuitions in ways that turned out to be very productive. The intuition that centuries of philosophers and psychologists had was that if you wanted something that was rational or intelligent, it was going to have to have subjective conscious phenomenology the way that people did. That was the whole theory of ideas, historically.

Then the great discovery was, wait a minute, this thing that is very subjective and phenomenological that the women computers are doing at Bletchley Park, we could turn that into a physical system. That’s terribly unintuitive, right? That completely goes against all the intuitive dualism that we have a lot of evidence for. But the remarkable thing is that people didn’t just seize up at that point. They didn’t even seize up in the way that you might with quantum mechanics, where they say, okay, this is out there in the world, but we just don’t have any way of dealing with it. People developed new conceptual intuitions and understandings that dealt with it.

The question is whether there is something like that out there now that could potentially give us a better metaphor. It’s important to say part of the reason why the computational metaphor was successful was because it was successful. It was incredibly predictive, and for anyone who is trying to do psychology, if you’re trying to characterize what’s going on in the head of this four-year-old, it turns out that thinking about it in computational terms is the most effective way of making good predictions. It’s not a priori the case that you’d *have* to think about it computationally—you could think about it as a dynamic system, or you could think about it as an analog system—it’s just that if you wanted to predict at a relatively high level what a four-year-old did by thinking of them as an analog system, you’d just fail in a way that you wouldn't fail thinking about it computationally.

**JONES:** I’d love to hear your thoughts on Rod’s second proposal, that the metaphor be adaptation. This is how I take your contribution: that adaptation is a different metaphor than computation. I’d love to hear you examine how that differs from the computational model.

**GOPNIK:** Do you think it’s different? That’s a question to ask Rod.

**BROOKS:** First, I want to respond that I agree completely with what you said. In reading some recent philosophy books, they’re arguing dualist positions. They say, "Well, the way you’re arguing against materialism in humanity says that computation can’t work, either." So, to me, it’s been very powerful in that sense, besides being a model.

What I’m trying to say is that perhaps it’s only a model of certain aspects, and there are other models for us to look for. Caroline, on this adaptation, I don't have a good way of talking about it yet, so I can’t say how it applies. It’s an important difference. The way we engineer our computational systems is with no adaptation, and the way all biological systems work is through adaptation at every level all the time.

**PETER GALISON:** One part of your talk is saying there is this range of metaphorical domains—dynamic systems, control systems, biological adaptation, resonance models—different kinds of pictures, and of that panoply, we’ve chosen the computational almost uniquely to pursue.

Your warning signal, as I understand, is that in doing that we’re limiting ourselves in certain ways, and there may be other ways we might be able to make things work.

Then there seems to be a second question, which is, what do we mean by work? What is the goal? Given the goal, which of these metaphorical domains are best mobilized to achieve it, and are there other goals that we might have?

For instance, if the goal is prediction, then we may look at the system and say, okay, computation does pretty well at a certain kind of prediction, whether it’s end-to-end or something else, but we might have other goals—unification, or explanation, or understanding, or generalizability. I take it that that’s something which might tie to some of what Stephen was referring to when he questioned what we mean by a science. If we take science to be carved out by the predictive, then that may already predetermine how we value the different metaphorical precincts.

**BROOKS:** I want to add one little thing that is stimulated by what you just said referring to Stephen, and I want to hear what Dave Chalmers has to say. As computationalists, we live by building very concrete abstraction barriers, where the abstraction barrier is very tightly defined. This is different from what we see in biological systems, where it’s much more adaptive than the strictness that we see.

**DAVID CHALMERS:** Computation is a broad church. It’s possible to have an overly narrow conception of what computation comes to. The Turing machine is universal, but it also stimulates certain ways of thinking about computation as classical computation, which is a very limited model.

I see the history of computation since Turing as a progressive broadening that brings out the power of the framework of computation. For instance, you get to parallel computation, you get to embodied computation, you get the move to quantum computation, you can start thinking about continuous computation.

So, I think of computation as a very broad church. Rather than thinking about overthrowing computation and replacing it with something else, let’s think about the relevant kinds of computation, particularly for the kinds of things you were pointing to, like adaptive computation. There’s no contradiction between adaptation and computation. I take it there are people thinking about adaptive computation at all levels. Machine learning, in some sense, is adaptive computation. Okay, maybe you want a more robust adaptive computation than that. So, instead of looking for something to replace computation, let's look for the right kind of computation.

**BROOKS:** Let me give you an example that fits your model there. We went from the Turing machine to the RAM model, and current computational complexity is really built on the RAM model of computation. It’s how space and time trade off in computation.

One can imagine that if the digital abstraction of machines had not been quite so perfect as it was in the ‘60s, what could have become the central question was how quickly a 1-bit error propagates through computations, and how bad it can get. If that had been the basis, maybe we’d be in a totally different world with respect to hackability, because we’d have a completely different set of tools—still computational tools, but with different metrics and different objects of study. We would have a different computer science, even though we’d still call it computation.
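
A crude way to see what that alternative metric would measure: flip a single input bit and count how many output bits differ after each round of a mixing function (the mixer below is a toy 32-bit round chosen for illustration, not any real hardware):

```python
# How quickly does a 1-bit error propagate? Flip one input bit and count
# how many output bits differ after each round of a simple 32-bit mixer.
# (The mixer is a toy illustration, not any real hardware or hash.)
def mix(x):
    x ^= x >> 16
    x = (x * 0x45d9f3b) & 0xFFFFFFFF   # multiply smears bits leftward
    x ^= x >> 16                       # xor-shift folds them back down
    return x

a, b = 0x12345678, 0x12345678 ^ 1      # identical inputs except one bit
diffs = []
for rnd in range(1, 4):
    a, b = mix(a), mix(b)
    diffs.append(bin(a ^ b).count("1"))
    print("after round %d: %d bits differ" % (rnd, diffs[-1]))
```

A complexity theory built on curves like this would rank computations by how quickly they amplify faults, rather than by time and space alone.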

**CHALMERS:** When you say neuroscientists are hung up on information processing, well, they’re hung up on a certain very specific kind of information processing—maybe representational, using certain kinds of representational and information theoretical tools. Computation, as a framework, is much broader than that. You could be a neuroscientist working with computation, working with algorithms, and still look at a different kind of algorithm. Is there anything you’re saying which is not going to be addressable by neuroscientists saying let’s look at a different kind of algorithm?

**WOLFRAM:** The main distinction you’re making is about continuous versus discrete systems, which I’m not sure is a correct distinction.

**BROOKS:** There may be something somewhat different from that that we just haven’t seen yet in the large system of lots of processes happening without clear interfaces, and lots of statistical stuff going on—statistical just because you don’t know everything. There are many other structures there that we’re not very good at pulling up.

**WILCZEK:** One thing you mentioned, at least implicitly in the discussion of the worms, that seems quite fundamental is the question of openness versus closedness—systems that have to take information from the world instead of being programmed by somebody. That’s a very fundamental distinction. It is also close to the issue of analog versus digital. The real world has a much more analog aspect and is also much less tractable. So, taking information from the real world and putting it into a machine through learning may lead to structures that are much more complex and intractable than things that are programmed.

**BROCKMAN:** Freeman [Dyson], you're the only person here who was around before people talked about computing. Can you talk about when computing became a subject?

**FREEMAN DYSON:** Well, of course it was a very active subject when I arrived in the States in 1947. Von Neumann was already planning his machine and ENIAC already was running. So, the computer age certainly started five years before. I'm sorry I wasn’t involved.

**BROCKMAN:** You observed.

**F. DYSON:** Indeed. I was plunged into it, which was a huge stroke of luck for me.

**BROCKMAN:** You were married to a computer person, a computerist?

**F. DYSON:** Yes.

**BROOKS:** By the way, when you read von Neumann’s book, *The Computer and the Brain*, which was published posthumously from a series of lectures he was working on, even though he was involved with Turing, it’s on the edge of Turing-ness in his conception of what a machine is.

**WILCZEK:** He discussed in a very systematic way the choices he made in arriving at the von Neumann architecture and how it was quite different from a brain. He was very aware of this.

**WOLFRAM:** I don't think he appreciated Turing very well. You should read the recommendation that he wrote for Turing.

**WILCZEK:** At the end of his life he was also working on self-reproducing machines.

**BROOKS:** Right—the 29-state automaton for self-reproduction.

**WILCZEK:** You can call it computing, but it’s not really computing.

**WOLFRAM:** They thought at that time that this idea of universal computation was one thing, but then the idea of universal construction will be another thing.

**WILCZEK:** Yes, that’s right.

**WOLFRAM:** That hasn’t panned out too well.

**WILCZEK:** Well, maybe it should.