"Life / Consists of propositions about life."
WHAT A CONCEPT!
Freeman Dyson, J. Craig Venter, George Church, Robert Shapiro, Dimitar Sasselov, Seth Lloyd
SETH LLOYD is Professor of Mechanical Engineering at MIT and Director of the W.M. Keck Center for Extreme Quantum Information Theory (xQIT). He works on problems having to do with information and complex systems, from the very small (how do atoms process information? how can you make them compute?) to the very large (how does society process information, and how can we understand society in terms of its ability to process information?). He is the author of Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos.
SETH LLOYD: I'd like to step back from talking about life itself. Instead I'd like to talk about what information processing in the universe can tell us about things like life. There's something rather mysterious about the universe. Not just rather mysterious, extremely mysterious. At bottom, the laws of physics are very simple. You can write them down on the back of a T-shirt: I see them written on the backs of T-shirts at MIT all the time, even in size petite. In addition to that, the initial state of the universe, from what we can tell from observation, was also extremely simple. It can be described by a very few bits of information.
So we have simple laws and simple initial conditions. Yet if you look around you right now you see a huge amount of complexity. I see a bunch of human beings, each of whom is at least as complex as I am. I see trees and plants, I see cars, and as a mechanical engineer, I have to pay attention to cars. The world is extremely complex.
If you look up at the heavens, the heavens are no longer very uniform. There are clusters of galaxies and galaxies and stars and all sorts of different kinds of planets and super-earths and sub-earths, and super-humans and sub-humans, no doubt. The question is, what in the heck happened? Who ordered that? Where did this come from? Why is the universe complex? Because normally you would think, okay, I start off with very simple initial conditions and very simple laws, and then I should get something that's simple. In fact, mathematical definitions of complexity like algorithmic information say, simple laws, simple initial conditions, imply the state is always simple. It's kind of bizarre. So what is it about the universe that makes it complex, that makes it spontaneously generate complexity? I'm not going to talk about super-natural explanations. What are natural explanations — scientific explanations of our universe and why it generates complexity, including complex things like life?
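One way to make this puzzle vivid is with a cellular automaton like rule 30: a rule and an initial condition each describable in a sentence, whose output nevertheless looks intricate. A minimal Python sketch (an editorial illustration; the talk itself names no specific model):

```python
# Rule 30: a one-dimensional cellular automaton. The update rule and the
# initial condition are both extremely simple, yet the output pattern looks
# intricate. Note that the *algorithmic* information stays tiny: this short
# program is a complete description of every row it prints.

def rule30_step(cells):
    """Apply rule 30 to one row of 0/1 cells (boundaries treated as 0)."""
    padded = [0] + cells + [0]
    # rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run(width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1          # the simplest possible initial condition
    history = [row]
    for _ in range(steps):
        history.append(rule30_step(history[-1]))
    return history

for row in run():
    print("".join("#" if c else "." for c in row))
```

By the algorithmic-information measure Lloyd cites, every row remains simple, since this short program describes them all; the question in the text is why the physical universe goes beyond that.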
I claim that there is a very basic feature of the universe, which makes it natural for it to generate complex systems and complex behaviors. We shouldn't be surprised by this. It's intrinsic in the laws of physics. This is what Craig Venter was asking, what is it about the laws of physics that give us things like life? Not only that, we know what this feature is. Let me tell you what it is, and then I'll tell you what it has to do with life. Because the spontaneous generation of complexity is important for lots of things other than life. Remember, life is overrated. There's plenty of other interesting stuff going on in the universe other than life. Long after we're all dead, and maybe other biological forms — carbon-based forms — of life are dead, I hope that other interesting things will still be going on.
Okay. What is this feature that is responsible for generating complexity? I would say that it is the universe's intrinsic ability to register and process information at its most microscopic levels. When we build quantum computers, it's one electron: one bit, to paraphrase the Supreme Court. Because of quantum mechanics, the world is intrinsically digital. That's what the 'quantum' in quantum mechanics means: it says the world comes in chunks. It's discrete. And this discreteness implies that elementary particles register bits. Their state can be described by a certain number of bits. In the case of the electron spin, one bit. In the case of photon polarization, one bit of information. Bits are intrinsic to the way the universe is. It's digital. And this digitality at the level of elementary particles gives rise to a very digital nature for chemistry, because chemistry arises out of quantum mechanics together with the masses of the elementary particles and the coupling constants of nature and the electromagnetic force, et cetera.
Quantum mechanics means that there are only a discrete number of species of chemicals. You can only put together two hydrogens and an oxygen to make a molecule in one way that I know of. This means that we can catalog chemicals in a discrete list — chemical number one, chemical number two, chemical number three — you can order it any way you want according to your favorite chemicals. But it's discrete. This digital nature of the universe actually infects everything, in particular life. It's been known since the structure of DNA was elucidated that DNA is very digital. There are four possible base pairs per site, two bits per site, three and a half billion sites, seven billion bits of information in the human DNA. There's a very recognizable digital code of the kind that electrical engineers rediscovered in the 1950s that maps the codes for sequences of DNA onto expressions of proteins. There's a digital nature to the universe, and quantum mechanics makes this happen.
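Lloyd's arithmetic here is quick to spell out: four possible base pairs per site is two bits per site, times three and a half billion sites gives seven billion bits. A sketch using the talk's own figures:

```python
import math

BASES = "ACGT"                          # four possible base pairs per site
bits_per_site = math.log2(len(BASES))   # log2(4) = 2 bits of information
sites = 3.5e9                           # sites in human DNA (figure from the talk)
total_bits = bits_per_site * sites      # = 7e9 bits: seven billion

print(f"{bits_per_site:.0f} bits/site, {total_bits:.1e} bits total")
```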
But the digital nature of the universe doesn't immediately tell you why the universe is complicated, and why something like life should spontaneously arise. The fact that we're here doesn't tell us anything about the probability that life exists elsewhere in the universe. Because we're here, and so we have to be here in order to contemplate this question, this tells us nothing about the probability of life except that it can exist. That's why this kind of question that Dimitar is trying to answer by looking for planets and signatures of life elsewhere is so important. We really don't know how likely it is that life should arise.
So why does complex behavior arise? Well, the universe is computing at its most microscopic scales. Two electrons, two bits of information; every time they collide, those bits flip. It's just this natural interaction and information processing that we use when we build quantum computers. Now I claim — and I can claim this because this is a mathematical theorem, which is different from mere observational evidence — that when you have something that is computing and you program it at random, just tossing in little random bits of programming, it necessarily generates complex behavior.
Einstein said, God doesn't play dice with the universe. Well, it's not true. Einstein famously was wrong about this. It was his schoolboy howler. He believed the universe was deterministic, but in fact it's not. Quantum mechanics is inherently probabilistic: that's just the way quantum mechanics works. Quantum mechanics is constantly injecting random bits of information into the universe. Now, if you take something that can compute, and you program it at random, then what you find is that it will spontaneously start to generate all possible computable things. Why? Because you're generating all possible programs for the computer as you toss in information at random.
In fact the universe is computing. I know this, because we build quantum computers — in addition, I can see a computer over there, so the universe clearly supports computation. And if you program it at random to start exploring different computations, if you go out into the infinite universe, (observational evidence suggests the universe is infinite), then somewhere out there every possible computation is being played out. Every possible way of processing information is occurring somewhere out there.
Okay? I don't think this is controversial, but in some funny way it seems to get people's dander up. The fact that the universe is at bottom computing, or is processing information, was actually established in the scientific sense, back in the late 19th century by Maxwell, Boltzmann, and Gibbs who showed that all atoms register bits of information. When they bounce off each other, these bits flip. That's actually where the first measures of information came up, because Maxwell, Boltzmann, and Gibbs were trying to define entropy, which is the real measure of information.
What happens when you have a computer being programmed at random? The computer generates all possible mathematical structures, and one of the most important things it does is to generate other computers amongst these structures. As first proposed by Alan Turing in the 1930s, a universal computer is a device that can simulate any other computer. It can be programmed to simulate any other computer, in a simple fashion. Including itself.
If you program a computer at random, it will start producing other computers, other ways of computing, other more complicated, composite ways of computing. And here is where life shows up. Because the universe is already computing from the very beginning when it starts, starting from the Big Bang, as soon as elementary particles show up. Then it starts exploring — I'm sorry to have to use anthropomorphic language about this, I'm not imputing any kind of actual intent to the universe as a whole, but I have to use it for this to describe it — it starts to explore other ways of computing.
Now remember, chemicals are digital. There are only certain chemicals that can exist, and the laws of chemistry are set catalogs of chemical reactions, potentially infinite in extent, because the total number of possible chemicals can be extended as much as you want. You can make polymers longer and longer and longer. You can think of the laws of chemistry, which are actually in some sense simple, being implied by quantum mechanics, as a catalog of this huge set of possible reactions, where if I produce chemical A and chemical B and I put them together, then that produces chemical C in abundance. Or if chemical A and chemical B are there and chemical D is also there, then chemical C is not produced.
Now you can see the relationship of these kinds of reactions to logic, right — if A and B, then C; if A and B and D, then not C. I'm simplifying chemistry, of course, because there are temporal dynamics as well. But those if-then statements, the digital statements that lie at the bottom of computation, are an intrinsic part of chemistry.
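The if-A-and-B-then-C reading of reactions is literally boolean logic, and can be written down directly. A toy sketch of the two rules just stated (not real chemistry; the species names are placeholders):

```python
def step(present):
    """One round of 'chemical logic' on a set of species names.

    Toy rules, taken from the transcript's example:
      A and B present, D absent  -> C is produced
      A and B present, D present -> C is suppressed
    """
    produced = set(present)
    if "A" in present and "B" in present and "D" not in present:
        produced.add("C")
    return produced

print(step({"A", "B"}))       # C is produced
print(step({"A", "B", "D"}))  # C is not produced: D suppresses the reaction
```

Iterating a larger catalog of such rules is exactly the kind of digital dynamics the text describes.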
The digital logic inherent in chemical reactions is extremely important in biology, of course, because this is how the metabolism of a cell works. I receive this chemical and this other chemical; therefore I'm going to open this switch over there and turn on this other chemical pathway. Chemistry has this computational nature embedded in it, which it inherited from the underlying computation that's going on in quantum mechanics in general. Chemistry itself, then, explores out there in the universe all possible combinations of reactions. Chemistry explores all possible computations, all possible things that could happen — including all the other things that a computer can do.
Let's produce this self-reproducing structure, and then see what happens. Or let's see what happens when we produce this structure and this other structure and they react with each other — let's see what they produce. We don't know exactly what went on in proto-life, but we do know the sorts of things that go on in proto-life, even without knowing the exact chemical reactions that took place. It is not surprising that chemistry should produce more and more complicated structures, which then interact in more and more complicated ways, and go on to fill out more and more of the set of all possible chemical reactions, and then produce further computationally complicated structures, like, say, bacteria, or human beings, or computers.
Because there is an intrinsic capacity built into the laws of nature: this ability to process information in an open-ended fashion. And once things start doing that then they're very hard to stop. I call such things "complexors" — because they generate complexity automatically. From the mathematical or physical perspective, complexors are actually rather simple, because all they are is something that can compute, which is systematically exploring a wide variety of, or all, possible computations. Once you have such a thing, once such a thing gets popped into existence, set into motion, then it will produce complexity, whether you want it to or not.
We actually already know that at its most microscopic level the universe possesses this computational capacity, because we're building quantum computers every day. In these quantum computers, we store bits of information on individual atoms, we use the laws of electro-dynamics to process information in a complicated fashion, and then we get even more interesting complicated behavior like chemistry. We shouldn't be surprised at this complexity. This ability to produce complexity infects the universe at ever higher and higher levels.
What are the implications of this intrinsic capacity of the universe to generate complexity? There are a bunch of concrete implications. Let's start by testing hypotheses for the origins of life. The first thing that this capacity suggests is that since we know to a very high degree of accuracy a large fraction of reactions for simple chemicals, we can explore the consequences of those reactions. As Bob was just telling us, we don't know a lot of these reactions when we start to include interactions of various minerals. And that's true: we don't necessarily know what all the key reactions are, and I don't think that we should hope right away to be able to show how life exactly started on Earth, or elsewhere if it started elsewhere.
But we have a good chance of showing that something like life should start. If we start with the set of chemical reactions that we know (and we could guess where we don't know what they are), and we try to drive them in different ways, we would expect to see, from this computational ability, that we would start out with a very simple set of chemical reactions, which then start to produce more complicated chemical species, which then auto-catalyze, or catalyze sets of more complicated reactions, so you would see these species turning themselves on and then turning themselves off as they get consumed by later chemical reactions.
What you would hope to see, as this effective computation proceeds, is that it would become more and more complex as time went on, and eventually more stable sets of reactions, for instance the citric acid cycle that Harold Morowitz is so fond of, would establish themselves as the dominant modes of operation. If we saw that happening, that would be very powerful evidence for how life occurred. You would not expect — it helps a lot that Bob and I just talked about this question — to reproduce the exact origins of life because (A) there are many possible sets of initial conditions, (B) the set of reactions could be driven in many different ways, (C) we don't know what these conditions are, (D) there's a huge number of possible ones, (E) these interactions are non-linear and hence (F) chaotic in lots of cases, so that (G) they can be very sensitive to these initial conditions. You'd have to get very lucky to find the right ones right away. But you could establish that things like life could occur.
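The sensitivity to initial conditions mentioned above can be made concrete with a standard chaotic toy, the logistic map; the talk names no specific system, so this is purely illustrative:

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in a billion...
a = logistic_orbit(0.300000000, 50)
b = logistic_orbit(0.300000001, 50)

# ...stay close at first, then diverge to completely different states.
print(abs(a[1] - b[1]))    # still tiny after one step
print(abs(a[-1] - b[-1]))  # order-one after fifty steps
```

This is why, as the text says, reproducing the exact historical trajectory would require very lucky guesses about the initial conditions.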
Just as important, you might also be able to establish no-go theorems. If we only involve a certain set of chemical reactions, it's not large enough to be computationally universal. It can only extend a certain amount and then it's just going to produce uninteresting things, such as ABABABABABABABAB — that's all it will produce. It will never produce a varied and intricate set of outcomes. And that you can analyze by looking at the set of reactions and saying these reactions alone are not enough. Hence if you look at your planets and say, hey look, this is what's going on on this planet, then we could say, okay, sorry — no life there.
There are lots of interesting things for life itself that we could look at. One interesting consequence — there are both good and bad consequences here — is that something like life, or the things that come afterwards — you're number 7, and there could very easily be 7, 8, 9, 10, et cetera, keeping on going forever, if the laws of physics allow it. That's a good consequence. But it's not clear right now, given the way the universe is, that something like life could continue forever. If the dark energy persists at the same level that it is right now, then in not a very long time — a hundred billion years or something like that is the number that springs to mind — we're all screwed and nothing can still exist, simply because all matter will have been pushed outside every other piece of matter's horizon, and it can't communicate with anything else, and that's bad.
But it might also be that this dark energy level is continually decreasing, in which case the universe could survive forever. A chapter in Freeman's book from 1982 was very influential for me in thinking about this. He pointed out that if you're willing to slow down and get very large — so you have to slow down and get fat, essentially — then you can still collect free energy essentially forever, and keep on metabolizing and growing. But it would require different technology than just ordinary biological life.
That's the good news. The bad news, at least from the standpoint of a scientist, concerns the very feature that makes complex behavior arise spontaneously. The fact that something capable of computation will spontaneously generate complex behavior means that it's also not in general possible to calculate (a) whether it will do so in a particular circumstance, or (b) how likely it is to do so.
In fact, trying to figure out the possibilities for events early on in proto-life, just given the information we have today, is intrinsically probably a very hard problem. If we're lucky and the pathway is not so long, we could figure it out. But the pathway may be very long, and frankly, given the complexity of ribosomes and the way that life is organized right now, it smacks of being the product of a long and complicated and arduous process of evolution at the metabolic level, prior to the individual level. And that means it could be very hard to figure out what happened. That's potentially bad. On the other hand there is a good thing, which is that there is a way to find out what's going to happen in life's future — that is, you wait and see. I suggest we do that.
That's all I have to say. I can tell you why there's probably not life in dark energy. Or why there's not life in the first fraction of a second of the universe. But that would not be very interesting.
SASSELOV: Probably the limiting case is much later than the first second of the Big Bang — say three hundred thousand years later. But I bet that there are not enough chemical reactions there to allow you to do the complex computation.
LLOYD: There's not a lot of free energy in the matter at that point.
SASSELOV: You can actually see that universe — it's observable.
LLOYD: The bizarre thing about the universe is that we understand the origins of the universe much better than we understand the origins of life. It's a simple system — we've nailed down that 13.7 billion years ago, first this happened, and then this other thing happened, and this happened, and this happened. That's why Dimitar can speak so confidently about how stars behave; it's really really well known. Whereas with regard to the first set of chemical reactions that started life, we just don't know what they are.
SASSELOV: What you said about when you set up these experiments — conditions where you lose, those are familiar to chemists. For example, when everything turns into black tar. Also when things just go to equilibrium and are not perturbed by any further energy; they just turn it into heat, but the chemicals stay the same — that's another 'you lose' scenario.
LLOYD: Even though every atom carries information around with it, in the Big Bang most of the computation that's going on is pretty uninteresting: it's just a bunch of stuff in thermal equilibrium bouncing off of each other. To get interesting things to happen you need a source of free energy. For that, gravitation has to kick in and take things out of thermodynamic equilibrium.
DYSON: Yes. One of the laws of physics which is absolutely crucial, which you didn't mention, is the fact that objects bound together by gravity have negative specific heat.
LLOYD: That's certainly important.
DYSON: That is absolutely crucial. If everything has positive specific heat, as the 19th century scientists believed, then hot objects lose energy to cold objects. You are constantly losing free energy, and as hot objects lose energy they become cooler, and cold objects gaining energy become warmer. Everything goes to a uniform temperature, and the universe dies and life cannot persist. That was talked about a great deal in the 19th century. They called it the 'heat death': everything goes to thermal equilibrium, so life couldn't persist. But it happens that gravity has the opposite effect: if you have an object like the sun that's held together by gravitation, then in fact the more energy you give it, the cooler it gets. And the more it loses energy, the hotter it gets.
LLOYD: Yes. If you look at star clusters, they occasionally will kick out a star, and the star will escape to infinity. And if you then look at the other stars, they're huddled together more and they're moving faster. They've gotten hotter, effectively.
DYSON: It means that in fact energy flows from cold objects to hot objects, if they're bound together by gravitation, so that you get further and further from equilibrium. That's the basic reason why the laws of physics favor heterogeneity rather than homogeneity.
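Dyson's point follows from the virial theorem for a self-gravitating system in equilibrium: total energy E = -K, where K is the kinetic energy that sets the effective temperature. A toy calculation (a textbook relation, not from the transcript; the numbers are arbitrary units):

```python
def virial_state(E_total):
    """Virial equilibrium for a self-gravitating system:
       E_total = -K (kinetic energy), U (potential) = 2 * E_total.
       The effective 'temperature' is proportional to K."""
    assert E_total < 0, "a bound system has negative total energy"
    K = -E_total
    U = 2.0 * E_total
    return K, U

K1, _ = virial_state(E_total=-100.0)   # a bound cluster
K2, _ = virial_state(E_total=-120.0)   # the same cluster after RADIATING 20 units

# Losing energy made the kinetic energy, and hence the temperature, go UP:
print(K1, "->", K2)   # 100.0 -> 120.0
```

This is the negative specific heat in miniature: remove energy and the system gets hotter, which is why gravity drives things away from, not toward, uniform equilibrium.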
LLOYD: Yes, absolutely. That's extremely important. And indeed, it's not clear how far that will go, with this dark energy out there. It could be that dark energy is quite useful. We just haven't figured out what to use it for yet. Of course that's the key if you want life to survive forever; you have to do some tricky stuff to harvest energy from further and further away. If you take things and move them closer together, then you can take the energy out of them as you move them closer together. Of course if you do that too much, then they form black holes, and those are not as useful.
DYSON: Black holes are essential because they are sinks of entropy; you can throw entropy into black holes and it disappears.
LLOYD: That's the cosmic garbage problem we were talking about before — the ultimate in recycling.
DYSON: You definitely need black holes.
TING WU: Earlier you were talking about 'if-then' statements. One of the things I find life so extraordinary at is self-correction. Chemical reactions move along certain pathways that are fairly predictable; they go a certain way. Not that this would define life, but it's part of many lives: life will go down a pathway and it can self-correct.
The most dramatic one is when DNA errors are corrected. There's a directionality there that isn't easily explained just by a chemical reaction. I don't like to anthropomorphize either, but it is as if life has a behavior — I shouldn't say a direction — but it's moving along a direction that may not be easily explained by 'if-then'. I was wondering if you could comment on self-correction or self-righting behavior — as a chemical reaction or not — which reminds me that as we try to define life, or figure out the features of life, probably the most puzzling part of life, which we don't have a grasp on, is behavior, and so maybe we're missing one of the key aspects. As a biologist, I know that behavior is almost a complete mystery right now. So we're trying to find life by many things except perhaps one of the most mysterious things life does, which is behavior.
LLOYD: Interestingly, this DNA correction mechanism you allude to lies at the very beginning of my own field of quantum computing. In the 1970s Charlie Bennett looked at the thermodynamics of this DNA correcting mechanism, and when you are correcting errors you have to throw information away, because afterwards you want the DNA to be in the right state, independent of whatever error happened before.
TING WU: Whatever "right" is.
LLOYD: Whatever "right" is. In this case the DNA correction mechanism is checking to see: okay, do these two strands match, for instance; or are they complementary to each other — and if they're not, then you go back and you try to re-write them. Then the information about the error goes away, and it turns out that this has to generate entropy, because the laws of physics at bottom are reversible. They're only irreversible in the macroscopic sense — and that means you can never throw information away for good. So if I throw information about the DNA away, that information has got to go somewhere else. And so these interactions are entropy generating: you have to supply them with a source of free energy and drive them along. In fact if you supply them with too little free energy they'll go back in the other direction and they'll generate errors. So an error-correcting mechanism, if it runs in the wrong direction, is an error-generating mechanism, which is actually also — not to anthropomorphize it — kind of like human behavior. The ability to operate in a stable, robust fashion in the presence of noise and errors is a key aspect of life, and is not so easy to effect. Particularly at the level of individual quanta, may I say.
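The thermodynamic cost Lloyd describes is usually stated as Landauer's principle: erasing one bit must dissipate at least kT ln 2 of free energy. A quick calculation at roughly body temperature (the seven-billion-bit figure is the talk's number for human DNA; applying it here is purely illustrative):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly body temperature, K

# Landauer's principle: minimum free energy dissipated to erase one bit.
landauer_J_per_bit = k_B * T * math.log(2)

# Illustrative lower bound for erasing a genome's worth of error information
# (7e9 bits, the transcript's figure for human DNA):
total_J = landauer_J_per_bit * 7e9
print(f"{landauer_J_per_bit:.2e} J/bit, at least {total_J:.2e} J for 7e9 bits")
```

The bound is tiny but strictly nonzero, which is the point: error correction must be driven by free energy and must export entropy somewhere.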
Let's now look at this question of behavior. This computational issue, the fact that things are computing, are processing information, according to things like 'if-then' statements, can be thought of as the origin of the inscrutability of behavior, either of chemical reactions or of human beings. Let me phrase this in terms of computers, because then again I'm on safe ground, because this is a theory I can prove. There's a famous result in computer science called the halting problem, which Alan Turing first proposed. He pointed out, just from the very fact that a universal computer can simulate itself — remember we talked about universal computers being able to simulate themselves or other computers — that you can construct self-contradictory statements. As a result, certain questions can't be answered by a computer. One such question is: if I change this one bit in this computer program, will it stop and give an output? This is the halting problem; it means there's no way to compute what's going to happen when you set a computation in motion, other than actually waiting and seeing what happens. There aren't any shortcuts, is another way of saying it. If something's going through a complex computation, there's no logical shortcut that allows you to figure out what it's going to do, other than going through the computation and seeing what's going to happen.
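A concrete flavor of the no-shortcuts point is the Collatz iteration: as far as anyone knows, the only way to learn how long a trajectory runs is to run it. This is an editorial illustration in the spirit of the argument, not the halting problem itself, which concerns programs that may never halt at all:

```python
def collatz_steps(n):
    """Count iterations of n -> n/2 (n even) or n -> 3n+1 (n odd)
    until reaching 1. No known formula predicts this count: you
    just have to run the computation and see."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Nearby inputs take wildly different times to reach 1:
print(collatz_steps(26))  # 10 steps
print(collatz_steps(27))  # 111 steps
```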
What this means is that computers are intrinsically inscrutable. When you press 'return' today, everything looks exactly the same as when you did yesterday — today you press 'return' and your computer crashes. Right? Yesterday it printed out your manuscript, today it crashes and takes your manuscript with it. Has that ever happened to anybody here? It certainly happened to me.
This is a necessary part of digital computation. There's no way, in general, if a computer is performing complicated computations — and those computers are performing pretty complicated computations — to figure out what's going to happen except to do it. This also holds for chemical reactions. Because these chemical reactions have the same sort of 'if-then' quality that computations have. That's a simplified version of chemical reactions, of course, but the more complex version is at least that complex. It's at least as inscrutable. Even in the simplified 'if-then' picture of chemical reactions, the outcome of a complex set of chemical reactions is by necessity inscrutable. The only way to figure out what's going to happen, in general, is to let it go and see.
This is why if we're going to figure out what the origins of life are, we're going to need either to do some pretty major experiments and/or burn a whole bunch of super-computer power, because the only way to figure out what they're going to do is see what happens. And if it's true of computers and chemical reactions, it's certainly true of human beings. If I think of the thing that makes other people inscrutable — I'll just speak for myself; maybe you find me completely transparent — I find most interactions with other people inscrutable. Or even interactions with myself.
If I want to see what I'm going to do tomorrow — I'm a free agent, and I'm the only one that's going to determine what that is — the only way for me to actually figure it out is to go through the thinking process and then to figure it out. The inscrutability of my own actions comes in part from this essential logical feature that the only way to figure out what's going to happen in a computing system is to go through the computation. And certainly for other human beings, who are at least as complicated as I am, I cannot model what's going on inside their heads, and even if I could, the only way to figure out what they were going to say or do would be to go through the complete thought process they're going through. Which I just can't do.
I would say that computers and chemical reactions share with human beings the feature of inscrutability of their behavior, and there's nothing to do about it. There are things you can try: you can get more familiar with them, you can try to model them better, but you're never going to eliminate the uncertainty and essential inscrutability, because it just is the nature of anything that's behaving in a logical fashion. Bizarrely enough, it's like Spock: the Vulcan code makes him more strange and hard to understand than if he were actually irrational. It's rationality that makes us inscrutable rather than irrationality.
PRESS: You mean this metaphor of the computer very literally — you can literally envision the universe as sort of going through a set of procedures that you could trace back.
LLOYD: Yes, I don't even mean it as a metaphor.
PRESS: How do you avoid the Gödel trap, in the sense that there are things that exist that you can't possibly explain the origin of?
LLOYD: Exactly. The halting problem and Gödel's theorem are essentially the same problem — they're very closely related, and Turing knew about Gödel's work when he came up with the halting problem. In fact he came up with the halting problem and the Turing machine because he wanted to write about Gödel's work.
Gödel's theorem is basically the Cretan-liar paradox, which comes from St. Paul's letter to Timothy. Writing to someone who is going out to preach to the Cretans, St. Paul says: watch out for those Cretans — one of their own philosophers says all Cretans are big bellies, gluttons, and lascivious liars.
The question is, how do you treat someone who says, 'I am lying, no matter what I say'? In the logical sense this becomes a statement. Probably the best version — which is what Gödel used — is to construct a statement which effectively says, this statement cannot be proved to be true. So it's a logical statement within a set of axioms. And there are two possibilities: either the statement is true, or it's false.
Let's say it's false. If it's false, then it can be proved to be true — but now you've proved a false statement to be true, and that's really bad, because if one false statement can be proved true, then you can prove all false statements to be true. As my children demonstrate to me all the time: Dad, you just said — therefore you're unreliable in all ways. The only other alternative is that the statement is true, but it cannot be proved.
Such a statement is one that is, as it were, inscrutable to the logical structure of the theory. It cannot be proved from the theory, so the only choice you have is to adjoin it to the axioms of the theory as an additional axiom. And once you've done that, there are more statements like it. Gödel's incompleteness theorem says that no self-consistent logical theory beyond a certain complexity, basically the complexity that allows it to compute, is complete. The theory can always be extended in a whole variety of different ways.
PRESS: It means that there have to be things in this universe which are not the result of this series of computations. In other words, they're true, since truth in this picture is something produced by these calculations, but you can't find their origins, you can't trace them.
LLOYD: I agree. In fact, they can't be derived from those laws, because quantum mechanics says the universe is not really a universe but a multiverse: there are different branches of the universe in which different possibilities are explored. I would even say that in some branches the false possibilities are explored, where the universe is inconsistent and then ceases to exist.
PRESS: It's possible then that life could be one of those things that you cannot trace the origins of.
LLOYD: It's conceivable, though there has to be a kind of infinity built into the problem. Life presumably originated in some finite context, so it could conceivably be discovered. But the kinds of finite problems that are analogous to these Gödelian problems are things like NP-complete problems, where there's a huge number of different possibilities and you'd have to explore each one to find the answer.
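To make the combinatorial point concrete: subset sum is a classic NP-complete problem, and the obvious exhaustive search may have to examine all 2^n subsets before finding an answer or ruling one out. This is a minimal sketch; the numbers and the function name are made up for illustration.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force search: try subsets one by one, up to all 2**n of them."""
    tried = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            tried += 1
            if sum(combo) == target:
                return combo, tried
    return None, tried   # every one of the 2**n subsets was checked

nums = [3, 34, 4, 12, 5, 2]
solution, tried = subset_sum(nums, 9)
print(solution, tried)   # (4, 5) 18 : found after checking 18 of the 64 subsets
```

With six numbers there are only 64 subsets; with sixty there are over 10^18, which is the "huge number of different possibilities" in a finite problem.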
SHAPIRO: I just want to emphasize, lest it slip away, one point from the middle of the conversation: we may never be able to capture the actual circumstances that led to the beginning of our life here on earth, because environments may have been destroyed, or circumstances changed of which there's no record. But there is every opportunity, by experiment, for searching elsewhere to find the general principles involved in generating life.
LLOYD: Right. Suppose we start to do these experiments, both real and computational, and say: okay, here's chemistry, it's doing these funny autocatalytic interactions that are a computation in the strict sense of the word, and we're going to explore what happens. Then suppose that as we start doing that, we find things that give rise to complicated behavior. That certainly fits your definitions, Freeman, of proto-life, steps number one and two, and maybe even things that are like step three, except that what we get is totally different from ribosomes. If that happens, then I would say it's very strong evidence that we should expect to find life in all sorts of places, involving all kinds of ways of living other than having ribosomes.
SASSELOV: That's the question of multiple vs. single pathways to life. Just answering that question would be essential.
LLOYD: And that is quite possible to answer, even if it's too hard to figure out exactly how life originated on earth. It's a much easier question, I think, than how life actually originated on earth, because there you have to figure out the exact initial conditions for this complicated set of chemical reactions, and that's going to be hard.
SHAPIRO: And the other point of view has been very much pushed over the ages. I think George Wald once said that if you study your biochemistry text on earth, you can pass examinations on Arcturus, which is a star somewhere out there; this is essentially saying the opposite.
DYSON: I did an experiment to demonstrate that genes don't determine personality — I have a pair of identical twin grandsons and it's a remarkable fact: these kids have exactly the same genes and exactly the same environment, and still they're totally different. One is George and the other is Donald, and they know the difference.
SHAPIRO: One may have had a more preferable location in the womb than the other.
DYSON: But anyway, it is a fact that the brain is random in its development, even when the genes are given.
LLOYD: Right, that's right — and also their experiences are different, so they're being, as it were, programmed by slightly different sets of experience — that could have a radically different effect on how they behave.
DYSON: But the microstructure of the brain is different, even quite apart from the experience.
LLOYD: Absolutely, right. Yes, genes are overrated too, not just for intelligence but for life and consciousness.