Edge 106 — October 24, 2002

(10,560 words)



GOOD BOOKS: TOBY MUNDY

Doomsayers persist in the belief that the book world has been overrun by philistinism. They are wrong. Publishers can rejoice in unprecedented levels of both quality and quantity. We are living in a golden age of the book.

 
An interesting article on the state of book publishing appears in the October 2002 issue (#79) of the British magazine Prospect, by Toby Mundy, managing director and publisher of Atlantic Books in London. Mundy is also a contributing editor of Prospect as well as a member of the Edge community. [Click here for the stand-alone article.] [Click here for Prospect's home page.]

GENOMIC IMPRINTING : A TALK WITH DAVID HAIG [10.24.02]

The area to which I've given the greatest attention is a new phenomenon in molecular biology called genomic imprinting, which is a situation in which a DNA sequence can have conditional behavior depending on whether it is maternally inherited—coming from an egg—or paternally inherited—coming through a sperm. The phenomenon is called imprinting because the basic idea is that there is some imprint that is put on the DNA in the mother's ovary or in the father's testes which marks that DNA as being maternal or paternal, and influences its pattern of expression—what the gene does in the next generation in both male and female offspring.

10 THINGS YOU SHOULD KNOW ABOUT SECURITY, PRIVACY AND ENCRYPTION: BY RICHARD M. SMITH [10.24.02]


Edge Video

Until the '60s, governments were not really involved in car design. Then people like Ralph Nader started noticing that a lot of people were being killed in cars and made it clear why this was happening. We have spent the last 35 years or so designing safety into cars, and it's had a pretty dramatic effect. . . We're in that same era now with security on computer systems. We know we have a problem and now we need to focus on design.

New THE COMPUTATIONAL UNIVERSE: SETH LLOYD [10.24.02]

Edge Video

Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.



GENOMIC IMPRINTING : A TALK WITH DAVID HAIG

Introduction

David Haig is an evolutionary geneticist/theorist interested in conflicts and conflict resolution within the genome, with a particular interest in genomic imprinting and relations between parents and offspring. "The area to which I've given the greatest attention," he says, "is a new phenomenon in molecular biology called genomic imprinting, which is a situation in which a DNA sequence can have conditional behavior depending on whether it is maternally inherited—coming from an egg—or paternally inherited—coming through a sperm." Haig's work intersects with that of the evolutionary psychologists whose ideas have been presented on Edge. "A true psychology," Haig says, "has got to be an evolutionary psychology... We are evolved beings and therefore our psychology will have to be understood in terms of natural selection, among other factors."

JB

DAVID HAIG is Associate Professor of Biology in Harvard's Department of Organismic and Evolutionary Biology and author of Genomic Imprinting and Kinship.

David Haig's Edge Bio Page


GENOMIC IMPRINTING

DAVID HAIG: My work over the last decade or so has been principally concerned with conflicts within the individual organism. In a lot of evolutionary biology, the implicit metaphor is that the organism is a machine or, more specifically, a fitness-maximizing computer trying to solve some problem. Maximizing fitness is analogous to maximizing a utility function in economics. I'm interested in situations where there are conflicts within the individual, in which different agents within the self have different fitness functions, as well as the internal politics resulting from those conflicts of interest.

The area to which I've given the greatest attention is a new phenomenon in molecular biology called genomic imprinting, which is a situation in which a DNA sequence can have conditional behavior depending on whether it is maternally inherited—coming from an egg—or paternally inherited—coming through a sperm. The phenomenon is called imprinting because the basic idea is that there is some imprint that is put on the DNA in the mother's ovary or in the father's testes which marks that DNA as being maternal or paternal, and influences its pattern of expression—what the gene does in the next generation in both male and female offspring.

This is a complicated process because the imprint can be erased and reset. For example, the maternal genes in my body when I pass them on to my children are going to be paternal genes having paternal behavior. If my daughter passes on paternal genes to her children, even though she got the gene as a paternal gene from me it would be a maternal gene to her own offspring. Molecular biologists are particularly interested in understanding the nature of these imprints, and how it is possible to modify DNA in some way that is heritable but can then be reset. My own interest has been understanding why such odd behavior should evolve. I've been trying to find situations in which what is best for genes of maternal origin is different from what maximizes the fitness of genes of paternal origin.

The best way to understand the underlying theory is with a famous anecdote attributed to J.B.S. Haldane, the great British geneticist, who is said to have claimed that he would give his life to save more than two drowning brothers or more than eight drowning cousins. The logic is that if Haldane is only concerned with transmitting his genes to future generations, this is the right thing to do. On average, a gene in his body has one chance in two of being present in a brother. If he sacrificed the copy of a gene in his body to rescue three brothers, on average he'd be rescuing one and a half copies of the gene in his three brothers, placing him ahead in the genetic accounting. But when it comes to cousins, each only has one chance in eight of carrying a random gene in Haldane's body. To benefit from the sacrifice of one copy of a gene in himself, he needs to rescue nine or more cousins. This was formalized by Bill Hamilton in his theory of inclusive fitness.

My theory can be illustrated by rephrasing Haldane's question and asking: Would Haldane sacrifice his life for three half-brothers? For the sake of the story let's say that these are his maternal half-brothers—offspring of his mother but with different fathers. The traditional answer to that question is no, because if you pick a random gene in Haldane, it's got one chance in four of being present in a half-brother. Thus, a random gene would have an expectation of rescuing three quarters of a copy—three times one quarter—for the loss of one copy in Haldane. However, if imprinting is possible, genes may have information about their parental origin, and this can change the accounting.

From the point of view of a maternally derived gene in Haldane, the three half-brothers are all offspring of his mother, so his maternally derived genes have a probability of one-half being present in each half-brother. For the sacrifice of one copy of the gene in himself, Haldane would be rescuing one and a half copies, on average, of his maternally derived genes. Natural selection acting in that situation on genes of maternal origin would favor the sacrificial behavior.

However, things look very different from the point of view of Haldane's paternal genes. Those three half-brothers are the offspring of different fathers, making them complete non-relatives. If genetic accounting were all that was important, no sacrifice, no matter how small, would justify any benefit, no matter how great, to his paternal half-sibs. Therefore, in this case, selection on paternally derived genes would prevent Haldane performing this sacrificial action.
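
The arithmetic in the Haldane story is easy to check directly. Below is a minimal sketch in Python; the net_gain helper and the numbers are illustrative, taken from the examples above rather than from any model of Haig's:

    def net_gain(n_rescued, relatedness, cost=1.0):
        """Expected gene copies rescued minus copies lost.

        The sacrifice is favored by selection when the result is positive.
        `relatedness` is the probability that the relative carries a copy
        of the gene in question.
        """
        return n_rescued * relatedness - cost

    # An ordinary, unimprinted gene (r = 1/2 for brothers, 1/8 for cousins):
    print(net_gain(3, 1/2))    #  0.5  -> rescuing three brothers pays
    print(net_gain(8, 1/8))    #  0.0  -> eight cousins is exactly break-even
    print(net_gain(3, 1/4))    # -0.25 -> three maternal half-brothers: no

    # An imprinted gene "knows" its parental origin, so the two copies
    # disagree about the maternal half-brothers:
    print(net_gain(3, 1/2))    #  0.5  -> maternally derived copy favors rescue
    print(net_gain(3, 0.0))    # -1.0  -> paternally derived copy opposes it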

This illustrates that different selective forces can act on different genes within an individual, pulling him in different directions, resulting in internal genetic conflicts. I suspect that how these conflicts are resolved is a matter of history, genetic politics, and knowing the details of the system. To answer questions like these, a lot of insight is going to come from the social sciences. Political science in particular is all about dealing with conflicts of interest within society with the formations of parties and factions, and I believe that if there are conflicts within the individual, you'll have a similar sort of internal politics.

I'm particularly interested in looking at situations in the real world where the Haldane story I just gave would apply—where there are potential conflicting selective forces acting within the individual. So far I've talked about conflicts between genes of maternal and paternal origin, but there are also possible conflicts between genes sitting on the sex chromosomes and genes sitting on the other chromosomes, or between genes sitting in the nucleus and genes sitting in mitochondria, or between our genetic inheritance and cultural transmission. I'm trying to develop a set of theories and tools for dealing with such situations.

Genomic imprinting is a fascinating phenomenon, and raises an interesting question: If information about the sex of the parent in the previous generation can be transmitted by such mechanisms, is there other historical information input from the environment that can be transmitted to the current generation and influence genetic expression? Would it be possible that if my great-grandmother experienced a famine or lived in a time of war, that this has put an imprint on the genome which is influencing gene expression in my own body?

My interest in genetic imprinting began while I was completing my doctorate at Macquarie University in Sydney. I began studying plant ecology and, in particular, how regeneration after fire takes place. I wandered around the bush a bit looking at plants, but my heart really wasn't in that. Through good fortune I got an opportunity to do a theoretical study on the evolution of the life cycles of plants, applying kin selection theory—the theory of parent-offspring conflict developed by Robert Trivers—to plants. By thinking about what's happening within seeds, I essentially had a theory of genomic imprinting ready to go the moment I heard of the phenomenon.

In a 1974 paper on parent-offspring conflict Trivers pointed out that there was often an implicit assumption that what was good for a parent was also good for the offspring. In terms of genetic transmission, it would seem that offspring are parents' stake in the future, so parents should be doing their best for them. What Trivers argued, however, was that parents would be selected to maximize their total number of surviving offspring—which may be quite different from maximizing the survival of any particular individual offspring. He suggested that there is a tradeoff between producing lots of offspring and investing relatively little in them versus producing a small number of offspring and investing a lot in each. He thought that over evolutionary time offspring would begin to compete with their siblings for available resources. And in turn, sibling rivalry would result in conflict between offspring and parents, since over time offspring would be selected to try to get more than their fair share of resources from their parents—more than the parents were selected to supply—whereas parents would be selected to spread their resources more evenly over a larger number of offspring. Trivers's theory was that this could lead to evolutionary conflicts.

I was asked to talk at the National Institutes of Health in a workshop on imprinting and human disease. My goal was to suggest how evolutionary theory could provide new insights into human disease. An obvious case was human pregnancy, where Trivers's theory of parent-offspring conflict could help explain why pregnancy is so often associated with medical complications. Since then, maternal-fetal interactions have been another area of my research.

Trivers's theory has a lot to say about why pregnancy doesn't work particularly well. If we look at most of the products of natural selection, like the hand, the liver, the heart, or the kidney, these are wonderful bits of engineering that function very well for 60 or 70 years. But why are there so many problems in pregnancy? Pregnancy is absolutely essential to reproduction, so you might expect that this would be one part of our human physiology that had been perfected by natural selection. But there is an important evolutionary difference between the function of the heart and what's going on in pregnancy. When we look at the selective forces acting on the function of the heart, there's no evolutionary conflict. All of the genes involved in the development and function of the heart belong to the same genetic individual and, in a sense, have the same genetic interest: the maximization of the number of offspring of that individual. In the absence of conflict we've got a simple optimization problem, and you get an optimal solution.

But in the relationship between mother and fetus—because of the parent-offspring conflict that Trivers pointed out—we've now got conflicting forces. The offspring is being selected to take a little bit extra from the mother, and the mother is selected to resist some of the offspring's demands. Those selective forces tend to act at cross purposes and cancel each other out.

One very important problem during pregnancy is the communication of information between mother and offspring. In communication within the body there's no conflict, since selection causes cells to send messages as cheaply and as efficiently as possible. But when you're looking at the exchange of messages between mother and fetus, there's a problem of credibility, since their interests are not identical. In some situations, there's an evolutionary incentive to send misleading messages, and corresponding selection for receivers to distrust messages being received.

One thing that's happening during pregnancy is that there's a lack of the usual feedback controls, checks and balances. I read grant applications from scientists proposing to study maternal-fetal relations, and they tend to portray it in very rosy terms, as an almost loving exchange of messages between mother and fetus. But in an ectopic pregnancy an embryo implants itself in the abdominal cavity or in the fallopian tube—a completely inappropriate position in the body—and develops autonomously in the absence of any appropriate maternal messages. I believe there's actually very little communication going on between the mother and the fetus during pregnancy. Rather, you're looking at various fetal attempts to manipulate maternal physiology and metabolism for fetal benefit.

During pregnancy the mother's hormonal communication systems are coming under joint control of both the mother and the fetus. The fetus secretes a number of hormones into the mother's body to achieve various effects, particularly increasing the nutrient levels of the maternal blood. In the early stages of human pregnancy, the embryo embeds itself in the uterine wall and taps into the maternal blood system, releasing hormones into maternal blood that can influence the mother's physiology, blood sugar levels, and blood pressure. The higher the levels of sugar and fats in maternal blood, the more nutrients the fetus can obtain. Typically, hormones are molecules produced in tiny amounts that have big effects, at least when communication occurs within a single body and there is no conflict between sender and receiver. However, in pregnancy, one individual (the fetus) signals to another (the mother) and there is potential for conflict. Natural selection favors increased production of the hormones by offspring to get a bigger effect, while at the same time it favors maternal receiving systems that become more and more resistant to manipulation. There is thus potential for an evolutionary escalation that sometimes results in placental hormones being produced in absolutely massive amounts. It's estimated that about a gram a day of human placental lactogen is secreted into the maternal blood stream, and yet it has relatively minor effects.
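
The escalation logic can be made vivid with a toy loop. This is purely illustrative and not a model from Haig's work: assume each round of selection doubles fetal hormone output while maternal sensitivity halves in response. Production explodes while the realized effect stays flat, which is the stalemate described above.

    # Illustrative only: the doubling/halving rule is an assumption.
    output, sensitivity = 1.0, 1.0
    for generation in range(20):
        output *= 2.0        # fetal side: selection favors more hormone
        sensitivity /= 2.0   # maternal side: selection favors resistance

    print(f"hormone output:  {output:,.0f}x baseline")     # 1,048,576x baseline
    print(f"realized effect: {output * sensitivity:.1f}")  # still 1.0: a stalemate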

I think this observation, that placental hormones tend to be produced in very large amounts, is the best evidence for the existence of maternal-fetal conflict. The fetus secretes these hormones into the mother's body in an attempt to persuade the mother to do something that she might not necessarily want to do. Think of placental hormones as the equivalent of the junk mail that you get in your mail box. These messages are trying to persuade you to do something. They're relatively cheap to produce so they're distributed in vast quantities but have relatively minor effects. They must work sometimes, but it's very different from the sort of intimate whisper you might get between two individuals who have common interests.

The most successful application of my ideas on imprinting has been to the study of growth during pregnancy, and the prediction that paternally derived genes are selected to produce larger placentas that extract more resources from mothers. But the basic idea of the theory applies to any interactions among relatives that are what I call asymmetric kin; that is, relatives on the maternal side of the family but not on the paternal side, or vice versa. I suspect that genomic imprinting is going to be relevant to understanding the evolution of social interactions. There's also evidence now that imprinting is implicated in some forms of autism. A number of genes are known to be imprinted in the brain, and I'm interested in exploring those ideas.

The most exciting empirical work that's been done to test my ideas came out of Shirley Tilghman's lab before she became President of Princeton. Hers was one of the first labs to describe an imprinted gene. Paul Vrana, a postdoc of Tilghman's, looked at crosses between two species of mice, one of which had a very high rate of partner change—multiple fathers within a litter—whereas the other was a so-called monogamous mouse, in which a single father sired all the offspring in a litter and the female had about an 80 percent chance of staying with the father to produce the next litter. The prediction was that the conflict between maternal and paternal genomes would be more intense in the mouse with multiple paternity than in the monogamous mouse, and in fact, when you cross the two species you get a dramatic difference in birth weight.

If the father came from the species with multiple paternity, there had been intense selection on paternal genomes to extract more resources from mothers. This paternal genome would be matched against a maternal genome that had not been strongly selected to resist paternal demands. In this direction of the cross, offspring were larger than normal, whereas in the reciprocal cross, where the paternal genome came from the monogamous species and the maternal genome from the polyandrous species, offspring were smaller than normal. Paul Vrana was able to show that this difference was largely due to imprinted genes in these two species. This suggests that divergence of imprinted genes may contribute to the speciation process, and in particular that changes in social systems and mating systems can cause changes in the expression of imprinted genes. These can then contribute to reproductive isolation between sister species.

The second bit of work is being done in, of all places, a liver oncology lab at the Duke University Medical Center that is studying genomic imprinting. Out of curiosity, Randy Jirtle and Keith Killian looked at marsupials and then at the platypus—an egg-laying mammal—to see where imprinting arose. They found that imprinting is absent in the platypus, at least for the genes they looked at, but present in marsupials. Thus, imprinting appears to have arisen more or less coincident with the origin of live birth, by the time of the common ancestor of marsupials and placental mammals. There are some exciting areas of research of that kind.

There are also some other recent intriguing observations out there that beg for a theoretical explanation. There's evidence in the mouse, for example, that the paternal genome particularly favors development of the hypothalamus, whereas the maternal genome favors development of the neocortex. I've suggested that some maternal-paternal conflicts can be seen within the individual between different parts of the brain favoring different sorts of actions. I don't have a good explanation of why that's occurring in the mouse, but I would love to know. At a broader level, perhaps these theories have something to say about the subjective experience of internal conflicts—why we sometimes have great difficulty making up our minds. If the mind were purely a fitness-maximizing computer with a single fitness function, then this paralyzing sense of indecision we often feel would make no sense. When we are forced to make a difficult decision it can sometimes consume all our energies for a day, even though we'd be better off making a decision one way or the other. Perhaps that can be explained as a political argument going on within the mind between different agents with different agendas. That's getting very speculative now, though.

In the future I'd also like to get back to plants. I've put a lot of work into thinking about plant life cycles, and the work that I did in my Ph.D. has had relatively little impact, so I'd like to go back and rethink some of those ideas. I've thought of writing a book called Sociobotany that would do for plants what Trivers, Wilson, and Dawkins did for animal behavior. Botany tends to look at the different stages in the life cycles of a plant as cooperating one with the other. But Trivers's theories of parent-offspring conflict are very relevant to understanding some odd features of seed development and the embryology of plants. One of my favorite examples of this phenomenon can be seen in the seeds of pine trees and their relatives. The seed contains multiple eggs that can be fertilized by multiple pollen tubes, which are the functional equivalent of sperm. Within the seed, multiple embryos are produced that then compete to be the only one that survives in that seed. As this happens there's very intense sibling rivalry and even siblicide going on in the seed. Because of oddities of plant reproduction, the eggs that produce those embryos are all genetically identical one to the other, so all the competition among the embryos is between the genes that they get from their fathers through the pollen tube. Because of this, I expect there to be imprinting in the embryos of pine trees.

Another interesting case is found in Welwitschia, a very odd plant that grows in the Namibian desert. Here, once again because of oddities of the plant's genetics, the egg cells are no longer genetically identical to one another, and they compete with each other to produce the embryo that survives in that seed. Rather than waiting for the pollen tubes to reach them, the eggs grow tubes of their own, racing up to meet the pollen tubes growing down toward them. Fertilization occurs and then the embryos race back down into the seed to gain first access to the food reserves stored there. This odd behavior was just a strange observation of plant embryologists, but I think the application of ideas of conflict between different genetic individuals gives a very pleasing explanation of why you observe this behavior in Welwitschia but not in other groups, where the eggs are genetically identical to each other.

Some of these ideas also intersect with the work of evolutionary psychologists. Although I don't interact with them on a daily basis, they're very keen on my work, and I follow theirs. A true psychology has got to be an evolutionary psychology. Whether every theory that goes under the name of evolutionary psychology is evolutionarily justified is a different question, but in terms of the question whether Darwin is relevant to understanding the mind and human behavior, evolutionary psychologists have got it right. We are evolved beings and therefore our psychology will have to be understood in terms of natural selection, among other factors.


THE COMPUTATIONAL UNIVERSE: SETH LLOYD [10.24.02]

Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.



SETH LLOYD is Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. He is also adjunct assistant professor at the Santa Fe Institute. He works on problems having to do with information and complex systems, from the very small (how do atoms process information? how can you make them compute?) to the very large (how does society process information, and how can we understand society in terms of its ability to process information?).

His seminal work in the fields of quantum computation and quantum communications — including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction — has gained him a reputation as an innovator and leader in the field of quantum computing. Lloyd has been featured widely in the mainstream media including the front page of The New York Times, The LA Times, The Washington Post, The Economist, Wired, The Dallas Morning News, and The Times (London), among others. His name also frequently appears (both as writer and subject) in the pages of Nature, New Scientist, Science and Scientific American.


THE COMPUTATIONAL UNIVERSE

SETH LLOYD: I'm a professor of mechanical engineering at MIT. I build quantum computers that store information on individual atoms and then massage the normal interactions between the atoms to make them compute. Rather than having the atoms do what they normally do, you make them do elementary logical operations like bit flips (NOT operations), AND gates, and OR gates. This allows you to process information not only on a small scale, but in ways that are not possible using ordinary computers. In order to figure out how to make atoms compute, you have to learn how to speak their language and to understand how they process information under normal circumstances.

It's been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information. For instance, this little inchworm right here has something on the order of Avogadro's number of atoms. And dividing by Boltzmann's constant, its entropy is on the order of Avogadro's number of bits. This means that it would take about Avogadro's number of bits to describe that little guy and how every atom and molecule is jiggling around in his body in full detail. Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.

A few years ago I wrote a paper in Nature called "Ultimate Physical Limits to Computation," in which I showed that you could rate the information-processing power of physical systems. Say that you're building a computer out of some collection of atoms. How many logical operations per second could you perform? Also, how much information could these systems register? Using relatively straightforward techniques you can show, for instance, that the number of elementary logical operations per second that you can perform with a given amount of energy, E, is just E over h-bar — well, it's 2E divided by pi times h-bar, where h-bar is essentially 10^-34 Joule-seconds. If you have a kilogram of matter, which has mc^2 — around 10^17 Joules — worth of energy, and you ask how many ops per second it could perform, the answer is about 10^17 Joules divided by h-bar, which comes out on the order of 10^50 ops per second. It would be really spanking if you could have a kilogram of matter — about what a laptop computer weighs — that could process at this rate. Using all the conventional techniques that were developed by Maxwell, Boltzmann, and Gibbs, and then developed by von Neumann and others back at the early part of the 20th century for counting numbers of states, you can count how many bits it could register. What you find is that if you were to turn the thing into a nuclear fireball — which is essentially turning it all into radiation, probably the best way of having as many bits as possible — then you could register about 10^30 bits. Actually that's many more bits than you could register if you just stored a bit on every atom, because Avogadro's number of atoms store only about 10^24 bits.
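
Those numbers are easy to reproduce. Here is a quick back-of-the-envelope check in Python of the 2E divided by pi times h-bar bound for the one-kilogram laptop:

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, in Joule-seconds
    c = 2.99792458e8         # speed of light, in meters per second

    # Rest energy of the one-kilogram "ultimate laptop":
    E = 1.0 * c**2                        # ~9e16 Joules
    ops_per_second = 2 * E / (math.pi * hbar)
    print(f"{ops_per_second:.1e}")        # ~5.4e50 ops per second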

Having done this paper to calculate the capacity of the ultimate laptop, and also to raise some speculations about the role of information-processing in, for example, things like black holes, I thought that this was actually too modest a venture, and that it would be worthwhile to calculate how much information you could process if you were to use all the energy and matter of the universe. This came up because back when I was doing a Masters in Philosophy of Science at Cambridge I studied with Stephen Hawking and people like that, and I had an old cosmology text. I realized that I could estimate the amount of energy that's available in the universe, and I knew that if I looked in this book it would tell me how to count the number of bits that could be registered, so I thought I would look and see. If you wanted to build the most powerful computer you could, you can't do better than including everything in the universe that's potentially available. In particular, if you want to know when Moore's Law, this fantastic exponential doubling of the power of computers every couple of years, must end, it would have to be before every single piece of energy and matter in the universe is used to perform a computation. Actually, just to telegraph the answer, Moore's Law has to end in about 600 years, without doubt. Sadly, by that time the whole universe will be running Windows 2540, or something like that. 99.99% of the energy of the universe will have been licensed by Microsoft by that point, and they'll want more! They really will have to start writing efficient software, by gum. They can't rely on Moore's Law to save their butts any longer.

I did this calculation, which was relatively simple. You take, first of all, the observed density of matter in the universe, which is roughly one hydrogen atom per cubic meter. The universe is about thirteen billion years old, and using the fact that there are pi times 10^7 seconds in a year, you can calculate the total energy that's available in the whole universe. Given that amount of energy, you then divide by Planck's constant — which tells you how many ops per second can be performed — and multiply by the age of the universe, and you get the total number of elementary logical operations that could have been performed since the universe began. You get a number that's around 10^120. It's a little bigger — 10^122 or something like that — but that's within astrophysical accuracy, where if you're within a factor of one hundred, you feel that you're okay.
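
Run with the round inputs quoted above (one hydrogen atom per cubic meter, a thirteen-billion-year-old universe, pi times 10^7 seconds per year), the estimate comes out as follows; only the exponent should be taken seriously:

    import math

    hbar = 1.054571817e-34     # Joule-seconds
    c = 2.99792458e8           # meters per second
    m_hydrogen = 1.67e-27      # kilograms

    t = 13e9 * math.pi * 1e7               # age of the universe, ~4e17 seconds
    R = c * t                              # horizon radius, meters
    volume = (4 / 3) * math.pi * R**3
    energy = m_hydrogen * volume * c**2    # one hydrogen atom per cubic meter
    total_ops = 2 * energy * t / (math.pi * hbar)
    print(f"{total_ops:.0e}")              # ~3e120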

The other way you can calculate it is by calculating how it progresses as time goes on. The universe has evolved up to now, but how long could it go? One way to figure this out is to take the phenomenological observation of how much energy there is, but another is to assume, in a Guthian fashion, that the universe is at its critical density. Then there's a simple formula for the critical density of the universe in terms of its age; G, the gravitational constant; and the speed of light. You plug that into this formula, assuming the universe is at critical density, and you find that the total number of ops that could have been performed in the universe over the time T since the universe began is actually the age of the universe divided by the Planck time — the timescale at which quantum gravity becomes important — quantity squared. This is really just taking the energy divided by h-bar and plugging in the formula for the critical density, and that's the answer you get.

This is just a big number. It's reminiscent of other famous big numbers that are bandied about by numerologists. These large numbers are, of course, associated with all sorts of terrible crank science. For instance, there's the famous Eddington-Dirac number, which is 10^40. It's the ratio between the size of the universe and the classical size of the electron, and also the ratio between the electromagnetic force on, say, the hydrogen atom and the gravitational force on the hydrogen atom. Dirac went down the garden path trying to make a theory in which this large number had to be what it was. The number that I've come up with is suspiciously reminiscent of (10^40)^3. That 10^120 is normally regarded as a coincidence, but in fact it's not a coincidence that the number of ops that could have been performed since the universe began is this number cubed, because it actually turns out to be the first one squared times the other one. So whether these two numbers are the same could be a coincidence, but the fact that this one is equal to them cubed is not.

Having calculated the number of elementary logical operations that could have been performed since the universe began, I went and calculated the number of bits, which is a similar, standard sort of calculation. Say that we took all of this beautiful matter around us on lovely Eastover Farm and vaporized it into a fireball of radiation. This would be the maximum entropy state, and would enable it to store the largest possible amount of information. You can easily calculate how many bits could be stored by the amount of matter that we have in the universe right now, and the answer turns out to be 10^90, which by standard cosmological calculations is just (10^120)^(3/4). So we can store 10^90 bits in matter. And if one believes in somewhat speculative theories about quantum gravity such as holography — in which the amount of information that can be stored in a volume is bounded by the area of the volume divided by the Planck scale squared — and if you assume that somehow information can be stored mysteriously on unknown gravitational degrees of freedom, then again you get 10^120. This is because the size of the universe is just the speed of light times its age, so the age of the universe divided by the Planck time, quantity squared, is equal to the size of the universe divided by the Planck length, quantity squared. So we can do 10^120 ops on 10^90 bits.
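
The same order-of-magnitude game for memory, using standard rounded values for the Planck length and Planck time:

    import math

    l_planck = 1.6e-35          # Planck length, meters
    t_planck = 5.4e-44          # Planck time, seconds

    t = 13e9 * math.pi * 1e7    # age of the universe, seconds
    R = 3e8 * t                 # horizon size, meters

    print(f"{(R / l_planck) ** 2:.0e}")   # ~6e121: the holographic bit count
    print(f"{(t / t_planck) ** 2:.0e}")   # same number, as the text notes
    print(f"{1e120 ** 0.75:.0e}")         # 1e90: bits storable in matter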

I made these calculations not to suggest any grandiose plan or to reveal large numbers, although of course I ended up with some large numbers, but because I was curious what these numbers were. When I calculated them I actually thought that they couldn't be right, because they were too small. I can think of much bigger numbers than 10^120. There are lots of bigger numbers than that. It was fun to calculate the computational capacity of the universe, but I wanted to get at some picture of how much computation the universe could do if we think of it as performing a computation. These numbers can be interpreted essentially in three ways, two of which are relatively uncontroversial. The first one I already gave you: it's an upper bound to the size of a computer that we could build if we turned everything in the universe into a computer running Windows 2540. That's uncontroversial. So far nobody's managed to find a way to get around that. There's also a second interpretation, which I think is more interesting. One of the things we do with our quantum computers is to use them as analog computers to simulate other physical systems. They're very good at simulating other quantum systems, at simulating quantum field theories, at simulating all sorts of effects down at the quantum-mechanical scale that are hard to understand and hard to simulate classically. These numbers are a lower limit to the size of a computer that could simulate the whole universe, because to simulate something you need at least as much stuff as is there. You need as many bits in your simulator as there are bits registered in the system if you are going to simulate it accurately. And if you're going to follow it step by step throughout its evolution, you need at least as many steps in your simulator as the number of steps that occur in the system itself. So these numbers — 10^120 ops, 10^90 bits of matter, or 10^120 bits if you believe in something like holography — also form a lower bound on the size of a computer you would need to simulate the universe as a whole, accurately and exactly. That's also uncontroversial.

The third interpretation, which of course is more controversial, arises if we imagine that the universe is itself a computer and that what it's doing is performing a computation. If this is the case, these numbers say how big that computation is — how many ops have been performed on how many bits within the horizon since the universe began. That, of course, is more controversial, and since publishing this paper I've received what is charitably described as "hate mail" from famous scientists. There have been angry letters to the editor of Physical Review Letters. "How dare you publish a paper like this?" they say. Or "It's just absolutely inconceivable. The standards have really gotten low." Thinking of the universe as a computer is controversial. I don't see why it should be so controversial, because many books of science fiction have already regarded the universe as a computer. Indeed, we even know the answer to the question it's computing — it's 42. The universe is clearly not a computer with a Pentium inside. It's not an electronic computer, though of course it operates partly by quantum electro-dynamics, and it's not running Windows — at least not yet. Some of us hope that never happens — though you never can tell — if only because you don't want the universe as a whole to crash on you all of a sudden. Luckily, whatever operating system it has seems to be slightly more reliable so far. But if people try to download the wrong software, or upgrade it in some way, we could have some trouble.

So why is this controversial? For one, it seems to be making a statement that's obviously false. The universe is not an electronic digital computer, it's not running some operating system, and it's not running Windows. Why does it make sense to talk about the universe as performing a computation at all? There's one sense in which it's actually obvious that the universe is performing a computation. Take any physical system — say this quarter, for example. The quarter can register a lot of information: each atom in it has a position which registers a certain amount of information and some jiggling motion which registers a few bits more, and the coin as a whole can be heads or tails. Flipping a coin — heads or tails — generates the famous bit of information, unless it's Rosencrantz and Guildenstern Are Dead, in which case it always comes up heads. Because the quarter is a physical system, it's also dynamic and evolves in time. Its physical state is transforming. It's easier to notice if I flip it in the air — it evolves in time, it changes, and as it changes it transforms that information, so the information that describes it goes from one state to another — from heads to tails, heads to tails, heads to tails — really fast. The bit flips, again and again and again. In addition, the positions, momenta, and quantum states of the atoms inside are changing, so the information that they're registering is changing. Merely by existing and evolving in time — by existing — any physical system registers information, and by evolving in time it transforms or processes that information.

It doesn't necessarily transform it or process it in the same way that a digital computer does, but it's certainly performing information-processing. From my perspective, it's also uncontroversial that the universe registers 10^90 bits of information and transforms and processes that information at a rate which is determined by its energy divided by Planck's constant. All physical systems can be thought of as registering and processing information, and how one wishes to define computation will determine your view of what computation consists of. If you think of computation as being merely information-processing, then it's rather uncontroversial that the universe is computing, but of course many people regard computation as being more than information-processing. There are formal definitions of what computation consists of. For instance, there are universal Turing machines, and there is a nice definition, now 70-odd years old, of what it means for something to be able to perform digital computation. Indeed, the kind of computers we have sitting on our desks, as opposed to the kinds we have sitting in our heads or the kind that was in that little inchworm going along, are universal digital computers. Digital computation in this sense is a more specific, and potentially more powerful, kind of information-processing than a physical system merely evolving in time, because one way to evolve in time is just to sit there like a lump. That's a perfectly fine way of evolving in time, but you might not consider it a computation. Of course, my computer spends a lot of time doing that, so that seems to be a common thing for computers to do.

One of the things that I've been doing recently in my scientific research is to ask this question: Is the universe actually capable of performing things like digital computations? Again, we have strong empirical evidence that computation is possible, because I own a computer. When it's not sitting there like a lump, waiting to be rebooted, it actually performs computation. Whatever the laws of physics are, and we don't know exactly what they are, they do indeed support computation in the form of existing computers. That's one bit of empirical evidence for it.

There's more empirical evidence in the form of these quantum computers that I and colleagues like Dave Cory, Tai Tsun Wu, Ike Chuang, Jeff Kimball, Dave Huan, and Hans Mooij have built. They're actually computers. If you look at a quantum computer you don't see anything, because these molecules are too small. But if you look at what's happening in a quantum computer, it's actually attaining these limits that I described before, these fundamental limits of computation. I have a little molecule, and each atom in the molecule registers a bit of information, because spin up is zero, spin down is one. I flip this bit by putting it in an NMR spectrometer and zapping it with microwaves, making the bit flip. I can then ask how fast that bit flips, given the energy of interaction between the electromagnetic field I'm putting on that spin and the spin itself. You find out that the bit flips in exactly the time given by this ultimate limit to computation: I take the energy of the interaction and divide by h-bar — if I want, I can make it more accurate by multiplying by two over pi — and I find that that's exactly how fast this bit flips. Similarly, I can do a more complicated operation, like an exclusive-or operation where, if I have two spins, I make one spin flip if and only if the other spin is down. It's relatively straightforward to do. In fact, people have been doing it since 1948, and if they'd thought of building quantum computers in 1948 they could have, because they actually already had the wherewithal to do it. When this happens — and it's indeed the sort of thing that happens naturally inside an atom — it also takes place at the limits that are given by this fundamental physics of computation. It goes exactly at the speed that it's allowed to go and no faster. It's saturating its bound for how fast you can perform a computation.
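
As a sketch of that arithmetic, with an assumed, illustrative interaction energy rather than a value from any particular experiment:

    import math

    hbar = 1.054571817e-34        # Joule-seconds

    def min_flip_time(E):
        """Fastest possible bit flip for interaction energy E: pi*hbar/(2E)."""
        return math.pi * hbar / (2 * E)

    E_interaction = 1.0e-25       # Joules (assumed for illustration)
    print(f"{min_flip_time(E_interaction):.1e} s")   # ~1.7e-9 seconds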

The other neat thing about these quantum computers is that they're also storing a bit of information on every available degree of freedom. Every nuclear spin in the molecules stores exactly one bit of information. So we have examples of computers that saturate these ultimate limits of computation, and they look like actual physical systems: alanine molecules, amino acids, chloroform. Similarly, when we do quantum computation using photons and the like, we also perform computation at this limit.

I have not proved that the universe is, in fact, a digital computer and that it's capable of performing universal computation, but it's plausible that it is. It's also a reasonable scientific program to look at the dynamics of the standard model and to try to prove from those dynamics that it is computationally capable. We have strong evidence for this case. Why would this be interesting? For one thing it would justify Douglas Adams and all of the people who've been saying it's a computer all along. But it would also explain some things that have been otherwise paradoxical or confusing about the universe. Alan Guth has done work for a long time on why the universe is so homogeneous, flat, and isotropic. This was unexplained within the standard cosmological model, and his great accomplishment was to make a wonderful, simple, and elegant model that explains why the universe has these features. Another feature that everybody notices about the universe is that it's complex. Why is it complicated? Well, nobody knows. It turned out that way. Or if you're a creationist you say God made it that way. If you take a more Darwinian point of view, the dynamics of the universe are such that as the universe evolved in time, complex systems arose out of its natural dynamics. So why would the universe being capable of computation explain why it's complex?

There's a very nice explanation of this, which I think was given back in the '60s, and actually Marvin, maybe you can enlighten me about when this first happened, because I don't know the first instance of it. Computers are famous for being able to do complicated things starting from simple programs. You can write a very short computer program which will cause your computer to start spitting out the digits of pi. If you want to make it slightly more complex you can make it stop spitting out those digits at some point so you can use it for something else. There are short programs that generate all sorts of complicated things. That in itself doesn't constitute an explanation for why the universe itself exhibits all this complexity, but if you combine the fact that you have something that's dynamically, computationally universal with the fact that information is constantly being injected into the universe — by the basic laws of quantum mechanics, quantum fluctuations are all the time injecting new bits of information, programming the universe — then you do have a reasonable explanation, which I'll close with.
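
To make the point about short programs concrete, here is a roughly dozen-line Python sketch that prints as many digits of pi as you like, using Machin's formula in exact integer arithmetic:

    def arccot(x, unity):
        # arctan(1/x) as a fixed-point integer, via the alternating series
        total = power = unity // x
        n, sign = 3, -1
        while power:
            power //= x * x
            total += sign * (power // n)
            sign, n = -sign, n + 2
        return total

    def pi_digits(n):
        unity = 10 ** (n + 10)    # ten guard digits against rounding error
        pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
        return str(pi // 10 ** 10)

    print(pi_digits(30))          # 3141592653589793238462643383279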

About a hundred and twenty years ago, Ludwig Boltzmann proposed an explanation for why the universe is complex. He said that it's just a big thermal fluctuation. His is a famous explanation: the monkeys-typing-on-typewriters explanation for the universe. Say there were a bunch of monkeys typing random characters into a typewriter. Eventually we would get a book, right? But Boltzmann, among other people, realized right away that this couldn't be right, because the probability of this happening is vanishingly small. If you had one dime that assembled itself miraculously by a thermal fluctuation, the chances of finding another dime would be vanishingly small; you'd never find that happening in the same universe because it's just too unlikely.

But now let's turn to this other metaphor, which I want help from Marvin with. Now the monkeys are not typing into a typewriter, but into a computer keyboard. Let's suppose this computer is accepting what the monkeys are typing as instructions to perform computational tasks. This means that, for instance, because there are short programs for producing the digits of pi, you don't need that many monkeys typing for that long until all of a sudden pi is being produced by the computer. If you've got a monkey that's managed to produce a program to produce a dime, then all it has to do is hit return and it's got two dimes, right? Monkeys are probably pretty good at hitting return. There's a nice theory associated with this called algorithmic information theory, which says that if you've got monkeys typing into a computer, anything that can be described mathematically — anything a computer can compute — will at some point show up for these monkeys. In the monkeys-typing-into-a-computer universe, all sorts of complex things arise naturally by the natural evolution of the universe.

I would suggest, merely as a metaphor here, but also as the basis for a scientific program to investigate the computational capacity of the universe, that this is also a reasonable explanation for why the universe is complex. It gets programmed by little random quantum fluctuations, like the same sorts of quantum fluctuations that mean that our galaxy is here rather than somewhere else. According to the standard model, billions of years ago some little quantum fluctuation, perhaps a slightly higher density of matter, maybe right where we're sitting right now, caused our galaxy to start collapsing around here. It was just a little quantum fluctuation, but it programmed the universe, and it's important for where we are, because I'm very glad to be here and not billions of miles away in outer space. Similarly, another famous little quantum fluctuation that programs you is the exact configuration of your DNA. Recombination takes strands of DNA from your mother and from your father, splits them up, and wires them back together. This is a process that has lots of randomness in it, as you know if you have siblings. If you trace that randomness down, you find that it actually arises from little quantum fluctuations, which masquerade as thermal and chemical fluctuations. Your genes got programmed by quantum fluctuation. There's nothing wrong with that, nothing to be ashamed of — that's just the way things are. Your genes are very important to you, and they themselves form a kind of program for your life, and for how your body functions.

In this metaphor we actually have a picture of the computational universe, a metaphor which I hope to make scientifically precise as part of a research program. We have a picture for how complexity arises, because if the universe is computationally capable, maybe we shouldn't be so surprised that things are so entirely out of control.


10 THINGS YOU SHOULD KNOW ABOUT SECURITY, PRIVACY AND ENCRYPTION: RICHARD M. SMITH


Introduction

Richard Smith is one of the nation's most outspoken privacy mavens, with a difference. Smith is a veteran software hacker who has a deep understanding of both computers and the Internet. He uses his expertise in Sherlock Holmes fashion to ferret out privacy and security flaws and abuses. Smith is a personal computer industry veteran. He recalls meeting Bill Gates in the 1970s when the two men attended a meeting in Kansas City to establish a standard for PC data storage on tape recorders.

John Markoff, Technology Correspondent, The New York Times

RICHARD M. SMITH has been described by The New York Times as "perhaps the nation's most vocal authority on data privacy." Smith has been in the computer business since the early '70s, and has been involved in microprocessors from day one. He began his career as a programmer, co-founded a software company, and became the head of the nonprofit Privacy Foundation, where he served until November 2001. Since September 11, he has shifted his focus from privacy to security. He now focuses on technology related to security issues and operates a web site, ComputerBytesMan.com, that reports "computer bites man" stories. He lives and works in Brookline, Massachusetts.


10 THINGS YOU SHOULD KNOW ABOUT SECURITY, PRIVACY AND ENCRYPTION

1. A lot of the concerns about cyber-security and cyber-terrorism are overblown. A relatively simple solution to many of these problems is to design security into products from the beginning rather than having to come up with retrofits on top of them to fix problems that may arise. It's just like the car market. Until the '60s, governments were not really involved in car design. Then people like Ralph Nader started noticing that a lot of people were being killed in cars and made it clear why this was happening. We have spent the last 35 years or so designing safety into cars, and it's had a pretty dramatic effect. Now the number of people that are killed on the highways every year is lower than it was in 1965, even though both the number of people who drive and the total miles driven is much higher. We're in that same era now with security on computer systems. We know we have a problem and now we need to focus on design.

2. There's a big push now to use facial recognition systems to catch terrorists at airports. The idea is that if you're a bad guy you would be in a data base, and we would have video cameras taking pictures of people all day long, trying to match them with those in the data base. Being a natural skeptic about high technology I have a lot of problems with such systems. This technology is very much mixed in with business and the people who run it. There are a couple of companies producing the technology, and after September 11 they saw a way to make billions of dollars, so they're offering it as a solution to the terrorist problem in ways that are totally exaggerated.

3. Lots of people who look at security problems seem to focus on web servers. But at the same time we have to look at desktop products — products that people use every day — like web browsers and e-mail. A problem in one of them can potentially affect everyone.

4. One of the issues I've noticed repeatedly in the culture of programming is that, frankly, security and product quality are of secondary importance to people writing code. For them it's a waste of time. They're more interested in creating great new features in the software. Security is really about getting people to do the right thing, not to be lazy. I'm constantly selling them on the idea of the importance of fixing this particular problem. Usually what it takes is for something bad to happen, and then they realize that they've got a problem because they look bad in the press. The fundamental problem is that there's very little liability for software problems. If there were we'd clearly be in a different world, since in a sense, the market would fix the problems a lot more quickly.

5. Parallel to Moore's Law is a development that nobody's ever given a name: the capacity of hard disks is growing even faster than the number of transistors that can be put on a chip. Now we have all of these hard disks that need to be filled up with data. God abhors a vacuum, and he also abhors an empty hard disk. Surveillance systems are being pulled along by this increase in available technology. We can record more stuff, so why not do it? The problem is that the people running the Fast Lane system on the Massachusetts Turnpike have probably never thought about what to do with all the records they are accumulating, other than to try to keep them. They know not to give them out to just anybody who comes along, but have they ever thought about how long to keep the data, and how the data could be misused?

6. At the Privacy Foundation we focused on surveillance and on understanding how technology would be used to watch us. The word "surveillance" generally has a negative connotation. But at the same time, we are surveilled every day of our lives and we don't mind it in the least. The first example of a type of surveillance that we saw on a wide scale was credit cards. We travel around and buy things with credit cards and over time they have become more and more popular. In order to make the whole system work, every time we make an economic transaction a record has to be made of who we did business with, what we did with them, how much money we spent, where we were physically located, and what time this was — all of which goes into a computer data base. This surveillance system exists to enhance commerce, and since we don't have to carry around a lot of cash, it makes it easier to buy and sell things. We very willingly participate in that.

7. In the future, much more of our lives is going to be recorded in computers, since hard disks are getting very inexpensive and sensors that watch what we do are becoming more common. We're seeing all sorts of ways in which what used to be anonymous transactions suddenly become recorded in order to make our lives work better. But at the same time there is a downside. With the E911 system, for example, the FCC is going to require wireless companies to locate an individual cell phone user to an accuracy of about a hundred feet. The rationale is that when you make an emergency call to the fire or police department, they want to be able to know where you are. Most people agree that that's a good thing — if I'm dazed from a car accident and not sure where I am, it's great that someone can find me. But then we have to think about some of the other uses. This technology will roll out over the next five years, and it constitutes a government-mandated surveillance program in which everyone who wants to use a cell phone is going to have to participate. It's just another example of the technology that's being developed out there in order to watch us more. Police are going to start using this as a poor man's tracking system to watch where we go. There are a lot of possibilities for this technology beyond what's being stated.

8. If we had to pay ten cents per e-mail sent, or even five cents, then the spam problem would disappear because the spammers couldn't afford to do what they do. They're taking advantage of the fact that it's easy and cheap to send. You get into problems with spam when you start signing up for sweepstakes or situations where things are given away. Most legitimate companies tend not to give out your e-mail address. If you give it to the New York Times, for example, they'll make it fairly clear how they're going to use it. You just want to be very careful and check off all the boxes so you don't get all the extra stuff.

9. I'm a business person, and when I look at recent proposals for download systems for media files, I just don't see them flying. The media companies are clearly trying to use encryption to move away from a sales model to a leasing model. They want to get us on a gravy train that we can't get off of. When you buy a CD today, that's pretty much the end of your relationship to the music publisher. You buy the CD, you play it as much as you want, and there's nobody else there to control you. With these new systems, though, you would download the music, and the music would expire unless you kept up your subscription. In a way it's almost like they want to create a music tax system. But people want to have something they hold in their hands.

10. I've been involved with the case of Dmitry Sklyarov, a Russian programmer who figured out a way to convert the Adobe e-book format into PDF files. This led me to begin researching the e-book business and the problems it has. One of the conclusions I came to is that the idea of downloading a book is a) too complicated for people, and b) not something they understand very well. To get people to start using e-books, they have to see the advantages of them. If you put an e-book on a 3-inch CD, then you could put it inside a physical book. You start getting people interested in e-books by first including them with the regular book. Trying to force consumers to download e-books was a mistake. And the publishers also — very importantly — pissed off the retail channel by saying that customers should go directly to the downloads, bypassing the stores. This totally ignores the fact that people like to browse bookstores, which would have been a great place to introduce people to e-books. It was just all wrong — the perfect example of how not to do it.

 

