[most recent first]
From: Stuart Hameroff and Paola Zizzi
We read with great interest the piece by Seth Lloyd and accompanying commentaries. Of keenest interest was the comment that perhaps "the ultimate internet is spacetime itself".
This startling idea has been put forth previously in several contexts. For example, one of us (PZ) has expressed the idea that "the Quantum Growing Network (QGN), which describes the quantum early universe, can be considered as a model for the future quantum WWW (gr-qc/0103002)." The QGN is a kind of "cosmic ultimate Internet", which processes fundamental information encoded in quantum space-time, and performs ultimate computation, in the sense that it saturates the quantum limits to computation at each time step. And Roger Penrose has suggested that Platonic information resides at the fundamental Planck scale of spacetime geometry. Lee Smolin, one of the commentators, has developed Penrose's original concept of quantum spin networks as the basis for Planck scale quantum geometry. In his book The Life of the Cosmos, Lee portrayed spin networks as an almost living, evolving bed of quantum information.
Here's the point. Jaron Lanier brought in the notions of phenomenal experience and semantics, apparently code words for the (STILL??!!) taboo term consciousness. Roger Penrose and one of us (SH) have proposed that Planck scale geometry includes fundamental components of phenomenal experience (what philosophers call 'qualia') along with Platonic values. We have further suggested a model of consciousness based on quantum computation in microtubules within the brain's neurons, which accesses and selects (by Penrose's quantum gravity "objective reduction") particular patterns of fundamental spacetime geometry.
Of course most physicists (and Philip Anderson is most likely among them) ridicule the idea that significant quantum states can exist in the warm, wet brain. This is especially true in light of the work of Max Tegmark, who attempted to "bury alive" the idea of quantum consciousness by showing that microtubule quantum states would decohere far too rapidly (10^-13 sec) to be relevant to brain function. However, Tegmark attacked a straw man of his own making, and the actual Penrose-Hameroff model is quite consistent with microtubule decoherence times in neurophysiological ranges of hundreds of milliseconds.
Experience and semantics (consciousness) may indeed connect our brains to nature's WWW, the "UUU" (ultimate underlying universe).
From: Antony Valentini
It is certainly interesting and worthwhile to consider what the ultimate limits to computation might be. But it is always difficult to prove a negative statement. I suspect that Seth Lloyd's analysis may have practical value in a certain restricted context, but that it does not really provide us with any ultimate limit.
Lloyd's analysis assumes that the rate of flow of time is the same for the system doing the computation and the experimenter reading the final result. This need not be so in general relativity: if the computer and the experimenter start off next to each other and then part company, following different worldlines passing through very different gravitational fields, the computer's internal time can be speeded up relative to the experimenter's. So a computation that takes the computer ten years could become available to the experimenter in just ten minutes.
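The ten-years-in-ten-minutes scenario can be put to a rough calculation. A minimal sketch, assuming (purely schematically) a static experimenter in a Schwarzschild field, where the clock-rate factor is sqrt(1 - r_s/r); the real scenario involves two different worldlines, so the numbers below are only illustrative:

```python
# Valentini's scenario: the computer runs for ten years of its own
# proper time, while the experimenter's clock advances only ten minutes.
computer_minutes = 10 * 365 * 24 * 60       # ten years, in minutes
experimenter_minutes = 10
ratio = experimenter_minutes / computer_minutes  # experimenter clock rate / computer clock rate

# For a static clock in a Schwarzschild field (a schematic stand-in for
# the worldline scenario), the rate factor is sqrt(1 - r_s/r), where
# r_s is the Schwarzschild radius. Solve for how deep in the
# gravitational well the experimenter would have to sit.
rs_over_r = 1 - ratio**2
print(f"required clock-rate ratio: {ratio:.2e}")
print(f"required r_s/r: {rs_over_r:.15f}")   # extremely close to the horizon
```

The answer, unsurprisingly, is that the slow clock must hover extraordinarily close to the Schwarzschild radius, which is why Valentini frames this as a point of principle rather than of practice.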
Another point is that any analysis such as Lloyd's inevitably assumes that we know what the ultimate laws of physics are, or at least assumes that certain physical principles, dearly cherished today, will always be true.
For instance, in Lloyd's analysis it is assumed that the speed of light is a fundamental constant, providing an ultimate limit on, in Lloyd's words, "how fast information can get from one place to another". While this assumption is a cornerstone of modern physics, it might not remain so. In recent years, for instance, in order to solve certain problems in the very early universe, some workers have introduced the hypothesis that the speed of light increases at high energies. Presumably, this would allow computation faster than Lloyd's limit.
While the supposed effect, if it exists at all, is expected to occur only at energies obtainable in the very early universe, and so is unlikely to be of practical import, nevertheless it serves as an example of how changes in the laws of physics could radically change our view of what the ultimate limits to computation are. Indeed if, as some think, the speed of light becomes arbitrarily large at large energies, then presumably there will be no upper limit at all on the speed of computation.
In a similar vein, current thinking about the limits of computation assumes that quantum theory is final, and that Planck's constant and the uncertainty principle set ultimate limits to, as Lloyd puts it, "how small things can actually get before they disappear altogether". But quantum theory is a statistical theory, and some (including myself) believe that there is a deeper theory that explains the occurrence of every seemingly random quantum event. At least one such "hidden-variables theory", the pilot-wave theory of de Broglie and Bohm, has been extensively studied. It reproduces quantum theory whenever the hidden parameters are assumed to begin with a certain "quantum equilibrium" distribution.
Now such theories are nonlocal: we know from Bell's theorem that they have to be, in order to reproduce the "spooky" EPR correlations. In other words, in such theories, at the fundamental level there are instantaneous influences propagating faster than light, indeed infinitely so. At the quantum level, these effects are hidden by the statistical noise inherent in the assumed "equilibrium" distribution. But I have proposed that the universe might have started out in a state of "quantum nonequilibrium", in which probabilities differed from those of quantum theory and superluminal signalling was possible. In this scenario, relaxation to quantum equilibrium occurred during the great violence of the big bang; but there might be exotic forms of matter left over from the very early universe that are still "out of equilibrium" today. If such matter could be found, it would violate quantum theory, and we could use it to transmit information faster than the speed of light.
Such "nonequilibrium matter" would revolutionise computation. Some years ago I showed (in pilot-wave theory) that we could use such matter to extract all the results of a parallel quantum computation. For at the hidden-variable level, the trajectory of the computer contains information from all the branches of the wavefunction (or pilot wave) guiding it. The results of the parallel computations are encoded in the different branches, and may be read by analysis of the trajectory.
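The readout Valentini describes can be caricatured with a toy state vector. This is not pilot-wave dynamics, and the function `f` below is made up; the sketch only shows the contrast between a Born-rule measurement, which reveals one branch, and the full state, which encodes every branch's result:

```python
import random

def f(x):
    # Hypothetical function evaluated "in parallel" over all inputs.
    return (x * x + 1) % 8

n = 8
# State after the parallel step: equal-amplitude branches |x, f(x)>.
branches = [(x, f(x)) for x in range(n)]
amplitude = (1 / n) ** 0.5

# Standard quantum measurement (Born rule) reveals only ONE branch:
x, fx = random.choices(branches, weights=[amplitude**2] * n)[0]

# But the full state -- which Valentini argues a hidden-variable
# trajectory is in principle sensitive to -- encodes every result:
all_results = dict(branches)
print("one measured branch:", (x, fx))
print("all branch results :", all_results)
```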
From: Philip W. Anderson
Date: July 20, 2001
At the risk of appearing an antidigerati (if that is the singular of digerati), I have to differ strongly from Seth Lloyd's remarks. Having fought against the arrogant imperialism of the particle physicists ("the Theory of Everything is the theory of particles, so we are the only fundamental guys"), and being subject at present to the cosmologists' attitude that, since the universe is by definition everything, they are the only (fundamental) game in town, along comes Seth saying "everything is information processing, so we information scientists pick up all the chips."
The wonderful thing about the world is its diversity in all manner of ways: in scale, in complexity, in mechanism, in driving motivation, in its sheer richness of phenomena. We happen to be lucky enough to live on a small planet which is particularly highly diverse; let's not throw that away and say it's all information processing. In particular there is a tendency at the moment to reduce biology to "bioinformatics", the genome as Turing machine tape. But life is not just the genome. A thing is not living unless it is an individual, unless it has means for controlling energy and matter flows to its own advantage, and probably yet more requirements. Art Iberall may have been the first to see life in terms of overlapping cycles and functions, and Stu Kauffman has recently expressed some of this at length. The point is, the information processing metaphor gets you nowhere unless you have the physical answer to "how?", the motivational answer to "for what purpose?", and the mechanistic answer to "how do we do it in the right time and place?"
As far as I can see, Seth is indulging himself in some very sophisticated entropy calculations, assuming that all the entropy is available as "information". (Actually, I'm not sure they are too sophisticated, but they are spectacular.) In fact, so far we have always found it a valid assumption that useful information is a negligible fraction of the physical entropy flows, and I suspect it is exactly for the above reasons: to use information in any meaningful sense we need, at the very least, a macroscopic object which is big enough to be rigid and warm enough to be irreversible. There may be other ways to manage it, but they haven't been evolved yet, thankfully!
From: Lee Smolin
Date: July 17, 2001
I like Seth Lloyd's optimism and playfulness, but I must admit I am skeptical about claims such as "Lloyd's Hypothesis" that "everything that's worth understanding about a complex system, can be understood in terms of how it processes information". I am worried even more by Seth's statement that, "Just by existing, all physical systems register information, just by evolving their own natural physical dynamics, they transform that information, they process it."
Implicit in these statements seems to be a radical assumption, which is that the evolution of any physical system can be understood to be a computation. It would certainly be interesting if this were true, but to the extent to which I understand them, these claims seem to be either trivially false or almost trivially true, depending on whether by computation you mean something carried out by an ordinary computer or a quantum computer.
Certainly it is false that the laws of classical physics can be equated to an ordinary computation, i.e. to the operation of a Turing machine. The laws of classical physics involve an infinite number of continuous degrees of freedom; no one can claim that this can be equivalent to a finite algorithm which takes finite strings of zeros and ones to other such strings. One might argue that the laws of classical physics can be arbitrarily well approximated by a computer, but there are counterexamples; let me refer to the discussion following Freeman Dyson's comment for critiques of this. In any case, the world is quantum, and it turns out that ordinary computers are very bad at simulating quantum dynamics.
So Seth's claims must be referring to quantum computation. Now it is true, and Seth and others have supplied proofs, that a quantum computer can simulate any quantum system with a finite dimensional Hilbert space. But this theorem is not as surprising as it sounds at first. It turns out to be a direct consequence of the two basic principles of quantum theory: the uncertainty principle and the superposition principle. From these one can show that any evolution operator in a low dimensional state space can be constructed from a small number of basic operations. The theorem follows by taking these to be the gates of a quantum computer. The statement only sounds surprising if one forgets that there is a profound difference between classical and quantum mechanical notions of state, information and computation.
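Building an evolution operator out of repeated basic operations can be illustrated in a deliberately trivial instance: rotations about a single axis, which compose additively. (The general theorem needs a universal gate set; this sketch only conveys the spirit.)

```python
import math

def rx(theta):
    """Single-qubit rotation exp(-i*theta*X/2) as a 2x2 complex matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Compose a coarse target evolution out of many repetitions of one
# small fixed "gate": a small set of basic operations builds up the
# whole evolution.
small_gate = rx(math.pi / 100)          # one elementary operation
u = [[1, 0], [0, 1]]
for _ in range(100):                    # apply it 100 times
    u = matmul(small_gate, u)

target = rx(math.pi)                    # the evolution we wanted
error = max(abs(u[i][j] - target[i][j]) for i in range(2) for j in range(2))
print(f"max entry-wise error: {error:.2e}")   # essentially zero
```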
The point is that quantum information involves non-classical features such as superposition and entanglement, that make it very different from Shannon information. Thus, even given the equivalence between quantum evolution and quantum computation, it remains the case that the set of quantum systems that (in some basis) compute a classical algorithm is of measure zero. The fact that there are claims which are trivially false when referring to classical information and almost trivially true when referring to quantum information should make us wary about using the word information in a way that confuses the two.
I also have the impression that the confusion of evolution with computation comes from taking approximations too seriously when building models of computers. A complete, quantum field theoretic description of a real classical or quantum computer would look nothing like the simple models which reduce the observables to those relevant for modeling a complicated piece of material as a logic circuit. The danger of relying on such models is shown by the fact that one can break codes, proved to be unbreakable in models, when they are implemented on real chips, by measuring observables not included in the simple models.
Beyond this, the assertion that physics can be described in terms of classical information, so that time evolution is a computation, seems to me to be a possible category error. The Shannon definition of information requires a context in which there is a sender and a receiver. Given this, it is meaningful to talk about the quantity of information sent from one to the other along a channel. Similarly, a classical computation is normally defined to be a realization of an algorithm, and an algorithm is not a random thing found in nature; it is something designed to realize an intention. Thus, there must be a programmer and a user who implement a prespecified algorithm, provide the input and receive the output. It is far from clear to me that information and computation are meaningful terms outside of such contexts. When people talk of information and computation as if they were things existing objectively out there in the world, in the same category as atoms and energy, I find myself stumped. I don't understand what it means to call a physical process a computation if one does not know, and in many cases cannot know, whether it is implementing any prespecified algorithm. I wonder whether they are succumbing to the common fallacy of interpreting physics in terms that require the existence of some universal god-like observer.
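Smolin's point that Shannon information presupposes a context can be made concrete: the measure is only defined once a source distribution (a sender choosing among messages) is fixed. A minimal sketch with a hypothetical four-message source:

```python
import math

# Shannon's quantity is defined relative to a communication setup:
# a sender drawing messages from a known distribution, and a receiver.
# Hypothetical source distribution over four messages:
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# H = -sum p*log2(p): average bits per message the sender conveys.
entropy_bits = -sum(q * math.log2(q) for q in p.values())
print(entropy_bits)  # 1.75 bits per message

# With no sender and no agreed distribution, there is no p to plug in --
# which is the context-dependence Smolin is pointing at.
```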
Could this category error be avoided by referring to quantum rather than classical information? Certainly not in present-day theories, which are based on quantum field theories with infinite numbers of degrees of freedom, and certainly not under the conventional interpretations of quantum theory, which do not require that every physical process have a sender and a receiver.
Looking to the future however, it cannot be excluded that the fundamental laws of physics will turn out to have a form that allows a description in terms of processing quantum information. There are suggestions, stemming from Bekenstein's bound and 't Hooft's holographic hypothesis, that some geometrical quantities can be reduced to measures of information. Some people in quantum gravity, such as David Finkelstein, Fotini Markopoulou and others have proposed formulations of quantum cosmology in which the history of the universe is constructed from processes in which finite amounts of quantum information flow from event to event. But a complete reformulation of physics in terms of information will require a true revolution. For Seth Lloyd and others to speak as if this has already taken place is as if Copernicus had asserted that Newton's concept of force was already the central concept in physics.
From: John McCarthy
Seth Lloyd's interview about the ultimate limits on computer performance does not belong under the category of "Rebooting civilization".
It isn't lack of compute power that is killing off dot coms; it's lack of good ideas about what to do with the compute power.
Lloyd doesn't mention artificial intelligence, but AI is where advances are required if computers are really to "reboot civilization." Robot domestic servants, even for the poor, will revolutionize society, but we are far from being able to program them.
The present developments in computer speed, communication speed and the internet are useful but not revolutionary.
I offer the following importance scores for developments in human communication.
From: Jaron Lanier
Suppose one had an ultimate laptop. How much effort would it take to program it? How hard would it be to debug it?
The great thing about the tiny computers of the mid-twentieth century was that a single person could write a great program on one of them. A good example would be Ivan Sutherland's Sketchpad program from the mid-1960s.
Now that computers are a million times more capable, there is almost never a useful program that isn't written on top of other programs, and indeed more effort usually goes into coping with software legacy issues than inventing the new stuff.
If Nature is a big computer, then Evolution might give us a hint of how long it takes to write a big program in it. I'm not sure if the metaphor really holds, but humor me. The answer: Billions of years.
Well, maybe. It depends on what you mean by "Big".
A liter of bacteria-rich garden soil and a liter of a living mammalian brain both contain cells exchanging information. I have seen claims that the number of biologically meaningful signals per second in each case is similar. I am not a soil expert, so I don't know if this is true, but suppose that the claim is within a few orders of magnitude of being true.
Then we could say that evolution only took a fraction of a billion years to write a great program on Earth (with the appearance of bacteria), and that the intervening years leading to the appearance of humans have just been a value-free random walk.
As a biased observer, I think of my brain as doing more important processing than a similar volume of garden soil, even if the number of events is the same, from a thermodynamic point of view.
We now arrive at that mysterious cop-out known as "Semantics". No one really can say what that is. One reasonable guess is that semantics has something to do with causal potential.
It seems fair to say that not all information processing is equally important. Some information processing events are almost certainly doomed to be irrelevant to the future, while others are very likely to cause significant effects. For instance, it is extremely unlikely for any one bacterium's signal in a liter of soil to have any effect on the future. Even if such a signal might potentially be measured in isolation in an experiment, in practice its effects are almost always lost in a statistical gloss. A biologically similar signal in a person's neuron, on the other hand, might very well cause a war, or a seduction, or the crash of a dot com. At least there's a vastly greater likelihood of an effect on the future when certain bits switch in a mind.
What makes these two events different, other than the tautological observation that in their specific contexts they have differing potentials for changing the future? One hint is that the pattern of cell signals in a brain appeared much later in the process of evolution than the pattern of cell signals in soil. The brain is built on many layers of biological legacy and interacts with the world on a greater variety of emergent layers of description.
A bit with semantic relevance implies a mise-en-scène in which many other bits are lost in the noise.
Seth Lloyd seems to be more of a syntax man than a semantics man. Here is a quote that demonstrates this:
"Somewhere along the line we developed natural language, which is a universal method for processing information. Anything that you can imagine is processed with information and anything that could be said, can be said using language."
Human communication with other humans, and with the rest of the world, is a full sensory experience, and must be understood on many levels of description. It's not even clear that linguistic expression can work as an entirely freestanding phenomenon.
It even turns out that if you rely on language as the sole interface for computers, people can't use them once they grow beyond a certain scale, and we crossed that boundary in our Moore's Law-paced engineering adventures about a decade ago. We must now rely on visual/spatial interfaces mixed with linguistic ones, which leads interface software, such as Windows, to be much more complicated and less reliable.
The field of quantum computing still conceives of future quantum computers as ever bigger calculators, but if and when the machines scale, they will probably have to interact with the larger world on a broader basis, and their programs might not be merely simple algorithms. Yes, it is a sad fact that big computers often also have to be complex in order to interface usefully with their surroundings.
I would love to be able to say something formal about the minimum amount of time it takes to program a giant computer usefully; to make semantic leaps, from soil to brain. If exhaustive configuration-space searches are what must happen to program the ultimate laptop, then it would probably be necessary to artificially make the laptop something less than ultimate in order to write software reasonably quickly. This is because algorithms to exhaustively search configuration spaces do not scale pleasantly. Since there are a lot of variables in this thought experiment, I have to confess I'm not sure how ugly the scaling would be, but it's easy to imagine the task overwhelming the powers of even the ultimate laptop to program itself within a period of time comparable to the age of our universe in a way that was interesting enough to justify even a cheap thought experiment, much less a real physical one.
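The unpleasant scaling of exhaustive configuration-space search can be sketched with round, assumed numbers (a hypothetical tester checking 10^15 configurations per second):

```python
# Back-of-envelope sketch (assumed, round numbers) of why exhaustive
# search "does not scale pleasantly": the space of n-bit configurations
# has 2**n members, so the search time grows exponentially with n.
rate = 10**15              # hypothetical: configurations tested per second
seconds_per_year = 3.15e7  # roughly one year, in seconds

for n in (50, 100, 200):
    years = 2**n / rate / seconds_per_year
    print(f"n = {n:3d}: ~{years:.1e} years of exhaustive search")
```

Even at this generous testing rate, a 200-bit configuration space takes vastly longer than the roughly 1.4e10-year age of the universe, which is the point of the thought experiment.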
One way to accomplish a reduction of a configuration search (so that it is no longer exhaustive, but still fruitful) is by creating a hierarchy of legacies that essentially store the results of intermediate searches. This type of reduction might even turn out to be unavoidable in sufficiently large computers. Maybe it has something to do with semantics.
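One minimal reading of this "hierarchy of legacies" is memoization: storing the results of intermediate searches so that higher layers never repeat lower-layer work. A toy sketch, using a made-up recursive subproblem in place of a real search:

```python
from functools import lru_cache

# Toy stand-in for a search subproblem: counting paths through an
# n-step choice structure, defined recursively in terms of smaller
# searches. The cache is the "legacy": stored intermediate results.
calls = 0

@lru_cache(maxsize=None)
def ways(n):
    global calls
    calls += 1
    if n <= 1:
        return 1
    return ways(n - 1) + ways(n - 2)   # reuse two smaller searches

result = ways(40)
print(result, "computed with only", calls, "subproblem evaluations")
# Without the cache the same recursion would need on the order of
# 10**8 calls; with it, each of the 41 subproblems is evaluated once.
```

The exponential-to-linear collapse here is the standard payoff of caching intermediate results, which is one way to make Lanier's suggestion that legacies tame configuration searches concrete.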
A formal theory of legacies would give us a sense of how well software might be able to catch up with hardware's Moore's Law pacing. As I've argued elsewhere (see the .5 Manifesto), software has so far not been improving at nearly the speed of hardware. In Evolution, we see reproducing organisms apparently appearing fairly early in the history of our planet, but multi-cellular creatures only emerging much later. Once higher levels of order began to appear, however, there seems to have been a process of acceleration. So perhaps software quality can also accelerate, but only at a much slower rate than is suggested by Moore's Law.