" I can repeat the question but am I bright enough to ask it? "


Edge 78 — November 20, 2000

(10,000 words)


THE THIRD CULTURE

IT'S A MUCH BIGGER THING THAN IT LOOKS
A Talk with David Deutsch

However useful the theory [of quantum computation] as such is today and however spectacular the practical applications may be in the distant future, the really important thing is the philosophical implications — epistemological and metaphysical — and the implications for theoretical physics itself. One of the most important implications from my point of view is one that we get before we even build the first qubit [quantum bit]. The very structure of the theory already forces upon us a view of physical reality as a multiverse. Whether you call this the multiverse or 'parallel universes' or 'parallel histories', or 'many histories', or 'many minds' — there are now half a dozen or more variants of this idea — what the theory of quantum computation does is force us to revise our explanatory theories of the world, to recognize that it is a much bigger thing than it looks. I'm trying to say this in a way that is independent of 'interpretation': it's a much bigger thing than it looks.



HOW DEMOCRACY WORKS (OR WHY PERFECT ELECTIONS SHOULD ALL END IN TIES)
By W. Daniel Hillis


Many people believe that democracy works by giving voters a chance to elect a candidate whose views match their own. Actually, this isn't true. In a perfectly functioning democracy, both candidates will appear equally imperfect, voter turnout will often be low, and all elections will end in near ties. The illustrations below show why this is true. They also show why a two-party system is better than a many-party system. Voters are more likely to be happy with their choice of candidates in a many-party system, but they are less likely to be happy with the winner of the election.


THE REALITY CLUB

• Jaron Lanier responds to Reality Club comments on his .5 Manifesto from George Dyson, Freeman Dyson, Cliff Barney, Bruce Sterling, Rodney Brooks, Henry Warwick, Kevin Kelly, Margaret Wertheim, John Baez, Lee Smolin, Stewart Brand, Daniel C. Dennett, and Philip W. Anderson; Lanier's postscript on Ray Kurzweil

Gregory Benford on "Time Loops" by Paul Davies


THE THIRD CULTURE

IT'S A MUCH BIGGER THING THAN IT LOOKS
A Talk with David Deutsch


Introduction

In 1998 Oxford physicist David Deutsch was awarded the Paul Dirac Prize "For pioneering work in quantum computation leading to the concept of a quantum computer and for contributing to the understanding of how such devices might be constructed from quantum logic gates in quantum networks."

"Quantum computing," Deutsch says, "is information processing that depends for its action on some inherently quantum property, especially superposition. Typically we would superpose a vast number of different computations potentially more than there are atoms in the universe and then bring them together by quantum interference to get a result. Other quantum computations, notably quantum cryptography, couldn't be done by classical computers even in theory."

Deutsch's work on quantum computation has led him into two important areas of research concerning (a) "the structure of the multiverse making precise what we mean by such previously hand-waving terms as 'parallel', 'universes' and 'consists of'. It turns out that the structure of the multiverse is largely determined by the flow of quantum information within it, and I am applying the techniques we used in that paper to analyse that information flow"; and (b) "a generalization of the quantum theory of computation, to allow it to describe exotic types of information flow such as we expect to exist in black holes and at the quantum gravity level. This is all in the context of my growing conviction that the quantum theory of computation is quantum theory."

According to Deutsch, one spinoff from the quantum theory of computation is that "it provides the clearest and simplest language, and mathematical formalism, for setting out quantum theory itself."

– JB

DAVID DEUTSCH'S research in quantum physics has been influential and highly acclaimed. His papers on quantum computation laid the foundations for that field, breaking new ground in the theory of computation as well as physics, and have triggered an explosion of research efforts worldwide. His work has revealed the importance of quantum effects in the physics of time travel, and he is an authority on the theory of parallel universes.

Born in Haifa, Israel, David Deutsch was educated at Cambridge and Oxford universities. After several years at the University of Texas at Austin, he returned to Oxford, where he now lives and works. He is a member of the Centre for Quantum Computation at the Clarendon Laboratory, Oxford University. He is the author of The Fabric of Reality.

See David Deutsch's Edge Bio Page


IT'S A MUCH BIGGER THING THAN IT LOOKS
A Talk with David Deutsch


EDGE: In what direction are you asking the most questions at the moment?

DEUTSCH: The direction of even deeper connections between physics and the theory of computation. We've got the quantum theory of computation — which, by the way, is THE theory of computation. As I always say, we have to regard the Turing theory (the traditional theory of computation) as being just the classical approximation to the real, quantum theory of computation. We already know of a few issues in theoretical physics (like the Maxwell Demon question, and the relationship of thermodynamics with statistics) which it is useful to regard as computational questions — questions about how information can or cannot be processed. What I am aiming for now is a new kind of theory, quantum constructor theory, which is the theory of what can be built, or more generally, the theory of what can be done, physically.

We build computers and skyscrapers and space ships, and we clone animals, and so on. At root you can regard all of these too as computations, because when you build a space ship and fly it to a different place, you get new information, or rather a different perspective on the same information, which is just what happens when you input information into a computer and look at the output. However, flying in a spaceship is not quite the same, even computationally speaking, as putting a camera on the space ship and letting it go somewhere, and watching, because, for instance, there's a time delay, so the machine gets harder to interact with if it's far away. Experience is inherently interactive, so there's a fundamental difference, imposed by the laws of physics, between the information processing you can do by going there vicariously using a robot and what you can do going there in person.

I've been thinking about those questions; that is, what sorts of computations do physical processes correspond to; which of these 'computations' can be arranged with what resources? And which sorts can't be arranged at all? What little we know about this new subject consists of a few broad limitations such as the finiteness of the speed of light. The theory of computability and complexity theory give us more detail on the quantum side. But a big technological question in my field at the moment is, can useful quantum computers actually be built? The basic laws of physics seem to permit them. We can design them in theory. We know what physical operations they would have to perform. But there is still room for doubt about whether one can build them out of actual atoms and make them work in a useful way. Some people are still pessimistic about that, but either way, that debate is not really a scientific one at the moment, because there is no scientific theory about what can and can't be built. Similar questions are raised by the whole range of nanotechnology that has been proposed in principle. So that's where a quantum constructor theory is needed.

EDGE: Why specifically a quantum constructor theory?

DEUTSCH: Because quantum theory is our basic theory of the physical world. All construction is quantum construction.

EDGE: What is distinctive about a quantum computer, compared to the computers we know today?

DEUTSCH: Quantum computing is information processing that depends for its action on some inherently quantum property, especially superposition. Typically we would superpose a vast number of different computations — potentially more than there are atoms in the universe — and then bring them together by quantum interference to get a result. Other quantum computations, notably quantum cryptography, couldn't be done by classical computers even in theory.

EDGE: What is the importance of this work?

DEUTSCH: Apart from quantum cryptography, it's unlikely to have practical applications in the near or medium-term future. It's theoretical. But as such it does give us some immediate benefits. One is the benefit of looking backwards. Let me give you a recent example from my own work.

Quantum mechanics, in the traditional formulation, seems to have a 'non-local' character: that is, things you do HERE instantaneously affect things that happen THERE. It has been known from the beginning that this 'non-locality' can't be used to send signals or anything. But still, philosophically, what are we to make of it? What sort of reality is quantum mechanics telling us we live in? And of course it's hard not to wonder: "well, if something gets there instantaneously, it is going faster than light. So in another reference frame it's travelling into the past. So it could create paradoxes; couldn't that solve the problem of consciousness, explain telepathy, summon up ghosts...?" — you name it. This non-locality idea is one of the things that's helped to fuel the appalling mysticism and double-talk that's grown up around quantum mechanics over the decades.

But once you understand that this is all about information processing, it becomes much easier to stop hand-waving and start calculating where the information actually goes in quantum phenomena. That's what Patrick Hayden and I did. The results (recently published in Proceedings of the Royal Society — see http://xxx.lanl.gov/abs/quant-ph/9906007) blow the 'quantum non-locality' misconception clean out of the water. Doing things HERE can only affect things THERE (visibly or invisibly) once the information about what you've done here has travelled there in some information-carrying physical object. Nothing instantaneous; nothing non local, nothing mystical.

EDGE: What about the famous experiments that demonstrate quantum non-locality in the lab?

DEUTSCH: They don't. They demonstrate quantum entanglement: one of the fundamental quantum phenomena, but a local one. It turns out that when it looks as though there's a non-local effect — as in Bell-inequality experiments — what's really happening is that some of the information in quantum objects has become inaccessible to direct observation. And in our analysis we actually track how this information travels during entanglement phenomena. It never exceeds the speed of light, and it always interacts in a purely local way.
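
The locality claim can be made concrete with a standard textbook calculation, sketched below in Python (an illustration, not Deutsch and Hayden's actual analysis; it assumes numpy). Whatever measurement Alice performs on her half of an entangled pair, Bob's local statistics — his reduced density matrix — are unchanged, which is why entanglement cannot be used to send signals:

    import numpy as np

    # A Bell pair (|00> + |11>)/sqrt(2), written as a joint density matrix.
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho = np.outer(bell, bell)

    def bobs_state(rho):
        """Trace out Alice's qubit, leaving Bob's local density matrix."""
        return np.einsum('abac->bc', rho.reshape(2, 2, 2, 2))

    print(bobs_state(rho))        # I/2: maximally mixed

    # Alice measures her qubit in the 0/1 basis. Averaged over her
    # (random) outcomes, the joint state becomes:
    P0 = np.diag([1.0, 1.0, 0.0, 0.0])   # projector onto Alice reading 0
    P1 = np.diag([0.0, 0.0, 1.0, 1.0])   # projector onto Alice reading 1
    rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

    print(bobs_state(rho_after))  # still I/2: Bob sees no change at all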

By the way, the presence of such not-directly-accessible information can be seen as the very thing that's responsible for the power of quantum computers. The insights we gained from that work are leading in other very promising directions too.

EDGE: Such as?

DEUTSCH: Well, I am currently working on two spin-offs of that paper. One is work on the structure of the multiverse — making precise what we mean by such previously hand-waving terms as 'parallel', 'universes' and 'consists of'. It turns out that the structure of the multiverse is largely determined by the flow of quantum information within it, and I am applying the techniques we used in that paper to analyse that information flow. The other is a generalization of the quantum theory of computation, to allow it to describe exotic types of information flow such as we expect to exist in black holes and at the quantum gravity level. This is all in the context of my growing conviction that the quantum theory of computation is quantum theory.

Speaking of that, another spinoff from the quantum theory of computation is that it provides the clearest and simplest language, and mathematical formalism, for setting out quantum theory itself. I'm planning a series of lectures on video which I think will be quite revolutionary. They will constitute a course in quantum theory for an audience that has no previous knowledge of it — say, university-entry level — all the way to leading-edge issues in quantum computation, in just twelve lectures (we're currently looking for a sponsor for them, by the way!).

I think that in future, quantum mechanics textbooks will use quantum computations as their introductory examples, rather than calculating the energy levels of the hydrogen atom and suchlike, which contain a high proportion of irrelevant stuff. Quantum computation gets down to basics, because quantum computation is the basics.

EDGE: But for you, the main application of the theory is to change our sense of the nature of reality?

DEUTSCH: Yes. However useful the theory as such is today and however spectacular the practical applications may be in the distant future, the really important thing is the philosophical implications — epistemological and metaphysical — and the implications for theoretical physics itself. One of the most important implications from my point of view is one that we get before we even build the first qubit [quantum bit]. The very structure of the theory already forces upon us a view of physical reality as a multiverse. Whether you call this the multiverse or 'parallel universes' or 'parallel histories', or 'many histories', or 'many minds' — there are now half a dozen or more variants of this idea — what the theory of quantum computation does is force us to revise our explanatory theories of the world, to recognize that it is a much bigger thing than it looks. I'm trying to say this in a way that is independent of 'interpretation': it's a much bigger thing than it looks.

EDGE: What do you mean by 'bigger'?

DEUTSCH: What I mean is — suppose we were to measure 'amounts' of reality, the sizes of things, in terms of the amount of information needed to describe them. To specify the positions of the atoms in this room, I need three numbers for each atom. The more atoms I want to describe, the more numbers I need. The more accurately I want to do it, the more decimal places I need to give. So that requires a certain amount of information. I can think of doing that for the whole universe. That may sound like a lot of information, because there are 10^80-odd atoms in the known universe, not to mention the other degrees of freedom. So it may seem unimaginably vast. Yet it is minuscule compared to the amount of information that would be needed to specify the computational state of a single quantum computer, sitting on some future laboratory bench. So in terms of world view, or conceptual model, a quantum computer is a much bigger object than the whole of the classical universe. This fact forces quite a change in our world view.
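
To put rough numbers on that comparison, here is a back-of-the-envelope calculation in Python (an editorial illustration using the approximate figures Deutsch cites, not part of the talk):

    import math

    # Classical description of the known universe: roughly three position
    # coordinates for each of ~10^80 atoms.
    classical_numbers = 3 * 10 ** 80

    # A register of n qubits needs 2^n complex amplitudes to specify its
    # general state. How many qubits before that exceeds the classical count?
    n = math.ceil(math.log2(classical_numbers))
    print(n)  # 268: a few hundred qubits already require more numbers than
              # a classical description of the entire visible universe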

EDGE: So the theory tells us that a quantum computer is in itself a universe.

DEUTSCH: It would be an object far more complex than the whole of the classical universe. The whole of physical reality is like that too, of course, and we sometimes call it the multiverse. We see, very roughly, a classical universe out there because most of the multiverse is not directly accessible. You can only infer the existence of hidden quantum information indirectly, as in the entanglement experiments I mentioned.

To many people this conclusion was already compelling even before quantum computers. The many-universes interpretation was proposed in 1957. But you can construe all the earlier arguments as being computational arguments too. The people making them didn't think of them as such, but that's what they were. They were saying: we look around us and we see something that's approximately a classical universe, and we might expect that if you take quantum mechanics into account, that might add a certain amount of extra 'stuff' — like relativity did — which behaves differently but there's still roughly the same 'amount' of reality as we thought there was. But that's not what happens when you take quantum mechanics into account. Reality becomes a vastly, exponentially bigger and more complex thing than it was under classical physics.

EDGE: How can we tell that there's so much of this 'hidden information' in a quantum system?

DEUTSCH: If the system is a quantum computer, we can tell because of the answers that it gives us. Take Grover's quantum search algorithm, for instance. It works like this: Let's say you're writing a chess program; you're searching through all the possible continuations from a given position. From one position there might be 20 possible moves, and from each of those there are 20 for the other player, and so on, so after N moves there are 20 to the power N different possible continuations. And you want to program the computer to search through all those continuations to evaluate a given move. Say you want it to search through a trillion continuations. It is a trivial theorem of classical computation that if you want to search through a trillion unknown things, you generally have to do a trillion physical operations of some kind. You might be able to do some of them in parallel, but a given computer will only be able to do a fixed number at a time in parallel. One way or another you have to do a trillion things, so if you want to use the same computer to search through two trillion things it must take at least twice as long, and so on.

But with a quantum computer, you could do better: First of all, to search through a list of a trillion things you need only do a million operations. In general, in order to search through N possibilities one need only do the square root of N physical operations. And then, if you let your quantum chess machine think for twice as long, it will examine four times as many continuations. Three times as long, nine times as many, and so on. The explanation of this, in terms of many universes, is very simple. It's just that there are the square root of N universes collaborating on such a task. But again, never mind the question of interpretation as such. If we just think of what this computation implies for the reality we find ourselves in, again, the answer is that reality is much bigger than it looks. The winning move, when we find it, logically depends on all the positions we searched. So as a matter of logic, those positions must all have existed somewhere, and been compared with the answer we got.
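
Grover's algorithm is easy to simulate for small cases. The following Python sketch (an illustration, not from the talk; it assumes numpy and simulates the quantum state on a classical machine, which is itself exponentially wasteful) shows the square-root scaling Deutsch describes, finding one marked item among about a million in roughly 800 steps:

    import numpy as np

    def grover(n_qubits, marked):
        """Toy statevector simulation of Grover's quantum search."""
        N = 2 ** n_qubits
        state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
        steps = int(round(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) * sqrt(N)
        for _ in range(steps):
            state[marked] *= -1.0                   # oracle: flip the target
            state = 2 * state.mean() - state        # inversion about the mean
        return steps, abs(state[marked]) ** 2

    steps, p = grover(20, marked=123456)            # N = 2^20, ~10^6 items
    print(steps, p)  # ~804 steps, success probability near 1, versus
                     # ~500,000 probes expected of a classical search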

EDGE: There seems to be a gap here, between abstract information on the one hand, and physical objects such as computers and stars and universes on the other. What's the connection?

DEUTSCH: Ultimately, information has got to have a physical realization; that's why it does come down to atoms, or stars, or whatever, in the end. But because of the universality of computation you don't have to think in terms of specific implementations. I don't have to know whether my information is going to be stored on a magnetic disc, or whatever. I just know that more information means a bigger object.

EDGE: Where is work on quantum computation being done?

DEUTSCH: More and more places every day, it seems. In the US alone there are about a dozen very high quality research groups working flat out on quantum computation, theoretical and experimental. Probably another dozen in Europe. Also Japan, Australia, Israel...

EDGE: Let's talk about practical things. You're at a Microsoft, an Intel, a Sun Microsystems, and you read about David Deutsch and his theories about quantum computing. How will it impact your business? What measures would you take?

DEUTSCH: If computers are going to continue to become more powerful, processors and memory devices must become smaller. For that reason alone, quantum processes must be harnessed. Whether to make quantum computers or not doesn't really matter. Even to make classical computers out of atomic-scale components you'd have to use quantum physics and ultimately the quantum theory of computation. And once you're making those, the same technology could probably also make quantum computers. And the incentive would be there because of the various inherent advantages of quantum computation.

EDGE: How would you build one?

DEUTSCH: Proposed technologies for building them are at present competing. We don't know which way it's going to go. It could be ion traps or it could be quantum dots, or other solid state devices, or it could be superconducting loops. It could be molecules, or something we don't know about yet.

At present the biggest quantum computer in the world has about 3 qubits. Not much practical use, and it requires quite a large apparatus to make it work. Yet with three qubits you can already implement quantum algorithms that no classical computer using three bits could mimic.

Quantum cryptographic devices already exist in the laboratory. Eventually that's going to give perfectly secure communication. No longer will cryptography depend on the difficulty, or the intractability, of guessing an unknown key. It will simply be physically impossible to discover the key if you don't have the relevant physical object. So that is the ultimate in cryptography.
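
The best-known scheme of this kind is the BB84 protocol of Bennett and Brassard. Here is a toy simulation in Python (an illustration, not from the interview; it models only the bookkeeping, with a random bit standing in for a photon measured in the wrong basis):

    import random

    def bb84(n):
        """Toy BB84: Alice sends n bits in random bases; Bob measures in
        random bases; positions where the bases match become the key."""
        alice_bits  = [random.randint(0, 1) for _ in range(n)]
        alice_bases = [random.choice('+x') for _ in range(n)]
        bob_bases   = [random.choice('+x') for _ in range(n)]
        # A matching basis reproduces Alice's bit exactly; a mismatched
        # basis (like an eavesdropper's wrong guess) yields a random bit
        # and disturbs the photon, which is what makes snooping detectable.
        bob_bits = [a if ab == bb else random.randint(0, 1)
                    for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        # Bases (not bits) are compared over a public channel; the matching
        # positions, about half of them, form the shared secret key.
        return [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
                if ab == bb]

    print(bb84(32))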

EDGE: We know that historically, advances in cryptography have been suppressed by governments. Could it be that quantum computers will never come on the market because people will make sure they don't?

DEUTSCH: If so, I know nothing about it. Both in Britain and America there are government agencies working on quantum cryptography, and as far as I can tell, they participate in much the same way as they would if they were academic institutions. Presumably they have their secrets — I hope they do! — but I'm not aware of them having tried to prevent any of these technologies from being developed, let alone theoretical advances. But I do find it a bit surprising, now that you come to mention it, that there isn't already a quantum cryptographic device on the market.

EDGE: For e-business?

DEUTSCH: No. The trouble is that at the moment quantum cryptography is severely limited in range. It can't be done through open air. It's got to be done through fiber-optic cable, and I think the world record is about 100 kilometers. But still, you could wire up the City of London, or central Washington DC, with absolutely secure communications. I don't know why that hasn't been done. I doubt that it has anything to do with sinister machinations by the government, though. It's probably just that it takes a long time for an idea to become genuinely commercially viable.

EDGE: What if there was a critical situation such as a war which required security?

DEUTSCH: In that case we already know how to build absolutely secure communications if we want to, at ranges of a few kilometers. Longer ranges would present a problem, but at least one group at Los Alamos is working on a system that would allow you to bounce quantum-encrypted messages off a satellite, and that would essentially solve the problem.

In the long run the problem could also be solved by quantum repeating stations. Unfortunately they would require much more sophisticated quantum computation than the raw cryptography does. They will come along eventually, perhaps in a decade or two.

Another thing that will come along — probably after more than a decade or two — is quantum cryptanalysis, where you would use a quantum computer to decrypt existing codes. Quantum decryption machines would render existing cryptographic systems obsolete.

EDGE: Ten years from now will I have any quantum-computer technology on my desk?

DEUTSCH: I guess not. But who knows? I've been surprised repeatedly by how well the experimentalists have been able to implement theoretical concepts in quantum computing. But apart from quantum cryptography I'd be amazed if anything technologically useful comes out in ten years, 20 years, even longer. But I've been amazed before.


THE THIRD CULTURE

HOW DEMOCRACY WORKS (OR WHY PERFECT ELECTIONS SHOULD ALL END IN TIES)
By W. Daniel Hillis

Introduction

Danny Hillis, physicist and computer scientist, brings together, in full circle, many of the ideas circulating among third culture thinkers: Marvin Minsky's society of mind; Christopher G. Langton's artificial life; Richard Dawkins' gene's-eye view; the plectics practiced at Santa Fe. Hillis developed the algorithms that made possible the massively parallel computer. He began in physics and then went into computer science — where he revolutionized the field — and he brought his algorithms to bear on the study of evolution. He sees the autocatalytic effect of fast computers, which lets us design better and faster computers faster, as analogous to the evolution of intelligence. At MIT in the late seventies, Hillis built his "connection machine," a computer that makes use of integrated circuits and, in its parallel operations, closely reflects the workings of the human mind. In 1983, he spun off a computer company called Thinking Machines, which set out to build the world's fastest supercomputer by utilizing parallel architecture.

The massively parallel computational model is critical to an understanding of today's revolution in human communication. Hillis's computers, which are fast enough to simulate the process of evolution itself, have shown that programs of random instructions can, by competing, produce new generations of programs — an approach that may well lead to the first machine that truly "thinks." Hillis's work demonstrates that when systems are not engineered but instead allowed to evolve — to build themselves — then the resultant whole is greater than the sum of its parts. Simple entities working together produce some complex thing that transcends them; the implications for biology, engineering, and physics are enormous.

— JB

W. DANIEL (DANNY) HILLIS, physicist and computer scientist, is co-chairman of the Board of Directors of The Long Now Foundation, and co-founder of Applied Minds, Inc. Hillis pioneered the concept of parallel computers that is now the basis for most supercomputers. He co-founded Thinking Machines Corp., which was the first company to build and market such systems successfully.

He was named the first Disney Fellow in 1996 and served until recently as vice president of research and development at The Walt Disney Company. He is also an Adjunct Professor at the MIT Media Laboratory, and is the author of The Pattern on the Stone: The Simple Ideas That Make Computers Work (ScienceMasters Series).


HOW DEMOCRACY WORKS (OR WHY PERFECT ELECTIONS SHOULD ALL END IN TIES)
By W. Daniel Hillis

Many people believe that democracy works by giving voters a chance to elect a candidate whose views match their own. Actually, this isn't true. In a perfectly functioning democracy, both candidates will appear equally imperfect, voter turnout will often be low, and all elections will end in near ties. The illustrations below show why this is true. They also show why a two-party system is better than a many-party system. Voters are more likely to be happy with their choice of candidates in a many-party system, but they are less likely to be happy with the winner of the election.

For the purpose of illustration, let's assume that any issue can be boiled down to a single choice of a point on the political spectrum, from left to right. Of course, real issues are more complicated than this, but the general principles of democracy can be illustrated with just this simple caricature.

Here is a simple, successful election. The graph shows how many voters are at each point on the political spectrum. It also shows the positions of the candidates. The Good candidate is the one whose opinions are closest to the will of the voters.  Voters choose the candidate that is closest to their own position, so the Good candidate wins.

The dividing line shows where the vote splits. Voters to the left of the line will vote for the Good candidate, voters to the right of the line will vote for the Bad candidate.

Of course I don't mean that Left is Good and Right is Bad.  The picture works the same way if we flip it around.

Either way, the Good candidate wins, and the election is successful. Some voters are unhappy, but even more voters would have been unhappy if the Bad candidate had won.

In some cases most of the voters may be unhappy with the results. This depends on the shape of the opinion curve. Here is one type of unpleasant outcome:

In this case the voters' opinions are highly polarized, and the candidates are uncompromising. Almost half the population will be extremely unhappy with the result.

Here is a less unhappy variation:

In this case voters' opinions are highly polarized, but the candidates' positions represent a compromise. Almost all of the voters are relatively unhappy. As unpleasant as these outcomes may seem, they still represent successes of the democratic process. No other choice of leader would have led to a better result.

If we add a third candidate, the democratic process no longer necessarily produces the best result.

In this case the Spoiler candidate takes away enough votes from the Good candidate to allow the Bad candidate to win. This is very likely to happen if there are three parties.

In a many-party system, the voters are more likely to be happy with the choice of candidates, because they can find a candidate that is close to their own position. Unfortunately, the voters are less likely to be happy with the result of the election, because it will not necessarily choose the Best candidate. This situation is even worse when there are many viable candidates.

In a multiple party vote, each voter will be able to choose a candidate with opinions close to his or her own, but the candidate who gets elected will be the one that has the broadest constituency, not the one who best represents the will of all the voters. Because the worst candidates pick up the outliers, it is relatively easy for a very bad candidate to win.
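
The spoiler effect is easy to check numerically. In the following Python sketch (an illustration; the voter distribution and candidate positions are invented for the example), each voter votes for the nearest candidate and the largest tally wins:

    import random

    def plurality(voters, candidates):
        """Each voter picks the nearest candidate; biggest tally wins."""
        tally = {name: 0 for name in candidates}
        for v in voters:
            tally[min(candidates, key=lambda c: abs(candidates[c] - v))] += 1
        return max(tally, key=tally.get), tally

    random.seed(0)
    # 100,000 voters on a left-right spectrum, leaning slightly left.
    voters = [random.gauss(-0.2, 1.0) for _ in range(100_000)]

    print(plurality(voters, {'Good': -0.3, 'Bad': 0.8}))
    # Good wins with roughly two-thirds of the vote.
    print(plurality(voters, {'Good': -0.3, 'Bad': 0.8, 'Spoiler': -0.5}))
    # The left-of-centre vote splits, and Bad wins with about a third.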

Let's go back to the case of only two parties.

If the candidates are willing to be flexible, then either candidate can gain votes by moving toward the Best Position. The Best Position is where an equal number of voters are to the left and to the right. A candidate in the Best Position is unbeatable. A candidate in the Best Position also does the best job of making the voters happy, or at least making them less unhappy than they would be otherwise.

If the candidates have some flexibility in their opinions and good information about what the voters want, they will move their own positions towards the Best Position, because it increases their chances of being elected. The closer one candidate moves towards the Best Position, the closer the other candidate will have to move to remain electable. With good pre-election polling, both candidates will be able to determine very accurately how much they need to move. If they are both willing to adjust their positions near the Best Position, the outcome of the race will depend on the accuracy of the polling. If the polling is perfect, all elections will end in near ties.
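
In this one-dimensional model the Best Position is simply the median of the voter distribution, and the near-tie prediction follows directly. A short Python check (again an illustration, with an invented voter distribution):

    import random
    import statistics

    random.seed(1)
    voters = [random.gauss(0.3, 1.0) for _ in range(100_001)]
    best = statistics.median(voters)        # the Best Position

    def share(a, b):
        """Fraction of voters strictly closer to position a than to b."""
        return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

    print(share(best, best + 0.5))          # > 0.5: the median is unbeatable
    print(share(best - 0.01, best + 0.01))  # ~ 0.5: converged candidates tie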

This process of adjusting position in response to polling may seem to compromise the integrity of the candidate, but it does produce candidates whose opinion is very close to the Best Position. This may be regarded as a successful outcome, because a candidate in the Best Position also does the best job of making the voters happy.

Actually, a winner in the Best Position doesn't necessarily make many voters happy; it just makes them less unhappy than they would be with a different winner. In the previous illustrations, the best position was also the most popular position. This is not always the case.

In this final example, the voters are polarized and the Best Position is highly unpopular. Still, it represents the most electable position, and also the position that makes the fewest people very unhappy. This is the best result that any system can produce.

So in the end, two-party democracy is not necessarily good at giving voters a chance to elect a candidate that they like. If the polls are very accurate and the candidates are flexible, a successful election is likely to produce two candidates whom the voter will regard as equally imperfect. The election results will be very close.

For all its problems, two-party democracy does a good job of producing and selecting candidates that represent an acceptable compromise among a wide spectrum of opinions. If the process is working well, then by the time of the election many voters may feel that they have very little real choice. This may seem like a failure, but actually it is a sign of success. It means that the system has produced candidates that represent the most acceptable compromise of the conflicting opinions of the voters. If this process has worked perfectly, the result of the election will be a tie. Judging from the recent results of the American presidential election, democracy is working well.


THE THIRD CULTURE

• Jaron Lanier responds to Reality Club comments on his .5 Manifesto from George Dyson, Freeman Dyson, Cliff Barney, Bruce Sterling, Rodney Brooks, Henry Warwick, Kevin Kelly, Margaret Wertheim, John Baez, Lee Smolin, Stewart Brand, Daniel C. Dennett, and Philip W. Anderson; Lanier's postscript on Ray Kurzweil

Gregory Benford on "Time Loops" by Paul Davies


Jaron Lanier
Date: November 11, 2000

Hello to two generations of Dysons, Freeman and George, both of whom I admire. I must say that it is immediately apparent that our priorities are different. As I hope my essay makes clear, I am more concerned with how people design technology and relate to it psychologically than with the long term fate of the machines themselves. Whether or not George Dyson's critique is technically correct, in my opinion it is esthetically, ethically, and politically misguided, in that he is looking at questions solely from the perspective of the machines rather than from the perspective of people. I see that I have genuinely failed to communicate this most essential point in my essay across a cultural chasm, and it saddens me. My failure is made more plain by the flip theological references in George Dyson's note; he is apparently more comfortable deifying software than recognizing the value of human aspirations to rational design.

If a future develops in which Dyson would perceive new life forms to have arisen from adaptations of messy software, I would perceive instead a lot of anti-human programming and design resulting in opaque user interfaces, i.e. machines that no longer made sense to people. I would also perceive a loss of human drive to achieve elegance in software design and an abandonment of rational planning. The most important point in my essay is that our two differing interpretations would each be reasonably applicable to the same outcome. I am advocating one interpretation over the other for reasons that arise from human, rather than technical concerns.

The argument that the Dysons do address is a secondary one in my mind: to what degree messiness limits or enhances the future of software. The key question here is whether different kinds of unreliability are effectively interchangeable. George Dyson equates the failure modes of primordial chemistry with failure modes seen in contemporary software. This shouldn't be understood as a comparison between hardware and software per se, but between elements whose connections can only be described by statistics, like molecules, or indeed physical gates in a computer, versus elements that connect by Platonic logic.

Certainly the Dysons are correct to a degree, in the sense that error recovery algorithms can grant a "soft knee" to software failure modes that is reminiscent of the type of "statistical binding" seen in natural systems. Real computers as we know them are not built this way, of course. A thought experiment is different from a real-world viable machine.

In George Dyson's original posting, he said, "It is that primordial soup of archaic subroutines ... that is driving the push towards the sort of fault embracing template-based addressing that proved so successful in molecular biology".

If the question is framed in the future tense, then I understand what conversation we are having. (We're asking if evolving machines could hypothetically come to be in the future, perhaps the very far future.) I think this idea can be examined, and as I hope I made clear, I am open minded about it, although I maintain that an excessive emphasis on this possibility has negative effects on contemporary technology design and culture.

In more recent correspondence, George said quite plainly that, with regard to gaining autonomy through evolution, machines "have done so *already*".

This I truly cannot accept. If people stopped maintaining today's machines they would not only cease to change, they would cease to operate entirely. I'm sure George must agree with that: evolution based on small variations (mutations) allowed by error correction is not a possibility in machines as they exist today. So George must be talking about a system made of people and computers together. And here, certainly, I think we must agree that there is room for alternate interpretations: that one person's autonomous machine might equally well be another's machine with an inscrutable user interface. If we can agree on this chain of reasoning, then I would hope to discuss whether there are pragmatic reasons to favor one interpretation over the other in specific circumstances, such as our own.

In correspondence, George suggested that we should start to think of the internet as already being somewhat autonomous, since it runs even though people don't fully understand it anymore. (I hope I'm doing justice in my paraphrasing.)

My experience of current digital tools is that while there are certainly numerous instances in which people no longer understand the tools, it is also true that these are precisely the same instances in which the tools fail, in which they crash. The changes that result from a human observing a crash are usually not incremental mutations, searching a space blindly for a better configuration, but rather analysis-driven adjustments that force the machine to conform to a rational plan that was written down prior to testing. The plan might change, of course, but only on the human side of the system. I am not claiming that this is always the way that debugging happens (in fact I love to make little virtual worlds with quirky bugs I don't quite understand), but I am claiming that it is more true the larger a system gets.

The fact that Y2K bugs didn't destroy the world as feared is one piece of evidence that we are actually in charge of our machines, even though we like to fantasize that we aren't.

The examples I gave of people "making themselves stupid" in order to make software seem smart, as in the credit rating system, are ones in which people most definitely do understand the machines, to a fault.

The Internet as a machine seems comprehensible to me. At Advanced Network and Services, where the Internet 2 Engineering Office is located, and which is my primary perch these days, there's a fine project to measure activity on the net with probes all over the world, and the data are useful for rationally improving performance. No alien communication signals have appeared.

The failure modes of practical software are quite different from what is seen in chemical/biological systems. When a computer crashes (and I mean a real computer, not a thought experiment in a math journal), nothing else happens. There is no more processing. When an organism crashes, it turns into food for other organisms. Its information is not entirely lost from the system. I recognize that this point will probably fall on deaf ears with respondents who think of computers as already being autonomous and biological in some sense. I think a careful examination of computers as they are in the real world will show that all the "biological" properties of digital technology are brought to the table by the people who maintain the technology.

I don't think we know enough yet to say definitively whether the two kinds of unreliability (digital and biological/statistical) are ultimately, at some extreme of scaling, interchangeable.

I also don't perceive the evolution that George does in some of the examples he suggested in correspondence. In what ways have operating systems gotten better since the 70s? There are a few, but far fewer than anyone in the field ever imagined there would be. UNIX was, to a remarkable degree in retrospect, pretty much there at the start. I suppose it comes down to a subjective evaluation of how important various modifications since then have been.

The internet might provide better examples of the kinds of ongoing "evolution" George is talking about. There are still opportunities to create useful new subsystems, along the lines of the one operated by Akamai, for example. As another example, the TCP/IP protocol is probably the most common "soft failure mode" protocol in use, and it has improved over time, most notably with the advent of "slow start". But this happened when a human, Van Jacobson, had one of those thus far inscrutable "aha!" moments.

Ironically, I have for a long time nurtured a scheme to build an operating system out of components that would bind together using a pattern recognition approach (with so-called "neural nets") instead of literal reference, as part of my own war against "brittleness". Such a system, if I could ever get it to work, and I've tried, believe me, would be more in line with the Dysons' take on software than other architectures I am aware of out there in the real world today. (One sub-project of the Tele-immersion Initiative, bearing the acronym SOFT, which has been created in the last two years at the Computer Science Department of Brown University, could perhaps be seen as an early example of a "soft binding" architecture.)

To Cliff Barney:

Hey, I'm thinking as socially as I can. Wish it were social enough for you!

I gave the closing talk at Stanford University's Engelbart event that you mention. I presented a condensed version of the "missing half" of the manifesto there, and it's available on video (see http://unrev.stanford.edu/index.html). My preternaturally angelic and patient publishers are confident that I will somehow, someday soon finish the long overdue book that will unite both halves.

Human society didn't change all THAT much during the course of the million-fold increase in computer power that you identify, from 1968 to roughly the end of the century. Certainly society changed more (as a result of technological provocation) in the previous 30 years, which saw the introduction of television, the birth control pill, factory-based genocide, the atomic bomb, LSD, the electric guitar, suburbia, the freeway, the middle class, and so much more. Globalism isn't all that new either. You can read passages in Marx on the internationalization of capital that sound exactly like dot com press releases from the recent boom years.

The last thirty years have seen such things as the rise of Gay rights and working moms, but it seems to me that many of these changes are most easily interpreted as extensions of processes that began before 1968. (As an example, I'm amazed that so much of today's teenage culture is as similar as it is to that of the 1950s and 1960s. The (white) music even sounds about the same as it did in the 1960s. The music of 1968 sounded quite different from the music of 1938.)

People talk about digital technology more than they use it. They tend to overstate how much they have been affected by it. I don't say this as a criticism. It's a most fascinating thing to talk about. Here I am doing it.

I think what's going on is that digital technology does not affect the lives of people until new culture, expressed both in software implementations and in changing human habits, is invented for it. Non-digital technologies, on the other hand, present instant opportunities for meaningful events to take place. Point a movie camera at the world and that world is changed forever, even if an initial subject is nothing more than an approaching train. Digital technology is different because an intensely time consuming process must precede its efficacy. An excessive degree of conscious forethought (thwarting pretensions to Dionysian digital flights of fancy) and cumulative boredom characterize digital culture more than surprising revelation. The tedium gets to us all once in a while, and I think intellectual positions such as George Dyson's might serve as psychic comfort.

I am a true believer in the long term, lovely improvement of the human condition to be brought about by digital technology, but it's going to be a slow ride, because we have to build the code, piece by piece.

To Bruce Sterling:

A warm, brotherly bear hug for you!

To Rodney Brooks:

Your way of thinking is all too familiar, the standard issue point of view found in elite computer science departments. Glad you showed up, just in case anyone might have wondered if I was making up a straw man.

I made no claim as to whether machines could in theory become conscious or not. Instead I argued that such ultimate questions are not answerable, at least by anyone in our contemporary conversation.

I maintain, once again, that the most useful conversations we can have on such topics must be motivated by pragmatic, esthetic, and moral considerations.

Your certainty that you alone can identify the one true null hypothesis is a religious claim.

I hope it's clear that I was being snide and flip when I brought up nanobots. They are actors in a thought experiment, no more meaningful than artificial intelligence, and no more useful in thinking about how to design real machines, societies, and philosophies.

To Henry Warwick:

I'd like to address a plea to you and to other people who largely agree with me. Would you consider becoming immersed for a time in the other side's arguments, if only for the sake of dialog? They aren't stupid ideas, they're just wrong, and they deserve respect as smart, wrong ideas. If we humanists aren't willing to engage the CT crowd on their own terms once in a while, we can hardly expect them to invest in understanding our terms.

I'd also suggest decoupling such questions as whether the universe is deeply "mathematical", or whether it can be fully understood, from the design, legal, esthetic, and social levels where the ideas that root in the heads of technologists come to matter. The deep questions might never be answered. They must be asked, of course, but it is best to ask them separately. The pragmatic questions can not only be answered, but will be answered by our collective actions, whether we like it or not.

To Kevin Kelly:

I wrote the essay for my colleagues in the technology world, such as Rodney Brooks. Whether any of them are persuaded by it remains to be seen. My sense of this world is that it is currently not benefiting from a variegated ecology of metaphors, but rather is locked into a standard release of one metaphor.

To Margaret Wertheim:

I agree. Once Western culture defined itself as being on a ramp, the ramp had to go somewhere. The "other half" of the manifesto will be concerned with alternate ways of conceiving of the ramp's destination.

To John Baez:

Thank you for pointing out that a lot of folks in the "extropian" crowd seem to actually like the idea of goo taking over. I have come across this sentiment again and again. It is interesting in its own right, completely aside from whether Genghis Goo is a realistic scenario or not.

To Lee Smolin:

Thank you for this fascinating post.

I wish Stuart Kauffman would name his objects something other than "autonomous agents", since that is almost the same language CTers use to describe such things as the idiotic dancing paper clip that confuses users of Windows.

I'd like to encourage other respondents to address your ideas directly, instead of dragging the conversation down once again into eternal imponderables.

Some of the next deep (askable) questions: Will we someday be able to estimate how efficient natural evolution has been, in comparison to a theoretical ideal? Is evolution close to being as fast as it could be in searching the configuration spaces at hand, in the way that retinas are almost as sensitive to visible light as they could possibly be, or is there a lot of room for making evolutionary machines that would search practical configuration spaces much more quickly?

I'm also struck by how much more past computation is implied in some configurations than in others, and therefore wonder how your ontology relates to the various definitions of "information". Irreducible overhead in optimizing a configuration space (including legacy effects) might also be treated as a fundamental "distance" between configurations, and might serve as a basis for formal definitions of such things as species boundaries. This type of distance is also similar to some ideas about physical distance in recent computational quantum gravity models.

To Stewart Brand:

Yes, yes, yes! This is the explanation for the preponderance of exceedingly strident expressions of libertarian ideals in digital culture.

To Daniel Dennett:

You'll be happy to know I turned down Harper's Magazine and instead accepted Wired's offer to print the .5 Manifesto. I assure you I am in no danger of drowning in a friendly tsunami of Euro-admirers, for the simple reason that I am also a composer, and therefore the class of professional culture critics is sworn by blood oath to make my life difficult.

I'd like to be able to assert that neither of us understands something without being accused by CTers of sentimental, softheaded, retrograde religious dependency. I made no claim that there could never be an explanation for how people think, just that Darwin alone might not provide the framework for an explanation. No "half a skyhook", just an unsolved problem.

Straw men?

Read Rodney Brooks' posts and you'll see what I'm up against.

The rape book is silly, you just have to admit it. I could have quoted from dozens of clunkers in this odd text. There was a great passage about a woman raped by an orangutan, whose husband (the woman's husband, that is), as well as the woman herself, reported less consternation than they would have expected to experience if she had been raped by a person. No control group, sample size of one, reliance on subjective reportage, suspicious story; you could hardly come up with a lousier experiment. And yet this example was used to reinforce the idea that the real reason rape is disliked is selfish genes; that bestiality is relatively delightful because it doesn't interrupt human mating schemes. I'm not saying, and have never said, that the ideas in this book are completely or exactly wrong, but rather that the book is inept. I sympathize with your position. You're a little like a member of a political party who has to defend an incompetent candidate. The important question to ask here is whether the CT community is too self-satisfied. I haven't met the authors of the rape book, but I imagine they must be intelligent and well meaning, and that perhaps the giddy team spirit of CT blinded them and made them sloppy.

I didn't attack Dawkins in the piece, and in fact a genial debate between him and me has been published. He is, as I have pointed out in past writings, not a meme totalist, even though he spawned a generation of them. As for you on consciousness, I am gently teasing you, and you must admit that you have been quite a rough player in your own writings in the past.

To Philip W. Anderson:

Thank you for your provocative note.

An interesting thought experiment is to imagine what the history of science and civilization might have been like if digital computers had become practical before Newton. This is not an unimaginable sequence. The ancient Alexandrians or Chinese might have done it if fortune had granted either of them a millennium or so of tranquility. The Chinese scenario might be more likely, since they weren't thinking in terms of mathematical proof, but were very good at coming up with clever technologies and building massive works. They would perhaps have built stylish city block-sized medieval computers out of electromechanical switches. These would have emitted marvelous rhythms, and perhaps there would have been dancing on the sidewalks around them.

I suspect our counterfactual predecessors could have gotten to the moon, but not built semiconductors or an atomic bomb. They wouldn't have been forced to notice the problems that led us to understand relativity and quantum mechanics.

I think there would have been less of a divide between the sciences and the mainstream of society, because it is easier to write fresh and fun computer programs than it is to do original work in continuous mathematics. Instead of being shrouded in esoteric mystery, science and engineering would have seemed more accessible to the lay person. Kant or his equivalent would have built huge simulations of competing metaphysics instead of seeking proofs.

Back to the present: Computers might yet yield important new physics. Stephen Hawking simply made the usual error of underestimating the time it takes to figure out how to write good software. We shouldn't expect deep understanding of software to improve any faster than deep understanding of other things. Think of the time it took to move from Newton to Einstein. Intellectual progress is not governed by Moore's Law.


Postscript

Re: Ray Kurzweil

Much to my surprise, Ray Kurzweil and I spoke in succession (in Atlanta, at one of Vanguard's events) just as I was writing these responses. We see the world quite differently. He would certainly reject my last claim above, that fundamental intellectual achievement isn't inexorably speeding up.

I see punctuated equilibria in the history of science. Right now we're in the midst of an explosion of new biology. Around the turn of the last century there was an explosion of data and insight about physics. Physics is now searching for its next explosion but hasn't found it yet.

I also see a distinction between quantity and quality that Ray doesn't. I see computers getting bigger and faster, but it doesn't directly follow that computer science is also improving exponentially.

Ray sees everything as speeding up, including the speed of the speedup. In Atlanta, he collected varied graphic portrayals of exponential historical processes in a slide show, and labeled these a "countdown" to the singularity he predicts will arrive about a quarter of the way into the new century.

His exponential histories blend what others might think of as varied phenomena together into categories without differentiation. For instance, he showed a slide about Moore's Law, but with the timeframe not limited to the era of the silicon chip. Instead, he defines chips as just one of five technological phases that have upheld the exponential speedup of computation that started with the earliest mechanical calculation devices. He infers that the curve will be continued with nanotechnological or other devices once the limits of chip technology are reached, in perhaps twelve years. Likewise he showed a grand exponential account of the history of life on Earth that started with items like the Cambrian Explosion at the foot of the curve and soared to modern technological marvels at its heights, as if these were all of a kind.

I hope I can avoid being cast as the person who precisely disagrees with Ray, since I think we agree on many things. There are exponential phenomena at work, of course, but I feel they have robust contrarian company. I believe our human story is not best defined by a smooth curve, even at a large scale (although I try to make one exception, which I'll describe below). If there was ever a complex, chaotic phenomenon, we are it.

One question I have about Ray's exponential theory of history is whether he is stacking the deck by choosing points that fit the curves he wants to find. A technological pessimist could demonstrate a slow-down in space exploration, for instance, by starting with Sputnik, and then proceeding to the Apollo and space shuttle programs and then to the recent bad luck with Mars missions. Projecting this curve into the future could serve as a basis for arguing that space exploration will inexorably wind down. I've actually heard such reasoning put forward by antagonists of NASA's budget. I don't think it's a meaningful extrapolation, but it's essentially similar to Ray's arguments for technological hyper-optimism.

It's also possible that evolutionary processes might display local exponential features at only some scales. Evolution might be a grand scale "configuration space search" that periodically exhibits exponential growth as it finds an insulated cul-de-sac of the space that can be quickly explored. These are regions of the configuration space where the vanguard of evolutionary mutation experimentation comes upon a limited theater within which it can play out exponential games like arms races and population explosions. I suspect you can always find exponential sub processes in the history of evolution, but they don't give form to the biggest picture.

Here's one example: The dinosaurs were apparently "scaled" (maybe in both the traditional and Silicon Valley senses of the word!) by an "arms race", leading to larger and larger animals. Dinosaurs were not the only creatures at the time that relied on gigantism as a strategy. Much of the animal kingdom was becoming huger at once. I doubt the size competition proceeded at a linear rate. Arms races rarely do.

If we were dinosaurs debating this question, the Kurzweilosaurus might argue that our descendants would soon be big enough to stand on their toes and touch the moon, and not long after that become as big as the universe. (Tribute is due, as always, to Mark Twain and his erectile Mississippi.)

The race to bigness came to a halt, perhaps because of a spaceborne cataclysm. Whatever the reason for the dinosaurs' disappearance, they could not have become bigger without bounds. Furthermore, the race to bigness did not inexorably reappear, but was replaced by other races. The mere appearance of an exponential sequence does not mean that it will not encounter an impassable boundary, or become untraceable as other processes exert their influences.

I see a scattered distribution of local, bounded exponential processes in the history of life, while Ray sees these processes all focusing like a coherent laser on a point in time we will likely live to see.

Smart people can be fooled by trends. For instance, in 1966, when technological optimism was perhaps even more pronounced than it is today (when space exploration seemed to be progressing exponentially, for instance), Time Magazine presented what it thought was a sober prediction: that by the year 2000 technology would have advanced to the point that no one in America would work for a living. Automation would take the drudgery out of life. Each American citizen would receive a healthy middle class stipend in the mail every month simply for being American. A specific dollar amount ($30,000-$40,000 in 1966 dollars) was even projected for the stipend. (Thanks to GBN's Eamonn Kelly for pointing out this example.)

Time Magazine was making what it saw as a perfectly reasonable extrapolation based on legitimate data. What went wrong with Time's prediction? There's no doubt that technology continued to improve in the second half of the twentieth century, and by most interpretations it did so at an exponential clip. Productivity faithfully increased on an exponential curve as well.

Here are a few candidate failings: Public rejection of key predicted technologies such as nuclear energy; "lock in" of such things as cars and freeways, which did not scale cheaply or elegantly; population explosions; increasingly unequal distributions of wealth; entrenchment in law and habit of the work ethic; and perhaps even the beginning of the "planet of helpdesks" scenario that made a cameo appearance in the .5 manifesto. This last possibility provides an alternate way to think about the growing "knowledge economy".

Note that some of these countervailing elements are exponential in their own right. Population growth is a classic example of an exponential process that can absorb an exponential increase in available resources. This is what has happened with high yield agriculture in India.

What's really tricky is figuring out when one process will outrun its surroundings for a while in a meaningful way, as the Internet has grown at a faster rate than the population or the larger economy.

I have to admit that I want to believe in one particular large scale, smooth, ascending curve as a governor of mankind's history. Specifically, I want to believe that moral progress has been real, and continues today. This is not an easy thing to believe in. I formed my desire to believe in it at about the same time that Time Magazine made its prediction about the end of work.

I remember being a child in the 1960s, and there was a giddy feeling in the air of accelerating social change. While the language was different, the idea wasn't that different from today's digital eschatology. It felt like the world was on an exponential course of change, approaching a singularity.

The evidence was there. You could have plotted the points on a graph and seen one of Ray's curves, but no one thought to do it explicitly at the time. 1776, Civil War, Women's Suffrage, Civil Rights Struggle, Anti-war movement, Women's lib, Gay Rights, Animal rights... You could plot all these on a graph and see an exponential rate of expansion of the "Circle of Empathy" I wrote about in the .5 Manifesto. This process seemed to be destined to zoom into a singularity around 1969 or so, when I was nine years old. People were quite depressed when the singularity did not happen. Younger people today might not realize how deeply that singularity's no-show marked the lives of a vast number of Baby Boomers.

Dinosaurs did not become as large as the universe, work did not disappear in 2000 (at least not by November, 2000, as I write this), and love did not conquer all in 1969. All the trends were real, but were either interrupted, outran their own internal logics, ran out of world to expand into, or were balanced or consumed by other processes.


John Brockman, Editor and Publisher

Copyright ©2000 by Edge Foundation, Inc.