


2005

"What Do You Believe Is True Even Though You Cannot Prove It?"



JORDAN POLLACK
Computer Scientist, Brandeis University

I believe that systems of self-interested agents can make progress on their own without centralized supervision.

There is an isomorphism between evolution, economics, and education. In economics, the supervisor is a central government or super-rich investor; in evolution, it is the "intelligent designer"; and in education, it is the teacher or outside examiner. Despite an almost religious belief in laissez-faire and incentive-based behavior, economic systems are prone to winner-take-all phenomena and boom-bust cycles. They seem to require benevolent regulation, or "managed competition," to prevent the rich-get-richer dynamic from leading to monopoly, which leads inevitably to corruption and kleptocracy. In evolution, scientists reject the intelligent designer as a creationist ruse, but so far our working models of open-ended evolution haven't worked: they converge prematurely to mediocrity. In education, evidence of autodidactic learning in video games and sports is suppressed in academics by top-down curriculum frameworks and centralized high-stakes testing.

If we did have a working mechanism design that could achieve continuous progress among decentralized self-interested agents, it would settle the creationist objection as well as apply to the other fields, leading to a new renaissance.
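
To make the open problem concrete, here is a minimal toy arena in the spirit of the "numbers game" substrates used in coevolution research (the populations, constants, and scoring rule below are my own invention, not Pollack's model): agents score themselves only against each other, with no central supervisor, and the question is whether the arms race keeps climbing or stalls.

```python
import random

# Toy decentralized competition: each "agent" is a single number; fitness is
# purely relative -- how many sampled rivals from the other population it
# beats. There is no global objective and no supervisor.
POP, SAMPLE, GENS = 50, 10, 200

def step(pop, opponents):
    """One generation of self-interested selection plus mutation."""
    wins = lambda x: sum(x > r for r in random.sample(opponents, SAMPLE))
    survivors = sorted(pop, key=wins, reverse=True)[:POP // 2]
    children = [x + random.gauss(0, 1.0) for x in survivors]
    return survivors + children

hosts = [random.gauss(0, 1) for _ in range(POP)]
parasites = [random.gauss(0, 1) for _ in range(POP)]
for _ in range(GENS):
    hosts, parasites = step(hosts, parasites), step(parasites, hosts)

# Whether these numbers keep growing across runs -- or stall once relative
# comparisons stop carrying information -- is the mechanism-design question.
print(max(hosts), max(parasites))
```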


DAVID GELERNTER
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, Drawing Life

I believe (I know—but can't prove!) that scientists will soon understand the physiological basis of the "cognitive spectrum," from the bright violet of tightly focused analytic thought all the way down to the long, slow red of low-focus sleep thought—also known as "dreaming." Once they understand the spectrum, they'll know how to treat insomnia, will understand analogy-discovery (and therefore creativity) and the role of emotion in thought—and will understand that thought takes place not only when you solve a math problem but when you look out the window and let your mind wander. Computer scientists will finally understand the missing mystery ingredient that made all their efforts to simulate human thought such naive, static failures, and turned this once-thriving research field into a ghost town. (Their failures were "static" in that people think in different ways at different times—your energetic, wide-awake mind works very differently from your tired, soon-to-be-sleeping mind—but artificial intelligence programs always "thought" in the same way all the time.)

And scientists will understand why we can't force ourselves to fall asleep or to "be creative"—and how those two facts are related. They'll understand why so many people report being most creative while driving, shaving or doing some other activity that keeps the mind's foreground occupied and lets it approach open problems in a "low focus" way. In short, they'll understand the mind as an integrated dynamic process that changes over a day and a lifetime, but is characterized always by one continuous spectrum.

Here's what we know about the cognitive spectrum: every human being traces out some version of the spectrum every day. You're most capable of analysis when you are most awake. As you grow less wide-awake, your thinking grows more concrete. As you start to fall asleep, you begin to free associate. (Cognitive psychologists have known for years that you begin to dream before you fall asleep.) We know also that to grow up intellectually means to trace out the cognitive spectrum in reverse: infants and children think concretely; as they grow up, they're increasingly capable of analysis. (Not incidentally, newborns spend nearly all their time asleep.)

Here's what we suspect about the cognitive spectrum: as you move down-spectrum, as your thinking grows less analytic and more concrete and finally bottoms out in the wholly non-logical, highly concrete type of thought we call dreaming, emotions function increasingly as the "glue" of thought. I can't prove (but I believe) that "emotion coding" explains the problem of analogy. Scientists and philosophers have knocked their heads against this particular brick wall for years: how can a brick wall and a hard problem seem wholly different, yet allow us to draw an analogy between them? If we knew that, we'd understand the essence of creativity. The answer is: we are able to draw an analogy between two seemingly unlike things because the two are associated in our minds with the same emotion. And that emotion acts as a connecting bridge between them. Each memory comes with a characteristic emotion; similar emotions allow us to connect two otherwise-unlike memories. An emotion (NB!) isn't the crude, simple thing we make it out to be in speaking or writing—"happy," "sad," etc.; an emotion can be the delicate, complex, nuanced, inexpressible feeling you get on the first warm day in spring.
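
Here is a toy sketch of what "emotion coding" could mean computationally (the emotion axes and the numbers are invented for illustration; Gelernter proposes the principle, not this encoding): tag each memory with a feeling vector and let the closest emotional match between superficially unlike memories serve as the analogy bridge.

```python
from math import sqrt

def cosine(a, b):
    """Similarity of two emotion signatures."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical emotion axes: (frustration, effort, relief)
memories = {
    "brick wall":      (0.9, 0.7, 0.1),
    "hard problem":    (0.8, 0.8, 0.1),
    "warm spring day": (0.0, 0.1, 0.9),
}

# The analogy is the pair of distinct memories whose emotions align best.
pairs = [(a, b) for a in memories for b in memories if a < b]
print(max(pairs, key=lambda p: cosine(memories[p[0]], memories[p[1]])))
# -> ('brick wall', 'hard problem'): unlike objects, same felt signature
```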

And here's what we don't know: what's the physiological mechanism of the cognitive spectrum? What's the genetic basis? Within a generation, we'll have the answers.


JOHN HORGAN
Science Writer; Author, Rational Mysticism

I believe neuroscientists will never have enough understanding of the neural code, the secret language of the brain, to read people's thoughts without their consent.

The neural code is the software, algorithm, or set of rules whereby the brain transforms raw sensory data into perceptions, memories, decisions, meanings. A complete solution to the neural code could, in principle, allow scientists to monitor and manipulate minds with exquisite precision; you might, for example, probe the mind of a suspected terrorist for memories of past attacks or plans for future ones. The problem is, although all brains operate according to certain general principles, each person's neural code is to a certain extent idiosyncratic, shaped by his or her unique life history.

The neural pattern that underpins my concept of "George Bush" or "Heathrow Airport" or "surface-to-air missile" differs from yours. The only way to know how my brain encodes this kind of specific information would be to monitor its activity—ideally with thousands or even millions of implanted electrodes, which can detect the chatter of individual neurons—while I tell you as precisely as possible what I am thinking. But data you glean from studying me will be of no use for interpreting the signals of any other person. For ill or good, our minds will always remain hidden to some extent from Big Brother.
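
Horgan's point can be restated as a small thought experiment in code (the neuron count and the random-code assumption are my invented toy, not a model of real recordings): if each brain assigns its own arbitrary pattern to the same concepts, a decoder calibrated on one brain reads nothing from another.

```python
import random

CONCEPTS = ["George Bush", "Heathrow Airport", "surface-to-air missile"]

def make_brain(seed, n_neurons=32):
    """Give each concept an idiosyncratic binary firing pattern."""
    rng = random.Random(seed)
    return {c: tuple(rng.randint(0, 1) for _ in range(n_neurons)) for c in CONCEPTS}

brain_a, brain_b = make_brain(1), make_brain(2)

# A "mind-reading" decoder fitted to brain A's self-reports...
decoder = {code: concept for concept, code in brain_a.items()}

# ...is useless on brain B: same concepts, different private code.
for c in CONCEPTS:
    print(c, "->", decoder.get(brain_b[c], "unreadable"))
```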


JOHN R. SKOYLES
Neuroscience researcher; Coauthor, Up From Dragons

Here's what I believe but cannot prove: human beings, like all animals, have evolved a range of capacities for fighting disease and recovering from injury, including a variety of 'sickness behaviors'; human beings alone, however, have discovered the advantages of off-loading much of the responsibility for managing their sickness behaviors to other people; the result is that for human beings the very nature of illness has changed—human illness is now largely a social phenomenon.

This is possible because "illness" is a response. A rise in body temperature, for example, kills many bacteria and changes the membrane properties of cells so viruses cannot replicate. The pain of a broken bone or weak heart makes sure we let it heal or rest. Nature in this way supplied our bodies with a first-aid kit, but, like many medicines, its "treatments" are unpleasant. That unpleasantness, not the dysfunction it seeks to remedy, is what we call "illness".

These remedies, however, have costs as well as benefits, which often makes it difficult for the body to know whether to deploy them. A fever might fight an infection, but if the body lacks sufficient energy stores, the fever might kill. The body therefore must decide whether the gain of clearing the infection merits the risk. Complicating that decision is that the body is blind, for example, to whether it faces a mild or a life-threatening virus. The body thus deploys its treatments in a precautionary manner. If only one in ten fevers actually clears an infection that would otherwise kill, it makes sense to tolerate the cost of the other nine. Most of the body's capacities for fighting disease and repairing injury are deployed in this precautionary way. We feel pain in a broken limb, so we treat it overprotectively—on nine occasions out of ten we could get by with less protective pain, but on the tenth it stops us from causing further injury. But precautionary deployment is costly. Evolution therefore has put the evaluation of such deployment under the control of the brain, in an attempt to keep its use to a minimum.
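
The precautionary logic is a plain expected-value inequality (the symbols below are mine; the one-in-ten figure is Skoyles's own example): deploy the costly defense whenever its chance of averting disaster, times the size of that disaster, exceeds the defense's own cost.

```latex
% Deploy a precautionary defense (e.g., a fever) when  p * B > C
% p = chance the defense averts the harm (1/10 in the example)
% B = harm averted, C = cost of mounting the defense once
\[
p\,B > C
\quad\Longleftrightarrow\quad
B > \frac{C}{p} = 10\,C \;\;\text{for } p = \tfrac{1}{10},
\]
% so nine "wasted" fevers are a fair price whenever the tenth averts
% a harm more than ten times the cost of a single fever.
```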

But the brain on its own often lacks the experience to judge our condition. Fortunately, other people can, particularly those who have studied health and illness.

Human evolution therefore changed illness by offloading decisions about deployment, whenever possible, onto professionals. People who make themselves experienced in disease and injury, after all, have the background knowledge to know our bodies better than we do ourselves. Healing professionals—healers, shamans, witch doctors and medics—exist in all human cultures. Of course, such professionals were seen by their patients as offering real treatments—and a few did help, such as by advising rest, good food and some medicinal herbs. But most of what they did was ineffective. Doctors indeed had to wait until 1909 and Paul Ehrlich's discovery of Salvarsan for treating syphilis before they had a really effective treatment for a major disease. Nonetheless, earlier doctors and healers were considered by themselves and their patients to be in possession of very powerful cures.

Why? The answer, I believe, is that their ineffective rituals and potions actually worked. Evolution prepared us to offload control of our abilities to fight disease and heal injuries to those who know more than we do. The rituals and quackery of healers might not have worked in themselves, but they certainly made a patient feel they were in the hands of an expert. That gave a healer great power over their patient. As noted, many of the body's own "treatments" are used on a precautionary basis, so they can be stopped without harm. A healer could do this by applying an impressive "cure" that persuaded the body that its own "treatments" were no longer needed. The body would trust its healer, halt its own efforts, and so end the "illness". The patient as a result would feel much better, if not cured. Human evolution therefore made doctoring more than just a science and a question of prescribing the right treatment. It also made it an art by which a doctor persuades the patient's body to offload its decision making onto them.


THOMAS METZINGER
Johannes Gutenberg-Universität Mainz; Author, Being No One

I believe, but cannot prove, that a First Breakthrough on Consciousness is actually around the corner. "Actually around the corner" means: less than 50 years away. My intuition is that, roughly, all we need for this first breakthrough are four convincing stories.

The first story will be about global integration, about the dynamical self-organization of long-range binding operations in the human brain. It will probably involve something like synchrony in multiple frequency bands, and will let us understand how a unified model of the world can emerge in our own heads.

The second story will be about "transparency": Why is it that we are unable to consciously experience most of the images our brain generates as images? The answer to this question will give us a real world. The transparency-tale has to do with not being able to see earlier processing stages and becoming a naive realist.

The third story will focus on the Now, the emergence of a psychological moment—on a deeper understanding of what William James called the "specious present". Experts on short-term memory and neural-network modelers will tell this story for us. As it unfolds, it will explain the emergence of a subjective present and let us understand how conscious experience, in its simplest and most essential form, is the presence of a world.

Interestingly, today almost everybody in the consciousness community already agrees on some version of the fourth story: Consciousness is directly linked to attentional processing, more precisely, to a hidden mechanism constantly holding information available for attention. The subjective presence of a world is a clever strategy of making integrated information available for attention.

I believe, but cannot prove, that this will allow us to find the global neural correlate for consciousness. However, being a philosopher, I want much more than that—I am also interested in precise concepts. What I will be waiting for is the young mathematician who then comes along and suddenly allows us to see how all of these four stories were actually only one: the genius who gives us a formal model describing the information flow in this neural correlate, and in just the right way. She will harvest the fruits of generations of researchers before her, and this will be the First Breakthrough on Consciousness.

Then three things will happen.

1. The Second Breakthrough on Consciousness will take much longer. Things will get messy and complicated. The philosophy and neuroscience of consciousness will get bogged down in diabolic details and ugly technical problems. Public attention will soon shift away from the problem of consciousness per se. Instead, new generations of young researchers will now focus on the nature of self and social cognition.

2. The overall development will have an unexpectedly strong cultural impact. People will not want to face their own mortality. There will be fundamentalist and anti-rational counter-movements against the scientific image of man. At the same time, crude new ideologies propagating vulgar forms of materialism and primitive forms of hedonism will spring up. Scientists will realize that one cannot reductively explain the human mind and then simply look the other way, leaving the consequences for someone else to deal with.

3. We will be able to influence consciousness in ways we have never dreamt of. There will be a new form of technology—Consciousness Technology—exclusively focusing on how to manipulate the neural correlate of consciousness in ever more fine-grained, efficient, and risk-free ways. People will realize that we need some sort of applied ethics for this new type of technology. And hopefully we will all together start to tell a new story—a story about how to live with these brains and about what a good state of consciousness actually is.


JEAN PAUL SCHMETZ
Economist; Managing Director of CyberLab Interactive Productions GmbH (Burda Media Group).

When considering this question, one has to remember the basis of the scientific method: formulating hypotheses that can be disproved. Those hypotheses that are not disproved are taken to be true until they are. Since it is more glamorous for a scientist to formulate hypotheses than it is to spend years disproving other scientists' existing ones, and since it is unlikely that anyone will spend much time and energy trying to disprove his or her own statements, our body of scientific knowledge is surely full of statements we believe to be true that will eventually be proved false.

So I turn the question around: what scientific ideas that have not been disproved do you believe are false?

In my field (theoretical economics), I believe that most ideas taught in Economics 101 will eventually be proved false. Most of them would already have been declared false in any harder science, but for lack of better hypotheses they are still widely accepted and used in economics and in general commentary. Eventually, someone will come up with a different class of hypotheses that explains (and predicts) economic reality in a way that renders most existing economic beliefs false.


RICHARD DAWKINS
Evolutionary Biologist, Oxford University; Author, The Ancestor's Tale


I believe that all life, all intelligence, all creativity and all 'design' anywhere in the universe is the direct or indirect product of Darwinian natural selection. It follows that design comes late in the universe, after a period of Darwinian evolution. Design cannot precede evolution and therefore cannot underlie the universe.


ALEX (SANDY) PENTLAND
Computer Scientist, MIT Media Laboratory

Tribal Mind.

What would it be like to be part of a distributed intelligence but still retain an individual consciousness? Well, for starters, you might expect to see the collective mind 'take over' from time to time, directly guiding the individual minds. In humans, the behavior of angry mobs and frightened crowds seems to qualify as an example of a 'collective mind' in action, with non-linguistic channels of communication usurping the individual capacity for rational behavior.

But as powerful as this sort of group compulsion can be, it is usually regarded as simply a failure of individual rationality, a primitive behavioral safety net for the tribe in times of great stress. Surely this tribal mind doesn't operate in normal day-to-day behavior—or does it? If human behavior were in substantial part due to a collective tribal mind, you would expect non-linguistic social signaling—the type that drives mob behavior—to be predictive of even the most rational and important human interactions. Analogous to the waggle dance of the honeybee, there ought to be non-linguistic signals that accurately predict important behavioral outcomes.

And that is exactly what I find. Together with my research group, I have built a computer system that objectively measures a set of non-linguistic social signals, such as engagement, mirroring, activity, and stress, by looking at 'tone of voice' over one-minute time periods. Although people are largely unconscious of this type of behavior, other researchers (Jaffe, Chartrand and Bargh, France, Kagan) have shown that similar measurements are predictive of infant language development, judgments of empathy, depression, and even personality development in children. Working with colleagues, we have found that we can use these measurements of social signaling to automatically predict a wide range of important behavioral outcomes—objective, instrumental, and subjective—with high accuracy, accounting for between 30% and 80% of the total outcome variance.
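
To show operationally what "accounting for between 30% and 80% of the total outcome variance" means, here is a minimal sketch (the data, weights, and noise level are simulated by me; Pentland's real pipeline is not described in this essay): regress an interaction outcome on the four signals and read off R².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Columns: engagement, mirroring, activity, stress (all simulated here)
X = rng.random((n, 4))
true_weights = np.array([1.5, 1.0, 0.5, -1.2])      # invented for the demo
y = X @ true_weights + rng.normal(0, 0.4, size=n)   # outcome, e.g. salary

# Ordinary least squares with an intercept, then variance explained (R^2)
Xb = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
r_squared = 1 - np.var(y - Xb @ beta) / np.var(y)
print(f"R^2 = {r_squared:.2f}")  # lands inside the 0.3-0.8 band reported
```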

Examples of objective and instrumental behaviors where we can accurately predict the outcome include salary negotiations, dating decisions, and role in the social network. Examples of subjective predictions include hiring preferences, empathy perceptions, and interest ratings. Even for lengthy interactions, accurate predictions can be made by observing only the initial few minutes, even though the linguistic content of these 'thin slices' of the behavior seems to have little predictive power.

I find all of this astounding. We are examining some of the most important interactions a human has: finding a mate, getting a job, negotiating a salary, finding your place in your social network. These are activities for which we prepare intellectually and strategically for decades. And yet the largely unconscious social signaling that occurs at the start of the interaction appears to be more predictive than either the contextual facts (is he attractive? is she experienced?) or the linguistic structure (e.g., strategy chosen, arguments employed, etc.).

So what is going on here? One might speculate that the social signaling we are measuring evolved as a method of establishing tribal hierarchy and cohesion, analogous to Dunbar's view that language evolved as grooming behavior. On this view, the tribal mind would function as an unconscious collective discussion about relationships and resources, risks and rewards, and would interact with the conscious individual minds by filtering ideas by their value relative to the tribe. Our measurements tap into that discussion and predict outcomes by way of social regularities. For instance, in a salary negotiation it is important for the lower-status individual to establish that they are a 'team player' by being empathetic, while in a potential dating situation the key variable is the female's level of interest. In our data there seem to be patterns of signaling that reliably lead to these desired states.

One question to ask about this social signaling is whether or not it is an independent channel of communication, e.g., is it causal or do the signals arise from the linguistic structure? We don't have the full answer to that yet, but we do know that similar measurements predict infant language and personality development, that adults can change their signaling by adopting different roles or identities within a conversation, and in our studies the linguistic and factual content seems uncorrelated with the pattern or intensity of social signaling. So even if social signaling turns out to be only an adjunct to normal linguistic structure, it is a very interesting addition: it is a little like having speech annotated with speaker intent!

So here is what I suspect but cannot prove: a very large proportion of our behavior is determined by largely unconscious social signaling, which sets the context, risk, and reward structure within which traditional cognitive processes proceed. This conjecture resonates with Pinker's view about brain complexity, and with Kosslyn's thoughts about social prosthetic systems. It also provides a concrete mechanism for the well-known processes of group polarization ('the risky shift'), groupthink, and the sometimes irrational behaviors of larger groups. In short, it may be useful to start thinking of humans as having a collective, tribal mind in addition to their personal mind.


JARON LANIER
Computer Scientist and Musician

My career has been guided by just the sort of unproven guess this year's question seeks.

My belief is that the potential for expanded communication between people far exceeds the potential both of language as we think of it (the stuff we say, read and write) and of all the other communication forms we already use.

Suppose for a moment that children in the future will grow up with an easy and intimate virtual reality technology and that their use of it will become focused on invention and design instead of the consumption of pre-created holo-video games, surround movies, or other content.

Maybe these future children will play virtual musical instrument-like things that cause simulated trees and spiders and seasons and odors and ecologies to spring up just as manipulating a pencil causes words to appear on a page. If people grew up with a virtuosic ability to improvise the contents of a shared virtual world, a new sort of communication might also appear.

It's barely possible to imagine what a "reality conversation" would be like. Each person would be changing the shared world at the speed of language, all at once, an image that suggests chaos, but often there would be a coherence, which would indicate meaning. A kid becomes a monster, eats his little brother, who becomes a vitriolic turd, and so on.

This is what I've called "Post-symbolic communication," though really it won't exist in isolation from, or in opposition to, symbolic communication techniques. It will be something different, however, and will expand what people can mean to each other.

Post-symbolic communication will be like a shared, waking state, intentional dream. Instead of the word "house", you will express a particular house and be able to walk into it, and instead of the category "house" you will peer into an apparently small bucket that is big enough inside to hold all the universe's houses so you can assess what they have in common directly. It will be a fluid form of experiential concreteness providing similar but divergent expressive power to that of abstraction.

Why care? The acquisition of post-symbolic communication will be a centuries-long adventure, an expansion of meaning, something beautiful, and a way to seek cool, advanced technology that focuses on connection instead of mere power. It will be a form of beauty that also enhances survivability: since the drive for "cool tech" is unstoppable, the invention of provocative cool tech that is lovely enough to seduce the attention of young smart men away from arms races is a prerequisite to the survival of the species.

Some of the examples above (houses, spiders) are of people improvising the external environment, but post-symbolic communication might typically look a lot more like people morphing themselves into varied forms. Experiments have already been conducted with kids wearing special body suits and goggles "turning into" triangles to learn trigonometry, or molecules to learn chemistry.

It's not only the narcissism of the young (and not so young) human mind or the primality of the control of one's own body that makes self-transformation compelling. Evolution, as generous as she ended up being with us humans, was stingy with potential means of expression. Compare us with the mimic octopus which can morph into all sorts of creatures and objects, and can animate its skin. An advanced civilization of cephalopods might develop words as we know them, but probably only as an adjunct to a natural form of post-symbolic communication.

We humans can control precious little of the world with enough agility to keep up with our thoughts and feelings. The fingers and the tongue are about it. Symbols as we know them in language are a trick, or what programmers call a "hack," that expands the power of little appendage wiggles to refer to all that we can't instantly become or create. Another belief: the tongue that can speak could also someday control fantastic forms beyond our current imaginings. (Some early experiments along these lines have been done, using ultrasound sensing through the cheek, and the results are at least not terrible.)

While we're confessing unprovable beliefs, here's another one: The study of the genetic components of pecking order behavior, group belief cues, and clan identification leading to inter-clan hostility will be the core of psychology and sociology for the next few generations, and it will turn out we can't turn off or control these elements of human character without losing other qualities we love, like creativity. If this dark guess is correct, then the means to survival is to create societies with a huge variety of paths to success and a multitude of overlapping, intertwined clans and pecking orders, so that everyone can be a winner from equally valid individual perspectives. When the American experiment has worked best, it has approximated this level of variety. The virtual worlds of post-symbolic communication can provide the highest level of variety to satisfy the dangerous psychic inheritance I'm guessing we suffer as a species.

Implicit in the futures I am imagining here is a solution to the software crisis. If children are breathing out fully realized creatures and skies just as they form sentences today, there must be software present that isn't crashing and is marvelously flexible and responsive, yet free of limiting preconceptions, which would revive symbolism. Can such software exist? Ah! Another belief! My guess is it can exist, but not anytime soon. The only two good examples of software we have at this time are evolution and the brain, and they both are quite good, so why not be encouraged?

The beliefs I chose for this response are not fundamentally untestable. They might be tested someday, perhaps in a few centuries. It's not impossible that medical progress could keep me alive long enough to participate in testing them, so strictly speaking I can't guarantee that I can't ever prove these beliefs to be true.

There are not too many potential beliefs that could really never be tested by anyone ever.

Consciousness, meaning, truth, and free will and their endless permutations just about complete the list. The reason philosophy is so much harder to talk about than science is that there's so little to talk about. It quickly becomes almost impossible to distinguish repetition from resonance.

Proposals like post-symbolic communication, however, frame questions about meaning that are small enough to be fresh and useful. Am I right that there can be meaning outside of words, or are the word-as-center-of-meaning folks correct?


JOHN BARROW
Cosmologist, Cambridge University; Author, The Infinite Book


That our universe is infinite in size, finite in age, and just one among many. Not only can I not prove it but I believe that these statements will prove to be unprovable in principle and we will eventually hold that principle to be self-evident.


RAY KURZWEIL
Inventor and Technologist; Author, The Age of Spiritual Machines

We will find ways to circumvent the speed of light as a limit on the communication of information.

We are expanding our computers and communication systems both inwardly and outwardly. Our chips use ever smaller feature sizes, while at the same time we deploy greater amounts of matter and energy for computation and communication (for example, we're making a larger number of chips each year). In one to two decades, we will progress from two-dimensional chips to three-dimensional self-organizing circuits built out of molecules. Ultimately, we will approach the limits of matter and energy to support computation and communication.

As we approach an asymptote in our ability to expand inwardly (that is, using finer features), computation will continue to expand outwardly, using readily available materials on Earth such as carbon. But we will eventually reach the limits of the resources available on our planet, and will expand outwardly to the rest of the solar system and beyond.

So how quickly will we be able to do this? We could send tiny self-replicating robots at close to the speed of light along with electromagnetic transmissions containing the needed software. These nanobots could then colonize far-away planets.

At this point, we run up against a seemingly intractable limit: the speed of light. Although a billion feet per second may seem fast, the Universe is spread out over such vast distances that this appears to represent a fundamental limit on how quickly an advanced civilization (such as we hope to become) can spread its influence.

There are suggestions, however, that this limit is not as immutable as it may appear. Physicists Steve Lamoreaux and Justin Torgerson of the Los Alamos National Laboratory have analyzed data from an old natural nuclear reactor that two billion years ago produced a fission reaction lasting several hundred thousand years in what is now West Africa. Analyzing radioactive isotopes left over from the reactor and comparing them to isotopes from similar nuclear reactions today, they determined that the physics constant "alpha" (also called the fine-structure constant), which determines the strength of the electromagnetic force, apparently has changed over the past two billion years. The speed of light is inversely proportional to alpha, and both have been considered unchangeable constants. Alpha appears to have decreased by 4.5 parts in 10^8. If confirmed, this would imply that the speed of light has increased. There are other studies with similar suggestions, and there is a tabletop experiment now under way at Cambridge University to test the ability to engineer a small change in the speed of light.
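
For scale, the arithmetic connecting the two claims is simple (this assumes, with the argument, that the electron charge, Planck's constant, and the vacuum permittivity stay fixed, so the whole change in alpha is carried by c):

```latex
\[
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c}
\;\Longrightarrow\;
\frac{\Delta c}{c} \approx -\frac{\Delta\alpha}{\alpha} = +4.5\times10^{-8},
\qquad
\Delta c \approx 4.5\times10^{-8}\times\bigl(3\times10^{8}\,\mathrm{m/s}\bigr)
\approx 13\,\mathrm{m/s}.
\]
```

A shift of a dozen meters per second over two billion years is tiny, which is why the engineering question is whether whatever conditions drove it can be amplified.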

Of course, these results will need to be carefully verified. If true, this may hold great importance for the future of our civilization. If the speed of light has increased, it has presumably done so not just because of the passage of time but because certain conditions have changed. This is the type of scientific insight that technologists can exploit. It is the nature of engineering to take a natural, often subtle, scientific effect and control it with a view toward greatly leveraging and magnifying it. If the speed of light has changed due to changing circumstances, that cracks open the door just enough for the capabilities of our future intelligence and technology to swing the door wide open. That is the nature of engineering. As one of many examples, consider how we have focused and amplified the subtle properties of Bernoulli's principle (that air rushing over a curved surface exerts slightly lower pressure than air passing over a flat surface) to create the whole world of aviation.

If it turns out that we are unable to actually change the speed of light, we may nonetheless circumvent it by using wormholes (which can be thought of as folds of the universe in dimensions beyond the three visible ones) as short cuts to far away places.

In 1935, Einstein and physicist Nathan Rosen introduced "Einstein-Rosen bridges" as a way of describing electrons and other particles in terms of tiny space-time tunnels. In 1957, physicist John Wheeler described these tunnels as "wormholes," introducing the term for the first time. His analysis of wormholes showed them to be fully consistent with the theory of general relativity, which describes space as essentially curved in another dimension.

In 1988, California Institute of Technology physicists Michael Morris, Kip Thorne, and Ulvi Yurtsever described in some detail how such wormholes could be engineered. Owing to quantum fluctuations, so-called "empty" space is continually generating tiny wormholes the size of subatomic particles. By adding energy and satisfying other requirements of both quantum physics and general relativity (two fields that have been notoriously difficult to integrate), these wormholes could in theory be expanded to allow objects larger than subatomic particles to travel through them. Sending humans through would not be impossible, but it would be extremely difficult. However, as I pointed out above, we really only need to send nanobots plus information, which could pass through wormholes measured in microns rather than meters. Anders Sandberg estimates that a one-nanometer wormhole could transmit a formidable 10^69 bits per second.

Thorne and his Ph.D. students, Morris and Yurtsever, also describe a method, consistent with general relativity and quantum mechanics, that could establish wormholes between Earth and far-away locations quickly, even if the destination were many light-years away.

Physicist David Hochberg and Vanderbilt University's Thomas Kephart point out that shortly after the Big Bang, gravity was strong enough to have provided the energy required to spontaneously create massive numbers of self-stabilizing wormholes. A significant portion of these wormholes are likely to still be around, and may be pervasive, providing a vast network of corridors that reach far and wide throughout the Universe. It might be easier to discover and use these natural wormholes than to create new ones.

Would anyone be shocked if some subtle ways of getting around the speed of light were discovered? The point is that if there are even subtle ways around this limit, the technological powers that our future human-machine civilization will achieve will discover these means and leverage them to great effect.


STUART KAUFFMAN
Biologist, Santa Fe Institute; Author, Investigations

Is there a fourth law of thermodynamics, or some cousin of it, concerning self-constructing non-equilibrium systems such as biospheres anywhere in the cosmos?

I like to think there may be such a law.

Consider this: the number of possible proteins 200 amino acids long is 20 raised to the 200th power, or about 10 raised to the 260th power. Now, the number of particles in the known universe is about 10 to the 80th power. Suppose that, on a microsecond timescale, the universe were doing nothing other than producing proteins of length 200. It turns out that it would take vastly many repeats of the history of the universe to create all possible proteins of that length. This means that, for entities of complexity above atoms—such as modestly complex organic molecules and proteins, let alone species, automobiles and operas—the universe is on a unique trajectory (ignoring quantum mechanics for the moment). That is, the universe at modest levels of complexity and above is vastly non-ergodic.
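
The combinatorics is easy to check with exact integer arithmetic (the one-protein-per-particle-per-microsecond rate and the rough 10^17-second age of the universe are the usual back-of-envelope figures for this argument, not Kauffman's exact numbers):

```python
import math

proteins = 20 ** 200   # all proteins 200 amino acids long
particles = 10 ** 80   # rough particle count of the known universe

print(round(math.log10(proteins), 1))        # 260.2 -> about 10^260 proteins

# Even if every particle assembled one protein per microsecond for the
# age of the universe (~10^17 seconds), the sample would be minuscule:
trials = particles * 10**6 * 10**17          # 10^103 proteins ever tried
print(round(math.log10(proteins // trials))) # ~157 orders of magnitude short
```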

Now conceive of the "adjacent possible," the set of entities that are one "step" away from what exists now. For chemical reaction systems, the adjacent possible from a set of compounds already existing (called the "actual") is just the set of novel compounds that can be produced by single chemical reactions among the initial "actual" set. The biosphere has been expanding into its molecular adjacent possible for roughly four billion years.
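
A toy version of the definition (the "compounds" and the single "reaction" below are stand-ins I invented, not real chemistry): start from an actual set, generate everything one reaction step away, and watch the reachable set balloon.

```python
def adjacent_possible(actual):
    """Novel 'compounds' producible by one reaction (here: concatenation)."""
    return {a + b for a in actual for b in actual} - actual

# The frontier is recomputed from whatever exists now, so each expansion
# enlarges the next adjacent possible -- the non-ergodic runaway Kauffman
# describes.
actual = {"A", "B"}
for step in range(1, 5):
    actual |= adjacent_possible(actual)
    print(step, len(actual))
```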

Before life, there were perhaps a few hundred species of organic molecule on the earth. Now there are perhaps a trillion or more. We have no law governing this expansion into the adjacent possible in this non-ergodic process. My hoped-for law is that biospheres everywhere in the universe expand in such a way that they do so as fast as is possible while maintaining the rough diversity of what already exists. Otherwise stated, the diversity of things that can happen next increases, on average, as fast as it can.

