
Professor of Mathematical Physics, Tulane University; Author, The Physics of Christianity


I'm 62, so I'll have to limit my projections to what I expect to happen in the next two to three decades. I believe these will be the most interesting times in human history. (Remember the old Chinese curse about "interesting times"?) Humanity will see, before I die, the "Singularity": the day when we finally create a human-level artificial intelligence. Seeing that day requires considering the physics advances that will be needed to build a computer capable of running a strong AI program.

Although by both my calculations and those of Ray Kurzweil (originator of the "Singularity" idea), the 10-teraflop speed of today's supercomputers provides more than enough computing power to run a minimal AI program, we are missing some crucial idea in that program. Conway's Game of Life has been proven to be computationally universal, capable of expressing a strong AI program, and it should therefore be capable, if allowed to run long enough, of bootstrapping itself into the complexity of human-level intelligence. But Game of Life programs do not do so. They increase their complexity just so far, and then stop. Why, we don't know. As I said, we are missing something, and what we are missing is the key to human creativity.
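The system Tipler invokes is simple enough to run for oneself. Below is a minimal sketch of Conway's Game of Life (a standard implementation, not from the essay), tracking live cells as a set of coordinates; the glider, the simplest self-propagating pattern, copies itself one cell diagonally every four generations:

```python
# Minimal Conway's Game of Life on an unbounded grid, with live cells
# stored as a set of (x, y) coordinates.

def neighbors(cell):
    """The eight cells surrounding a given cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation: a live cell survives with 2 or 3 live neighbors;
    a dead cell with exactly 3 live neighbors is born."""
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

# The classic glider pattern (y increases downward).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the glider reappears, shifted by (1, 1).
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Despite rules this short, the Game of Life can emulate any computer; Tipler's point is that, left to themselves, such systems nonetheless fail to grow ever more complex.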

But an AI program can be generated by brute force. We can map an entire human personality, together with a simulated environment, into a program and run it. Such a program would be roughly equivalent to the one being run in the movie The Matrix, and it would require enormous computing power, power far beyond today's supercomputers. The power required can only be provided by a quantum computer.

A quantum computer works by parallel processing across the multiverse. That is, part of the computation is done in this universe by you and your part of the quantum computer, and the other parts of the computation are done by your analogues with their parts of the computer in the other universes of the multiverse. The full potential of the quantum computer has not been realized because the existence of the multiverse has not yet been accepted, even by workers in the field of quantum computation, in spite of the fact that the multiverse's existence is required by quantum mechanics, and by classical mechanics in its most powerful form, Hamilton-Jacobi theory.

Other new technologies become possible via action across the multiverse. For example, the Standard Model of particle physics, the theory of all forces and particles except gravity, a theory confirmed by many experiments done over the past forty years, tells us that it is possible to transcend the laws of conservation of baryon number (number of protons plus neutrons) and conservation of lepton number (number of electrons plus neutrinos) and thereby convert matter into energy in a process far more efficient than nuclear fission or fusion. According to the Standard Model, the proton and electron making up a hydrogen atom can be combined to yield pure energy in the form of photons, or neutrino-anti-neutrino pairs. If the former, then we would have a mechanism that would allow us to convert garbage into energy, a device Doc in the movie Back to the Future obtained from his trip to the future. If the latter, then the directed neutrino-anti-neutrino beam would provide the ultimate rocket: the exhaust would be completely invisible to those nearby, just like the propulsion mechanism that Doc also obtained from the future. The movie writers got it right: Doc's future devices are indeed in our future.
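The efficiency gap Tipler points to can be checked with E = mc². The back-of-the-envelope sketch below uses standard textbook figures (not from the essay): fission liberates roughly 0.1% of the rest mass of its fuel and hydrogen fusion roughly 0.7%, while complete matter-to-energy conversion liberates all of it:

```python
# Energy from one kilogram of matter, by conversion mechanism.
# The 0.1% and 0.7% efficiencies are standard textbook values.

c = 2.998e8          # speed of light, m/s
m = 1.0              # mass, kg

e_total   = m * c**2          # complete conversion: E = m c^2, in joules
e_fission = 0.001 * e_total   # ~0.1% of rest mass released by fission
e_fusion  = 0.007 * e_total   # ~0.7% released by hydrogen fusion

print(f"complete conversion: {e_total:.2e} J")   # ~9e16 J, roughly 21 megatons
print(f"fusion:              {e_fusion:.2e} J")
print(f"fission:             {e_fission:.2e} J")
```

Complete conversion is thus a hundred to a thousand times more energetic, per kilogram, than the nuclear processes we already use.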

A quantum computer running an AI program, the direct conversion of matter into energy, and the ultimate rocket that would allow the AIs and the human downloads to begin interstellar travel at near light speed all depend on the same physics, and should appear at the same time in the future.

Provided we have the courage to develop the technology allowed by the known laws of physics. I have grave doubts that we will.

In order to have advances in physics and engineering, one must first have physicists and engineers. The number of students majoring in these subjects has dropped enormously in the quarter century that I have been a professor. Worse, the quality of the few students we do have has dropped precipitously. The next decade will see the retirement of Stephen Hawking, and others less well-known but of similar ability, but I know of no one of remotely equal creativity to replace them. Small wonder, given that the starting salary of a Wall Street lawyer fresh out of school is currently three times my own physicist's salary. As a result, most American engineers and physicists are now foreign born.

But can foreign countries continue to supply engineers and physicists? That is, will engineers and physicists be available in any country? The birth rate of the vast majority of the developed nations has been far below replacement level for a decade and more. This birth dearth also holds for China, due to their one-child policy, and remarkably is developing even in the Muslim and southern nations. We may not have enough people in the next twenty years to sustain the technology we already have, to say nothing of developing the technology allowed by the known laws of physics that I describe above.

The great Galileo scholar Giorgio de Santillana, who taught me history of science when I was an undergraduate at MIT in the late 1960's, wrote that Greek scientific development ended in the century or so before the Christian era because of a birth dearth and a simultaneous bureaucratization of intellectual inquiry. I fear we are seeing a repeat of this historical catastrophe today.

However, I remain cautiously optimistic that we will develop the ultimate technology described above, and transfer it with faltering hands to our ultimate successors, the AIs and the human downloads, who will thus be enabled to expand outward into interstellar space, engulf the universe, and live forever.

Computational Neuroscientist, Salk Institute, Coauthor, The Computational Brain


Scientific ideas change when new instruments are developed that detect something new about nature. Electron microscopes, radio telescopes, and patch recordings from single ion channels have all led to game-changing discoveries.

We are in the midst of a technological revolution in computing that has been unfolding since 1950 and is having a profound impact on all areas of science and technology. As computing power has doubled every 18 months, following Moore's Law, unprecedented levels of data collection, storage, and analysis have revolutionized many areas of science.
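The compounding behind that claim is simple arithmetic: a doubling every 18 months works out to roughly a hundredfold gain per decade. A quick check:

```python
# Moore's-law compounding: doubling every 18 months.
years = 10
doublings = years * 12 / 18          # about 6.7 doublings in a decade
factor = 2 ** doublings              # ~100x growth
print(f"{doublings:.1f} doublings -> ~{factor:.0f}x in {years} years")
```

Over the nearly six decades since 1950 that Sejnowski cites, the same rule compounds to a factor of well over a trillion.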

For example, optical microscopy is undergoing a renaissance as computers have made it possible to localize single molecules with nanometer precision and image the extraordinarily complex molecular organization inside cells. This has become possible because computers allow beams to be formed and photons to be collected over long stretches of time, perfectly preserved and processed into synthetic pictures. High-resolution movies are revealing the dynamics of macromolecular structures and molecular interactions for the first time.

In trying to understand brain function we have until recently relied on microelectrode technology that limited us to recording from one neuron at a time. Coupled with advances in molecular labels and reporters, new two-photon microscopes guided by computers will soon make it possible to image the electrical activity and chemical reactions occurring inside millions of neurons simultaneously. This will realize Sherrington's dream of seeing brain activity as an "enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns."

By 2015 computer power will begin to approach the neural computation that occurs in brains. This does not mean we will be able to understand it, only that we can begin to approach the complexity of a brain on its own terms. Coupled with advances in large-scale recordings from neurons we should by then be in a position to crack many of the brain's mysteries, such as how we learn and where memories reside. However, I would not expect a computer model of human level intelligence to emerge from these studies without other breakthroughs that cannot be predicted.

Computers have become the new microscopes, allowing us to see behind the curtains. Without computers none of this would be possible, at least not in my lifetime.

Research Professor, Department of Anthropology, Rutgers University; Author, Why We Love


"Mind is primarily a verb," wrote philosopher John Dewey. Every time we do or think or feel anything the brain is doing something. But what? And can we use what scientists are learning about these neural gymnastics to get what we want? I think we can and we will, in my lifetime, due to some mind-bending developments in contemporary neuroscience. Brain scanning; genetic studies; antidepressant drug use; estrogen replacement therapy; testosterone patches; L-dopa and newer drugs to prevent or retard brain diseases; recreational drugs; sex change patients; gene doping by athletes: all these and other developments are giving us data on how the mind works—and opening new avenues to use brain chemistry to change who we are and what we want. As the field of epigenetics takes on speed, we are also beginning to understand how the environment affects brain systems, even turns genes on and off—further enabling us (and others) to adjust brain chemistry, affecting who we are, how we feel and what we think we need.

But is this new? Our forebears have been manipulating brain chemistry for millions of years. Take "hooking up," the current version of the "one night stand," one of humankind's oldest forms of chemical persuasion. During sex, stimulation of the genitals escalates activity in the dopamine system, the neurotransmitter network that my colleagues and I have found to be associated with feelings of romantic love. And with orgasm you experience a flood of oxytocin and vasopressin, neurochemicals associated with feelings of attachment. Casual sex isn't always casual. And I suspect our ancestors seduced their peers to (unconsciously) alter their brain chemistry, thereby nudging "him" or "her" toward feelings of passion and/or attachment. Indeed, this chemical persuasion works. In a recent study of 507 college students, anthropologist Justin Garcia found that 50% of women and 52% of men hopped into bed with an acquaintance or a stranger in hopes of starting a longer relationship. And about one third of these hook ups turned into romance.

In 1957 Vance Packard wrote The Hidden Persuaders to unmask the subtle psychological techniques that advertisers use to manipulate people's feelings and induce them to buy. We have long been using psychology to persuade others' minds. But now we are learning why our psychological strategies work. Holding hands, for example, generates feelings of trust, in part, because it triggers oxytocin activity. As you see another person laugh, you naturally mimic him or her, moving muscles in your face that trigger nerves to alter your neurochemistry so that you feel happy too. That's one reason why we feel good when we are around happy people. "Mirror neurons" also enable us to feel what another feels. Novelty drives up dopamine activity to make you more susceptible to romantic love. The placebo effect is real. And wet kissing transfers testosterone in the saliva, helping to stimulate lust.

The black box of our humanity, the brain, is inching open. And as we peer inside for the first time in human history, you and I will hold the biological codes that direct our deepest wants and feelings. We have begun to use these codes too. I, for example, often tell people that if they want to ignite or sustain feelings of romantic love in a relationship, they should do novel and exciting things together—to trigger or sustain dopamine activity. Some 100 million prescriptions for antidepressants are written annually in the United States. And every day many of us alter who we are in other chemical ways. As scientists learn more about the chemistry of trust, empathy, forgiveness, generosity, disgust, calm, love, belief, wanting and myriad other complex emotions, motivations and cognitions, even more of us will begin to use this new arsenal of weapons to manipulate ourselves and others. And as more people around the world use these hidden persuaders, one by one we may subtly change everything.

Assistant Professor of Psychology, Neuroscience, and Symbolic Systems, Stanford University


There is an old joke about a physicist, a biologist, and an epistemologist being asked to name the most impressive invention or scientific advance of modern times. The physicist does not hesitate: "It is quantum theory. It has completely transformed the way we understand matter." The biologist says, "No. It is the discovery of DNA—it has completely transformed the way we understand life." The epistemologist looks at them both and says, "I think it's the thermos." The thermos? Why on earth the thermos? "Well," the epistemologist explains patiently, "if you put something cold in it, it will keep it cold. And if you put something hot in it, it will keep it hot." "Yeah, so what?" everyone asks. "Aha!" the epistemologist says, raising a triumphant finger. "How does it know?"

With this in mind, it may seem foolhardy to claim that epistemology will change the world. And yet, that is precisely what I intend to do here. I think that knowledge about how we know will change everything. By understanding the mechanisms of how humans create knowledge, we will be able to break through normal human cognitive limitations and think the previously unthinkable.

The reason the change is happening now is that modern Cognitive Science has taken the role of empirical epistemology. The empirical approach to the origins of knowledge is bringing about breathtaking breakthroughs and turning what once were age-old philosophical mysteries into mere scientific puzzles.

Let me give you an example. One of the great mysteries of the mind is how we are able to think about things we can never see or touch. How do we come to represent and reason about abstract domains like time, justice, or ideas? All of our experience with the world is physical, accomplished through sensory perception and motor action. Our eyes collect photons reflected by surfaces in the world, our ears receive air-vibrations created by physical objects, our noses and tongues collect molecules, and our skin responds to physical pressure. In turn, we are able to exert physical action on the world through motor responses, bending our knees and flexing our toes in just the right amount to defy gravity. And yet our internal mental lives go far beyond those things observable through physical experience; we invent sophisticated notions of number and time, we theorize about atoms and invisible forces, and we worry about love, justice, ideas, goals, and principles. So, how is it possible for the simple building blocks of perception and action to give rise to our ability to reason about domains like mathematics, time, justice, or ideas?

Previous approaches to this question have vexed scholars. Plato, for example, concluded that we cannot learn these things, and so we must instead recollect them from past incarnations of our souls. As silly as this answer may seem, it was the best we could do for several thousand years. And even some of our most elegant and modern theories (e.g., Chomskyan linguistics) have been awkwardly forced to conclude that highly improbable modern concepts like "carburetor" and "bureaucrat" must be coded into our genes (a small step forward from past incarnations of our souls).

But in the past ten years, research in cognitive science has started uncovering the neural and psychological substrates of abstract thought, tracing the acquisition and consolidation of information from motor movements to abstract notions like mathematics and time. These studies have discovered that human cognition, even in its most abstract and sophisticated form, is deeply embodied, deeply dependent on the processes and representations underlying perception and motor action. We invent all kinds of complex abstract ideas, but we have to do it with old hardware: machinery that evolved for moving around, eating, and mating, not for playing chess, composing symphonies, inventing particle colliders, or engaging in epistemology for that matter. Being able to re-use this old machinery for new purposes has allowed us to build tremendously rich knowledge repertoires. But it also means that the evolutionary adaptations made for basic perception and motor action have inadvertently shaped and constrained even our most sophisticated mental efforts. Understanding how our evolved machinery both helps and constrains us in creating knowledge will allow us to create new knowledge, either by using our old mental machinery in yet new ways, or by using new and different machinery for knowledge-making, augmenting our normal cognition.

So why will knowing more about how we know change everything? Because everything in our world is based on knowledge. Humans, leaps and bounds beyond any other creatures, acquire, create, share, and pass on vast quantities of knowledge. All scientific advances, inventions, and discoveries are acts of knowledge creation. We owe civilization, culture, science, art, and technology all to our ability to acquire and create knowledge. When we study the mechanics of knowledge building, we are approaching an understanding of what it means to be human—the very nature of the human essence. Understanding the building blocks and the limitations of the normal human knowledge building mechanisms will allow us to get beyond them. And what lies beyond is, well, yet unknown...

Science Writer; Consultant; Lecturer, Copenhagen; Author, The Generous Man


Understanding that the outside world is really inside us and the inside world is really outside us will change everything. Both inside and outside. Why?

"There is no out there out there", physicist John Wheeler said in his attempt to explain quantum physics. All we know is how we correlate with the world. We do not really know what the world is really like, uncorrelated with us. When we seem to experience an external world that is out there, independent of us, it is something we dream up.

Modern neurobiology has reached the exact same conclusion. The visual world, what we see, is an illusion, but then a very sophisticated one. There are no colours, no tones, no constancy in the "real" world, it is all something we make up. We do so for good reasons and with great survival value. Because colors, tones and constancy are expressions of how we correlate with the world.

The merging of the epistemological lesson from quantum mechanics with the epistemological lesson from neurobiology attests to a very simple fact: What we perceive as being outside of us is indeed a fancy and elegant projection of what we have inside. We do make this projection as a result of interacting with something not inside, but everything we experience is inside.

Is it not real? It embodies a correlation that is very real. As physicist N. David Mermin has argued, we do have correlations, but we do not know what it is that correlates, or if any correlata exist at all. It is a modern formulation of quantum pioneer Niels Bohr's view: "Physics is not about nature, it is about what we can say about nature."

So what is real, then? Inside us humans a lot of relational emotions exist. We feel affection, awe, warmth, glow, mania, belonging and refusal towards other humans and towards the world as a whole. We relate, and it provokes deep inner emotional states. These are real and true, inside our bodies, and perceived not as "real states" of the outside world but more like a kind of weather phenomenon inside us.

That raises the simple question: Where do these internal states come from? Are they an effect of us? Did we make them or did they make us? Love exists before us (most of us were conceived in an act of love). Friendship, family bonds, hate, anger, trust, distrust, all of these entities exist before the individual. They are primary. The illusion of the ego denies the fact that they are there before the ego consciously decided to love or hate or care or not. But the inner states predate the conscious ego. And they predate the bodily individual.

The emotional states inside us are very, very real and the product of biological evolution. They are helpful to us in our attempt to survive. Experimental economics and behavioral sciences have recently shown us how important they are to us as social creatures: To cooperate you have to trust the other party, even though a rational analysis will tell you that both the likelihood and the cost of being cheated are very high. When you trust, you experience a physiologically detectable inner glow of pleasure. So the inner emotional state says yes. However, if you rationally consider the objects in the outside world, the other parties, and consider their trade-offs and motives, you ought to choose not to cooperate. Analyzing the outside world makes you say no. Human cooperation is dependent on our giving weight to what we experience as the inner world compared to what we experience as the outer world.

Traditionally, the culture of science has denied the relevance of the inner states. Now, they become increasingly important to understanding humans. And highly relevant when we want to build artefacts that mimic us.

Soon we will be building not only Artificial Intelligence. We will be building Artificial Will. Systems with an ability to convert internal decisions and values into external change. They will be able to decide that they want to change the world. A plan inside becomes an action on the outside. So they will have to know what is inside and outside.

In building these machines we ourselves will learn something that will change everything: The trick of perception is the trick of mistaking an inner world for the outside world. The emotions inside are the evolutionary reality. The things we see and hear outside are just elegant ways of imagining correlata that can explain our emotions, our correlations. We don't hear the croak, we hear the frog.

When we understand that the inner emotional states are more real than what we experience as the outside world, cooperation becomes easier. The epoch of insane mania for rational control will be over.

What really changes is the way we see things, the way we experience everything. For anything to change out there, you have to change everything in here. That is the epistemological situation. All spiritual traditions have been talking about it. But now it grows out of the epistemology of quantum physics, neurobiology and the building of robots.

We will be sitting there, building those Artificial Will-robots. Suddenly we will start laughing. There is no out there out there. It is in here. There is no in here in here. It is out there. The outside is in here. Who is there?

That laughter will change everything.

Professor, Financial Engineering, Columbia University; Principal, Prisma Capital Partners; Former Head, Quantitative Strategies Group, Equities Division, Goldman Sachs & Co.; Author, My Life as a Quant


The biggest game-changer looming in your future, if not mine, is Life Prolongation. It works for mice and worms, and surely one of these days it'll work for the rest of us.

The current price for Life Prolongation seems to be semi-starvation; the people who try it wear loose clothes to hide their ribs and intentions. There's something desperate and shameful about starving yourself in order to live longer. But right now biologists are tinkering with resveratrol and sirtuins, trying to get you the benefit of life prolongation without cutting back on calories.

Life and love get their edge from the possibility of their ending. What will life be like when we live forever? Nothing will be the same.

The study of financial options shows that there is no free lunch. What you lose on the swings you gain on the roundabouts. If you want optionality, you have to pay a price, and part of that price is that the value of your option erodes every day. That's time decay. If you want a world where nothing fades away with time anymore, it will be because there's nothing to fade away.
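The time decay Derman describes can be made concrete with the standard Black-Scholes formula (a textbook illustration, not from the essay): with everything else held fixed, the value of an at-the-money call melts away as expiry approaches. The numbers below assume a hypothetical $100 stock, 20% volatility, and a 2% interest rate:

```python
# Time decay ("theta") illustrated: the Black-Scholes value of an
# at-the-money call shrinks as time to expiry T shrinks.
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S = K = 100.0          # at-the-money: spot equals strike
r, sigma = 0.02, 0.2   # hypothetical rate and volatility
for T in (1.0, 0.5, 0.25, 1 / 52):
    print(f"T = {T:5.3f} yr: call value = {bs_call(S, K, r, sigma, T):6.3f}")
# The option's value falls steadily as T -> 0: that is time decay.
```

Each row prints a smaller value than the last; optionality is worth less the less time remains for the swings and roundabouts to play out.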

No one dies. No one gets older. No one gets sick. You can't tell how old someone is by looking at them or touching them. No May-September romances. No room for new people. Everyone's an American car in Havana, endlessly repaired and maintained long after its original manufacturer is defunct. No breeding. No one born. No more evolution. No sex. No need to hurry. No need to console anyone. If you want something done, give it to a busy man, but no one need be busy when you have forever. Life without death changes absolutely everything.

If everyone is an extended LP, the turntable has to turn very slowly.

Who's going to do the real work, then? Chosen people who will volunteer or be volunteered to be mortal.

If you want things to stay the same, then things will have to change (Giuseppe di Lampedusa in The Leopard).

Consultant, Adaptive Optics; Adjunct Professor of Anthropology, University of Utah; Coauthor, The 10,000 Year Explosion


Our most reliable engine of change has been increased understanding of the physical world. First it was Galilean dynamics and Newtonian gravity, then electromagnetism, later quantum mechanics and relativity. In each case, new observations revealed new physics, physics that went beyond the standard models—physics that led to new technologies and to new ways of looking at the universe. Often those advances were the result of new measurement techniques. The Greeks never found artificial ways of extending their senses, which hobbled their protoscience. But ever since Tycho Brahe, a man with a nose for instrumentation, better measurements have played a key role in Western science.

We can expect significantly improved observations in many areas over the next decade. Some of that is due to sophisticated, expensive, and downright awesome new machines. The Large Hadron Collider should begin producing data next year, and maybe even information. We can scan the heavens for the results of natural experiments that you wouldn't want to try in your backyard—events that shatter suns and devour galaxies—and we're getting better at that. That means devices like the 30-meter telescope under development by a Caltech-led consortium, or the 100-meter OWL (Overwhelmingly Large Telescope) under consideration by the European Southern Observatory. Those telescopes will actively correct for the atmospheric fluctuations which make stars twinkle—but that's almost mundane, considering that we have a neutrino telescope at the bottom of the Mediterranean and another buried deep in the Antarctic ice. We have the world's first real gravitational telescope (LIGO, the Laser Interferometer Gravitational-Wave Observatory) running now, and planned improvements should increase its sensitivity enough to study cosmic fender-benders in the neighborhood, as (for example) when two black holes collide. An underground telescope, of course….

There's no iron rule ensuring that revolutionary discoveries must cost an arm and a leg: ingenious experimentalists are testing quantum mechanics and gravity in table-top experiments, as well. They'll find surprises. When you think about it, even historians and archaeologists have a chance of shaking gold out of the physics-tree: we know the exact date of the Crab Nebula supernova from old Chinese records, and with a little luck we'll find some cuneiform tablets that give us some other astrophysical clue, as well as the real story about the battle of Kadesh…

We have a lot of all-too-theoretical physics underway, but there's a widespread suspicion that the key shortage is data, not mathematics. The universe may not be stranger than we can imagine, but it's entirely possible that it's stranger than we have imagined thus far. We have string theory, but what Bikini test has it brought us? Experiments led the way in the past and they will lead the way again.

We will probably discover new physics in the next generation, and there's a good chance that the world will, as a consequence, become unimaginably different. For better or worse.

Communications Expert; Author, Smart Mobs


Social media literacy is going to change many games in unforeseeable ways. Since the advent of the telegraph, the infrastructure for global, ubiquitous, broadband communication media has been laid down, and of course the great power of the Internet is the democracy of access—in a couple of decades, the number of users has grown from a thousand to a billion. But the next important breakthroughs won't be in hardware or software but in know-how, just as the most important after-effects of the printing press were not improved printing technologies but widespread literacy. The Gutenberg press itself was not enough. Mechanical printing had been invented in Korea and China centuries before the European invention. For a number of reasons, a market for print and the knowledge of how to use the alphabetic code for transmitting knowledge across time and space broke out of the scribal elite that had controlled it for millennia. From around 20,000 books written by hand in Gutenberg's lifetime, the number of books grew to tens of millions within decades of the invention of moveable type. And the rapidly expanding literate population in Europe began to create science, democracy, and the foundations of the industrial revolution.

Today, we're seeing the beginnings of scientific, medical, political, and social revolutions, from the instant epidemiology that broke out online when SARS became known to the world, to the use of social media by political campaigns. But we're only in the earliest years of social media literacy. Whether universal access to many-to-many media will lead to explosive scientific and social change depends more on know-how now than on physical infrastructure. Would the early religious petitioners during the English Civil War, and the printers who eagerly fed their need to spread their ideas, have been able to predict that within a few generations monarchs would be replaced by constitutions? Would Bacon and Newton have dreamed that entire populations, and not just a few privileged geniuses, would aggregate knowledge and turn it into technology? Would those of us who used slow modems to transmit black-and-white text on the early Internet 15 years ago have been able to foresee YouTube?

Associate Professor of Psychology and Neuroscience; Stanford University


The fashionable phrase "game-changing" can imply not only winning a game (usually with a dramatic turnaround), but also changing the rules of the game. If we could change the rules of the mind, we would alter our perception of the world, which would change everything (at least for humans). Assuming that the brain is the organ of the mind, what are the brain's rules, and how might we transcend them? Technological developments that combine neurophenomics with targeted stimulation will offer answers within the next century.

In contrast to genomics, less talk (and funding) has been directed towards phenomics. Yet, phenomics is the logical endpoint of genomics (and a potential bottleneck for clinical applications). Phenomics has traditionally focused on a broad range of individual characteristics including morphology, biochemistry, physiology, and behavior. "Neurophenomics," however, might more specifically focus on patterns of brain activity that generate behavior. Advances in brain imaging techniques over the past two decades now allow scientists to visualize changes in the activity of deep-seated brain regions at a spatial resolution of less than a millimeter and a temporal resolution of less than a second. These technological breakthroughs have sparked an interdisciplinary revolution that will culminate in the mapping of a "neurophenome." The neural patterns of activity that make up the neurophenome may have genetic and epigenetic underpinnings, but can also respond dynamically to environmental contingencies. The neurophenome should link more closely than behavior to the genome, could have one-to-many or many-to-one mappings to behavior, and might ideally explain why groups of genes and behaviors tend to travel together. Although mapping the neurophenome might sound like a hopelessly complex scientific challenge, emerging research has begun to reveal a number of neural signatures that reliably index not only the obvious starting targets of sensory input and motor output, but also more abstract mental constructs like anticipation of gain, anticipation of loss, self-reflection, conflict between choices, impulse inhibition, and memory storage / retrieval (to name but a few...). By triangulating across different brain imaging modalities, the neurophenome will eventually point us towards spatially, temporally, and chemically specific targets for stimulation.

Targeted neural stimulation has been possible for decades, starting with electrical methods, and followed by chemical methods. Unfortunately, delivery of any signal to deep brain regions is usually invasive (e.g., requiring drilling holes in the skull and implanting wires or worse), nonspecific (e.g., requiring infusion of neurotransmitter over minutes to distributed regions), and often transient (e.g., target structures die or protective structures coat foreign probes). Fortunately, better methods are on the horizon. In addition to developing ever smaller and more temporally precise electrical and chemical delivery devices, scientists can now nearly instantaneously increase or decrease the firing of specific neurons with light probes that activate photosensitive ion channels. As with the electrical and chemical probes, these light probes can be inserted into the brains of living animals and change ongoing behavior. But at present, scientists still have to insert invasive probes into the brain. What if one could deliver the same spatially and temporally targeted bolus of electricity, chemistry, or even light to a specific brain location without opening the skull? Such technology does not yet exist — but given the creativity, brilliance, and pace of recent scientific advances, I expect that relevant tools will emerge in the next decade (e.g., imagine the market for "triangulation helmets"...). Targeted and hopefully noninvasive stimulation, combined with the map that comprises the neurophenome, will revolutionize our ability to control our minds.

Clinical implications of this type of control are straightforward, yet startling. Both psychotherapy and pharmacotherapy look like blunt instruments by comparison. Imagine giving doctors or even patients the ability to precisely and dynamically control the firing of acetylcholine neurons in the case of dementia, dopamine neurons in the case of Parkinson's disease, or serotonin neurons in the case of unipolar depression (and so on...). These technological developments will not only improve clinical treatment, but will also advance scientific theory. Along with applications designed to cure will come demands for applications that aim to enhance. What if we could precisely but noninvasively modulate mood, alertness, memory, control, willpower, and more? Of course, everyone wants to win the brain game. But are we ready for the rules to change?

Researcher; Policy Advocate; Author, Engines of Creation


Human knowledge changes the world as it spreads, and the spread of knowledge can be observed. This makes some change predictable. I see great change flowing from the spread of knowledge of two scientific facts: one simple and obvious, the other complex and tangled in myth. Both are crucial to understanding the climate change problem and what we can do about it.

First, the simple scientific fact: Carbon stays in the atmosphere for a long time.

To many readers, this is nothing new, yet most who know this make a simple mistake. They think of carbon as if it were sulfur, with pollution levels that rise and fall with the rate of emission: Cap sulfur emissions, and pollution levels stabilize; cut emissions in half, cut the problem in half. But carbon is different. It stays aloft for about a century, practically forever. It accumulates. Cap the rate of emissions, and the levels keep rising; cut emissions in half, and levels will still keep rising. Even deep cuts won't reduce the problem, but only the rate of growth of the problem.
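The contrast can be made concrete with a toy one-box model of a pollutant with first-order removal. The lifetimes and emission rates below are illustrative numbers chosen for the sketch, not measured values; the point is only the qualitative behavior the essay describes: a short-lived pollutant settles at a level set by its emission rate, while a long-lived one keeps accumulating even under a cap.

```python
# Toy box model: pollutant level under constant emissions, with first-order
# removal (a fixed fraction of the current level removed per year).
# All numbers here are illustrative, not measured atmospheric values.

def simulate(emission_rate, residence_years, years, dt=0.01):
    """Integrate dL/dt = emissions - L / residence_time with small steps."""
    level = 0.0
    k = 1.0 / residence_years  # fraction removed per year
    for _ in range(int(years / dt)):
        level += (emission_rate - k * level) * dt
    return level

# Sulfur-like pollutant: washes out in roughly a tenth of a year.
sulfur_capped = simulate(10.0, 0.1, 50)   # capped emissions -> stable level
sulfur_halved = simulate(5.0, 0.1, 50)    # halved emissions -> half the level

# Carbon-like pollutant: ~100-year residence time.
carbon_50 = simulate(10.0, 100.0, 50)
carbon_100 = simulate(10.0, 100.0, 100)

print(round(sulfur_halved / sulfur_capped, 2))  # ~0.5: cutting sulfur in half halves the problem
print(carbon_100 > carbon_50)                   # True: emissions capped, yet the level keeps rising
```

Under these assumptions, the sulfur-like level stabilizes within a year of any cap, while the carbon-like level is still climbing after a century of constant emissions, which is the asymmetry the essay turns on.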

In the bland words of the Intergovernmental Panel on Climate Change, "only in the case of essentially complete elimination of emissions can the atmospheric concentration of CO2 ultimately be stabilised at a constant [far higher!] level." This heroic feat would require new technologies and the replacement of today's installed infrastructure for power generation, transportation, and manufacturing. This seems impossible. In the real world, Asia is industrializing, most new power plants burn coal, and emissions are accelerating, increasing the rate of increase of the problem.

The second fact (complex and tangled in myth) is that this seemingly impossible problem has a correctable cause: The human race is bad at making things, but physics tells us that we can do much better.

This will require new methods for manufacturing, methods that work with the molecular building blocks of the stuff that makes up our world. In outline (says physics-based analysis) nanoscale factory machinery operating on well-understood principles could be used to convert simple chemical compounds into beyond-state-of-the-art products, and do this quickly, cleanly, inexpensively, and with a modest energy cost. If we were better at making things, we could make those machines, and with them we could make the products that would replace the infrastructure that is causing the accelerating and seemingly irreversible problem of climate change.

What sorts of products? Returning to power generation, transportation, and manufacturing, picture roads resurfaced with solar cells (a tough, black film), cars that run on recyclable fuel (sleek, light, and efficient), and car-factories that fit in a garage. We could make these easily, in quantity, if we were good at making things.

Developing the required molecular manufacturing capabilities will require hard but rewarding work on a global scale, converting scientific knowledge into engineering practice to make tools that we can use to make better tools. The aim that physics suggests is a factory technology with machines that assemble large products from parts made of smaller parts (made of smaller parts, and so on) with molecules as the smallest parts, and the smallest machines only a hundred times their size.

The basic science to support this undertaking is flourishing, but the engineering has gotten a slow start, and for a peculiar reason: The idea of using tiny machines to make things has been burdened by an overgrowth of mythology. According to fiction and pop culture, it seems that all tiny machines are robots made of diamond, and they're dangerous magic — smart and able to do almost anything for us, but apt to swarm and multiply and maybe eat everything, probably including your socks.

In the real world, manufacturing does indeed use "robots", but these are immobile machines that work in an assembly line, putting part A in slot B, again and again. They don't eat, they don't get pregnant, and making them smaller wouldn't make them any smarter.

There is a mythology in science, too, but of a more sober sort, not a belief in glittery nanobugs, but a skepticism rooted in mundane misconceptions about whether nanoscale friction and thermal motion will sabotage nanomachines, and whether there are practical steps to take in laboratories today. (No, and yes.) This mythology, by the way, seems regional and generational; I haven't encountered it in Japan, India, Korea, or China, and it is rare among the rising generation of researchers in the U.S.

The U.S. National Academies have issued a report on molecular manufacturing that calls for funding experimental research. A roadmap prepared by Battelle with several U.S. National Laboratories maps paths forward and suggests research directions. This knowledge will spread, and will change the game.

I should add one more fact about molecular manufacturing and the climate change problem: If we were good at making things, we could make efficient devices able to collect, compress, and store carbon dioxide from the atmosphere, and we could make solar arrays large enough to generate enough power to do this on a scale that matters. Solar arrays that, taken together, would fit in a corner of Texas could generate 3 terawatts. In the course of 10 years, 3 terawatts would provide enough energy to remove all the excess carbon the human race has added to the atmosphere since the Industrial Revolution began. So far as carbon emissions are concerned, this would fix the problem.
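As a back-of-the-envelope check, the essay's figure of 3 terawatts sustained for 10 years can be expressed in joules. Only the essay's own numbers are used here; no assumption is made about the energy cost per ton of CO2 removed, which the essay does not state.

```python
# Convert the essay's stated figure (3 TW for 10 years) into total energy.
power_watts = 3e12                       # 3 terawatts
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 seconds
total_joules = power_watts * seconds_per_year * 10

print(f"{total_joules:.2e} J")  # roughly 9.5e20 joules
```

For scale, that is on the order of a thousand times current annual world electricity output, which conveys why the essay treats atmospheric carbon capture as feasible only with radically cheaper manufacturing of solar arrays.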
