
STEPHEN H. SCHNEIDER
Biologist; Climatologist, Stanford University; Author, Laboratory Earth

CONSERVING THE CLIMATE: WILL GREENLAND'S MELTING ICE SEAL THE DEAL?

Scientists have been talking about the risks of human-induced climate change for decades now, in venues ranging from Congress to scientific conventions, media events, corporate boardrooms, and visible cultural extravaganzas like Live Earth. Yet, a half century after serious scientific concerns surfaced, the world is still far from a meaningful deal to implement actions to curb the threats by controlling the offending emissions.

The reason is obvious: controlling the basic activity that brought us our prosperity—burning fossil fuels—is not going to be embraced by those who benefit from using the atmosphere as a free dump for their tailpipe and smokestack effluents, nor will developing economies like China and India easily give up the techniques we used to get rich because of some threat perceived as distant and not yet certain. To be sure, there is real action at local, state, national, and international levels, but a game-changing global deal is still far from likely. Documented impacts like the loss of the Inuit hunting culture, the threat to small island states' survival in the face of inexorable sea level rise, threats of species extinction in critical places like mountaintops, or a fivefold increase in wildfires in the US West since 1970 have not been game changing—yet. What might change the game?

In order to give up something—the traditional pathway to wealth, burning coal, oil, and gas—nations will have to viscerally perceive that they are getting something in return—protection from unacceptably severe impacts. The latter has been difficult to achieve because most scientific assessments honestly acknowledge that alongside many credible and major risks remain many uncertainties.

We cannot pin down whether sea levels will rise a few feet or a few meters in the next century or two—the former is nasty but relatively manageable with adaptation investments, the latter would mean abandoning coastline installations or cultures where a sizeable chunk of humanity lives and works. If we could show scientifically that such a threat was likely, it would be game changing in terms of motivating the kinds of compromises required to achieve the actions needed that are currently politically difficult to achieve.

This is where the potential for up to 7 meters of sea level rise stored as ice on Greenland comes in to tip us toward meaningful actions. Already Greenland is apparently melting at an unprecedented rate, far faster than any of our theories or models predicted. It can be—and has been—argued that this is just a short-term fluctuation, since large changes in ice volume typically come and go on millennial timescales—though mounting evidence from ice cores suggests that unprecedented melting is probably going on right now. Another decade or two of such scientifically documented acceleration of melting could indeed imply we will get the unlucky outcome: meters of sea level rise within the lifetimes of human infrastructure for ports and cities—to say nothing of vulnerable natural places like coastal wetlands.

Unfortunately, the longer we wait for more confident "proof" of game-changing melt rates in Greenland (or West Antarctica, where another 5 meters of potential sea level rise lurks), the higher the risk of passing a tipping point at which the melting becomes an unstoppable, self-driven process. That game-changing occurrence would force an unprecedented retreat from the sea, a major abandonment or rebuilding of coastal civilization, and the loss of coastal wetlands. This is a gamble with "Laboratory Earth" that we can't afford to lose.


AUBREY DE GREY
Gerontologist; Chairman & Chief Science Officer, The Methuselah Foundation; Author, Ending Aging

THE UNMASKING OF TRUE HUMAN NATURE

Since I think I have a fair chance of living long enough to see the defeat of aging, it follows that I expect to live long enough to see many momentous scientific and technological developments. Does one such event stand out? Yes and no.

You don't have to be a futurophile, these days, to have heard of "the Singularity". What was once viewed as an oversimplistic extrapolation has now become mainstream: it is almost heterodox in technologically sophisticated circles not to take the view that technological progress will accelerate within the next few decades to a rate that, if not actually infinite, will so far exceed our imagination that it is fruitless to attempt to predict what life will be like thereafter.

Which technologies will dominate this march? Surveying the torrent of literature on this topic, we can with reasonable confidence identify three major areas: software, hardware and wetware. Artificial intelligence researchers will, numerous experts attest, probably build systems that are "recursively self-improving"—that understand their own workings well enough to design improvements to themselves, thereby bootstrapping to a state of ever more unimaginable intellectual performance.

On the hardware side, it is now widely accepted as technically feasible to build structures in which every atom is exactly where we wish it to be. The positioning of each atom will be painstaking, so one might view this as of purely academic interest—if not for the prospect of machines that can build copies of themselves. Such "assemblers" have yet to be completely designed, let alone built, but cellular automata research indicates that the smallest possible assembler is probably quite simple and small. The advent of such devices would rather thoroughly remove the barrier to practicability that arises from the time it takes to place each atom: exponentially accelerating parallelism is not to be sneezed at.

And finally, when it comes to biology, the development of regenerative medicine to a level of comprehensiveness that can give a few extra decades of healthy life to those who are already in middle age will herald a similarly accelerating sequence of refinements—not necessarily accelerating in terms of the rate at which such therapies are improved, but in the rate at which they diminish our risk of succumbing to aging at any age, as I've described using the concept of "longevity escape velocity".
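The arithmetic behind longevity escape velocity can be made concrete with a toy model, sketched below in Python (all numbers are invented for illustration, not taken from the essay): biological age advances one year per calendar year minus whatever damage that year's therapies can repair, and repair capacity itself improves each year.

```python
# Toy model of longevity escape velocity (all parameters hypothetical).
def biological_age_trajectory(start_bio_age=60.0, years=60,
                              initial_repair=0.2, repair_growth=0.05):
    """Biological age over calendar time under steadily improving therapies."""
    bio_age, repair = start_bio_age, initial_repair
    trajectory = [bio_age]
    for _ in range(years):
        bio_age += 1.0 - repair                    # ageing minus repaired damage
        repair = min(1.5, repair + repair_growth)  # therapies keep improving
        trajectory.append(bio_age)
    return trajectory

traj = biological_age_trajectory()
# Escape velocity is reached once repair >= 1 (here after 16 years):
# biological age peaks, then falls even though the calendar keeps advancing.
print(round(max(traj), 1), round(traj[-1], 1))
```

The point of the sketch is only the crossover: the therapies need not improve faster and faster, they need only keep reducing the damage outstanding at any given age.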

I don't single out one of these areas as dominant. They're all likely to happen, but all have some way to go before their tipping point, so the timeframe for their emergence is highly speculative. Moreover, each of them will hasten the others: superintelligent computers will advance all technological development, molecular machines will surpass enzymes in their medical versatility, and the defeat of our oldest and most implacable foe (aging) will raise our sights to the point where we will pursue other transformative technologies seriously as a society, rather than leaving them to a few rare visionaries. Thus, any of the three—if they don't just wipe us all out, but unlike Martin Rees I personally think that is unlikely—could be "the one".

Or... none of them. And this is where I return to the Singularity. I'll get to human nature soon, fear not.

When I discuss longevity escape velocity, I am fond of highlighting the history of aviation. It took centuries for the designs of da Vinci (who was arguably not even the first) to evolve far enough to become actually functional, and many confident and smart engineers were proven wrong in the meantime. But once the decisive breakthrough was made, progress was rapid and smooth. I claim that this exemplifies a very general difference between fundamental breakthroughs (unpredictable) and incremental refinements (remarkably predictable).

But to make my aviation analogy stick, I of course need to explain the dramatic lack of progress in the past 40 years (since Concorde). Where are our flying cars? My answer is clear: we haven't developed them because we couldn't be bothered, an obstacle that is not likely to occur when it comes to postponing aging. Progress only accelerates while provided with impetus from human motivation. Whether it's national pride, personal greed, or humanitarian concern, something—someone—has to be the engine room.

Which brings me, at last, to human nature. The transformative technologies I have mentioned will, in my view, probably all arrive within the next few decades—a timeframe that I personally expect to see. And we will use them, directly or indirectly, to address all the other slings and arrows that humanity is heir to: biotechnology to combat aging will also combat infections, molecular manufacturing to build unprecedentedly powerful machines will also be able to perform geoengineering and prevent hurricanes and earthquakes and global warming, and superintelligent computers will orchestrate these and other technologies to protect us even from cosmic threats such as asteroids—even, in relatively short order, nearby supernovae. (Seriously.) Moreover, we will use these technologies to address any irritations of which we are not yet even aware, but which grow on us as today's burdens are lifted from our shoulders. Where will it all end?

You may ask why it should end at all—but it will. It is reasonable to conclude, based on the above, that there will come a time when all avenues of technology will, roughly simultaneously, reach the point seen today with aviation: where we are simply not motivated to explore further sophistication in our technology, but prefer to focus on enriching our and each other's lives using the technology that already exists. Progress will still occur, but fitfully and at a decelerating rather than accelerating rate. Humanity will at that point be in a state of complete satisfaction with its condition: complete identity with its deepest goals. Human nature will at last be revealed.


DONALD D. HOFFMAN
Cognitive Scientist, UC Irvine; Author, Visual Intelligence

THE LAPTOP QUANTUM COMPUTER

Everything will change with the advent of the laptop quantum computer (QC). The transition from PCs to QCs will not merely continue the doubling of computing power, in accord with Moore's Law. It will induce a paradigm shift, both in the power of computing (at least for certain problems) and in the conceptual frameworks we use to understand computation, intelligence, neuroscience, social interactions, and sensory perception.

Today's PCs depend, of course, on quantum mechanics for their proper operation. But their computations do not exploit two computational resources unique to quantum theory: superposition and entanglement. To call them computational resources is already a major conceptual shift. Until recently, superposition and entanglement have been regarded primarily as mathematically well-defined but psychologically incomprehensible oddities of the quantum world—fodder for interminable and apparently unfruitful philosophical debate. But they turn out to be more than idle curiosities. They are bona fide computational resources that can solve certain problems that are intractable with classical computers. The best-known example is Peter Shor's quantum algorithm, which can, in principle, break encryptions that are impenetrable to classical algorithms.
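To see the two resources in the smallest possible setting, here is a minimal sketch (Python with numpy; the gates and states are standard textbook objects, but the example itself is mine, not the essay's) that builds a superposition with a Hadamard gate and an entangled Bell pair with a CNOT gate:

```python
import numpy as np

# A qubit is a length-2 complex vector; |0> and |1> are the basis states.
zero = np.array([1, 0], dtype=complex)

# Superposition: the Hadamard gate puts |0> into an equal mix of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ zero                       # amplitudes (1/sqrt 2, 1/sqrt 2)

# Entanglement: Hadamard on one qubit, then CNOT, yields a Bell pair
# whose two qubits admit no independent description.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, zero)     # (|00> + |11>) / sqrt 2

print(np.abs(bell) ** 2)              # [0.5 0. 0. 0.5]: only 00 or 11 is ever seen
```

A classical simulation like this must track 2^n amplitudes for n qubits, which is exactly why hardware that holds superposed, entangled states natively can outrun it on certain problems.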

The issue is the "in principle" part. Quantum theory is well established and quantum computation, although a relatively young discipline, has an impressive array of algorithms that can in principle run circles around classical algorithms on several important problems. But what about in practice? Not yet, and not by a long shot. There are formidable materials-science problems that must be solved—such as instantiating quantum bits (qubits) and quantum gates, and avoiding an unwanted noise called decoherence—before the promise of quantum computation can be fulfilled by tangible quantum computers. Many experts bet the problems can't adequately be solved. I think this bet is premature. We will have laptop QCs, and they will transform our world.

When laptop QCs become commonplace, they will naturally lead us to rethink the notion of intelligence. At present, intelligence is modeled by computations, sometimes simple and sometimes complex, that allow a system to learn, often by interacting with its environment, how to plan, reason, generalize and act to achieve goals. The computations might be serial or parallel, but they have heretofore been taken to be classical.

One hallmark of a classical computation is that it can be traced, i.e., one can in principle observe the states of all the variables at each step of the computation. This is helpful for debugging. But one hallmark of quantum computations is that they cannot in general be traced. Once the qubits have been initialized and the computation started, you cannot observe intermediate stages of the computation without destroying it. You aren't allowed to peek at a quantum computation while it is in progress.

The full horsepower of a quantum computation is only unleashed when, so to speak, you don't look. This is jarring. It clashes with our classical way of thinking about computation. It also clashes with our classical notion of intelligence. In the quantum realm, intelligence happens when you don't look. Insist on looking, and you destroy this intelligence. We will be forced to reconsider what we mean by intelligence in light of quantum computation. In the process we might find new conceptual tools for understanding those creative insights that seem to come from the blue, i.e., whose origin and development can't seem to be traced.

Laptop QCs will make us rethink neuroscience. A few decades ago we peered inside brains and saw complex telephone switchboards. Now we peer inside brains and see complex classical computations, both serial and parallel. What will we see once we have thoroughly absorbed the mindset of quantum computation? Some say we will still find only classical computations, because the brain and its neurons are too massive for quantum effects to survive. But evolution by natural selection leads to surprising adaptations, and there might in fact be selective pressures toward quantum computations.

One case in point arises in a classic problem of social interaction: the prisoner's dilemma. In one version of this dilemma, someone yells "Fire!" in a crowded theater. Each person in the crowd has a choice. They can cooperate with everyone else, by exiting in turn in an orderly fashion. Or they can defect, and bolt for the exit. Everyone cooperating would be best for the whole crowd; it is a so-called Pareto optimal solution. But defecting is best for each individual; it is a so-called Nash equilibrium.

What happens is that everyone defects, and the crowd as a whole suffers. But this problem of the prisoner's dilemma, viz., that the Nash equilibrium is not Pareto optimal, is an artifact of the classical computational approach to the dilemma. There are quantum strategies, involving superpositions of cooperation and defection, for which the Nash equilibrium is Pareto optimal. In other words, the prisoner's dilemma can be resolved, and the crowd as a whole needn't suffer if quantum strategies are available. If the prisoner's dilemma is played out in an evolutionary context, there are quantum strategies that drive all classical strategies to extinction. This is suggestive. Could there be selective pressures that built quantum strategies into our nervous systems, and into our social interactions? Do such strategies provide an alternative way to rethink the notion of altruism, perhaps as a superposition of cooperation and defection?
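The quantum strategies alluded to here are usually formalized in the Eisert-Wilkens-Lewenstein protocol. The following numpy sketch assumes the textbook payoff matrix (3 for mutual cooperation, 1 for mutual defection, 5 for a lone defector, 0 for the sucker) and shows the quantum move Q making the Pareto-optimal outcome a Nash equilibrium:

```python
import numpy as np

payoff_A = np.array([3, 0, 5, 1])   # outcomes CC, CD, DC, DD for player A
payoff_B = np.array([3, 5, 0, 1])

C = np.eye(2, dtype=complex)                     # classical "cooperate"
D = np.array([[0, 1], [-1, 0]], dtype=complex)   # classical "defect"
Q = np.array([[1j, 0], [0, -1j]])                # quantum strategy

# Maximally entangling gate J = exp(i*pi/4 * D(x)D); note (D(x)D)^2 = I.
DD = np.kron(D, D)
J = (np.eye(4) + 1j * DD) / np.sqrt(2)

def payoffs(U_A, U_B):
    """Entangle with J, apply local strategies, disentangle with J-dagger."""
    psi0 = np.array([1, 0, 0, 0], dtype=complex)   # start in |CC>
    psi = J.conj().T @ np.kron(U_A, U_B) @ J @ psi0
    p = np.abs(psi) ** 2                           # probabilities of CC, CD, DC, DD
    return float(p @ payoff_A), float(p @ payoff_B)

print(payoffs(C, C))   # (3.0, 3.0)  mutual cooperation
print(payoffs(D, D))   # (1.0, 1.0)  the classical Nash equilibrium
print(payoffs(D, Q))   # (0.0, 5.0)  defecting against Q now backfires
print(payoffs(Q, Q))   # (3.0, 3.0)  a Nash equilibrium that is Pareto optimal
```

With superposed strategies available, unilateral defection is punished by the entanglement itself, which is the resolution the paragraph describes.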

Laptop QCs will alter our view of sensory perception. Superposition seems to be telling us that our sensory representations, which carve the world into discrete objects with properties such as position and momentum, simply are an inadequate description of reality: No definite position or momentum can be ascribed to, say, an electron when it is not being observed. Entanglement seems to be telling us that the very act of carving the world into discrete objects is an inadequate description of reality: Two electrons, billions of light years apart in our sensory representations, are in fact intimately and instantly linked as a single entity.

When superposition and entanglement cease to be abstract curiosities, and become computational resources indispensable to the function of our laptops, they will transform our understanding of perception and of the relation between perception and reality.


JAMES J. O'DONNELL
Classicist; Provost, Georgetown University; Author, The Ruin of the Roman Empire

AFRICA

"Africa" is the short answer to this question. But it needs explanation.

Historians can't predict black swan game-changers any better than economists can. An outbreak of plague, a nuclear holocaust, an asteroid on collision course, or just an unassassinated pinchbeck dictator at the helm of a giant military machine—any of those can have transformative effect and will always come as a surprise.

But at a macro level, it's easier to see futures, just hard to time them. The expansion of what my colleague, the great environmental historian John McNeill, calls "the human web" to build a planet-wide network of interdependent societies is simply inevitable, but it's taken a long time. Rome, Persia, and ancient China built a network of empires stretching from Atlantic to Pacific, but never made fruitful contact with each other and their empire-based model of "globalization" fell apart in late antique times. A religion-based model kicked in then, with Christianity and Islam taking their swings: those were surprising developments, but they only went so far.

It took until early modern times and the development of new technologies for a real "world-wide web" of societies to develop. Even then, development was Euro-centric for a very long time. Now in our time, we've seen one great game-changer. In the last two decades, the Euro-centric model of economic and social development has been swamped by the sudden rise of the great emerging market nations: China, India, Brazil, and many smaller ones. The great hope of my youth—that "foreign aid" would help the poor nations bootstrap themselves—has come true, sometimes to our thinly-veiled disappointment: disappointment because we suddenly find ourselves competed with for steel and oil and other resources, suddenly find our products competed with by other economies' output, and wonder if we really wanted that game to change after all. The slump we're in now is the inevitable second phase of that expansion of the world community, and the rise that will follow is the inevitable third—and we all hope it comes quickly.

But a great reservoir of misery and possibility awaits: Africa. Humankind's first continent and homeland has been relegated for too long to disease, poverty, and sometimes astonishingly bad government. There is real progress in many places, but astonishing failures persist. That can't last. The final question facing humankind's historical development is whether the whole human family, including Africa's billion, can together achieve sustainable levels of health and comfort.

When will we know? That's a scary question. One future timeline has us peaking now and subsiding, as we wrestle with the challenges we have made for ourselves, into some long period of not-quite-success, while Africa and the failed states of other continents linger in waiting for—what? Decades? Centuries? There are no guarantees about the future. But as we think about the financial crises of the present, we have to remember that what is at risk is not merely the comfort and prosperity of the rich nations but the very lives and opportunities of the poorest.


GREGORY BENFORD
Novelist; Co-founder & Chairman, Genescient; Author, The Sunborn

LIVE TO 150

I expect to see this happen, because I'll be living longer. Maybe even to 150, about 30 more years than any human is known to have lived.

I expect this because I've worked on it, seen the consequences of genomics when applied to the complex problem of our aging.

Since Aristotle, many scientists and even some physicians (who should know better) thought that aging arises from a few mechanisms that make our bodies deteriorate. Instead, the genomic revolution of the last decade now promises a true 21st Century path to extending longevity: follow the pathways.

Genomics now reveals what physicians intuited: the staggering complexity of aging pathophysiology among real clinical patients. We can't solve "the aging problem" using the standard research methods of cell biology, despite the great success such methods had with some other medical problems.

Aging is not a process of deterioration actively built by natural selection. Instead it arises from a lack of such natural selection in later adulthood. Not understanding this explains the age-old failures to explain or control aging and the chronic diseases underlying it.

Aging comes from multiple genetic deficiencies, not a single biochemical problem.

But now we have genomics to reveal all the genes in an organism. More, we can monitor how each and every one of them expresses in our bodies. Genomics, working with geriatric pathology, now unveils the intricate problems of coordination among aging organ systems. Population genetics illuminates aging's cause and so, soon enough, its control. Aging arises from interconnected complexity hundreds of times greater than cell biologists thought before the late 1990s.

The many-headed monster of aging can't be stopped by any vaccine or by supplying a single missing enzyme. There are no "master regulatory" genes or single avenues of accumulating damage. Instead, there are many complex pathways that inevitably trade current performance for longterm decay. Eventually that evolutionary strategy catches up with us.

So the aging riddle is inherently genomic in scale. There is no biochemical or cellular necessity to aging—it arises from side effects of evolution, through natural selection. But this also means we can attack it by using directed evolution.

Michael Rose at UC Irvine has produced "Methuselah flies" that live over four times longer than control flies in the lab. He did this, over hundreds of generations, by not allowing their eggs to hatch until half the flies had died. Methuselah flies are more robust, not less, and so resist stress.
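As a toy illustration of how that kind of directed evolution works (a sketch only; the population size, heritability, and mutation figures are invented, not Rose's), one can simulate breeding exclusively from individuals that outlive the median of each generation:

```python
import random

def evolve(generations=200, pop=500, start_mean=40.0, mutation=1.0):
    """Mean heritable lifespan after breeding only from late survivors."""
    genes = [start_mean] * pop                      # heritable mean lifespan
    for _ in range(generations):
        lifespans = [random.gauss(g, 5.0) for g in genes]
        cutoff = sorted(lifespans)[pop // 2]        # wait until half have died
        parents = [g for g, life in zip(genes, lifespans) if life >= cutoff]
        genes = [random.choice(parents) + random.gauss(0.0, mutation)
                 for _ in range(pop)]
    return sum(genes) / pop

print(evolve())   # mean heritable lifespan climbs well above the starting 40
```

Nothing here builds longevity directly; delaying reproduction simply lets selection see, and reward, whatever pathways keep an individual robust late in life.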

Methuselah fly genomics shows us densely overlapping pathways. Directed evolution uses these to enhance longevity. Since flies have about ¾ of their genes in common with us, this tells us much about our own pathways. We now know many of these pathways and can enhance their resistance to the many disorders of aging.

By finding substances that can enhance the action of those pathways, we have a 21st Century approach to aging. Such research is rapidly ongoing in private companies, including one I co-founded only three years ago. The field is moving fast. The genomic revolution makes the use of multi-pathway treatments to offset aging inevitable.

Knowledge comes first, then its use. Science yields engineering. Already there seems no fundamental reason why we cannot live to 150 years or longer. After all, nature has done quite well on her own. We know of a 4,800-year-old bristlecone pine and a 400-year-old clam—plus whales, a tortoise, and koi fish over 200 years old—all without technology. These organisms use pathways we share and can now understand.

It will take decades to find the many ways of acting on the longevity genes we already know. Nature spent several billion years developing these pathways; we must plumb them with smart modern tools. The technology emerging now acts on these basic pathways to immediately affect all types of organs. Traditionally, medicine focuses on disease by isolating and studying organs. Fair enough, for then. Now it is better to focus on entire organisms. Only genomics can do this. It looks at the entire picture.

Quite soon, simple pills containing designer supplements will target our most common disorders—cardiovascular, diabetes, neurological. Beyond that, the era of affordable, personal genomics makes possible designer supplements, a field now called nutrigenomics. Tailored to each personal genome, these can reinforce the repair mechanisms and augmentations that nature herself provided to the genomically fortunate.

So…what if it works?

The prospect of steadily extending our lifespans terrifies some governments. These will yield, over time, to pressures to let us work longer—certainly far beyond the 65 years imposed by most European Union countries. Slowly it will dawn that vibrant old age is a boon, not a curse.

Living to 150 ensures that you take the long view. You're going to live in a future ecology, so better be sure it's livable. You'll need longterm investments, so think longterm. Social problems will belong to you, not some distant others, because problems evolve and you'll be around to see them.

Rather than isolating people, "old age" will lead to social growth. With robust health to go with longer lives, the older will become more socially responsible, bringing both experience and steady energy to bear.

We need fear no senioropolis of caution and withdrawal. Once society realizes that people who get educated in 20 years can use that education for another century or so, working well beyond 100, all the 20th Century social agenda vanishes. Nobody will retire at 65. People will switch careers, try out their dreams, perhaps find new mates and passions. We will see that experience can damp the ardent passions of glib youth, if it has a healthy body to work through. That future will be more mature, and richer for it.

All this social promise emerges from the genomic revolution. The 21st Century has scarcely begun, and already it looks as though most who welcomed it in will see it out–happily, after a good swim in the morning and a vigorous party that night, to welcome in the 22nd. The first person to live to 150 may be reading this right now.


STEVE NADIS
Science writer; Contributing Editor, Astronomy Magazine

DISCOVERING ANOTHER UNIVERSE IN OUR UNIVERSE

What would change everything? Well, if you think of the universe as everything, then something that changes the universe—or at least changes our whole conception of it—would change everything. So I think I’ll go with the universe (which is generally a safe pick when you want to cover all bases). I just have to figure out the thing that’s changing. And the biggest, most dramatic thing I can think of would be discovering another universe in our universe.

Now what exactly does that mean? To some extent, it comes down to definitions. If you define the universe as “all there is,” then the idea of discovering another universe doesn’t really make sense. But there are other ways of picturing this. And the way many cosmologists view it is that our universe is, in fact, an expanding bubble--an honest-to-god bubble with a wall and everything. Not so different from a soap bubble really, except for its size and longevity. For this bubble has kept it together for billions of years. And as viewed from the inside, it appears infinitely large. Even so, there’s still room for other bubbles out there--an infinite number of them--and they could appear infinitely large too.

I guess the picture I’m painting here has lots of bubbles. And it’s not necessarily wrong to think of them as different universes, because they could be made of entirely different stuff that obeys different physical laws and sits at a different general energy level (or vacuum state) than our bubble. The fact is, we can never see all of our own bubble, or even see its edge, let alone see another bubble that might be floating outside. We can only see as far as light will take us, and right now that’s about 13.7 billion light-years, which means we only get to observe a small portion of our bubble and nothing more. That’s why it’s fair to consider a bubble outside ours as a universe unto itself. It could be out there, just as real as ours, and we’ll never have any prospect of knowing about it. Unless, perchance, it makes a dramatic entrance into our world by summarily crashing into us.

This sounds like the stuff of fantasy, and it may well be, but I’m not just making it up. One of our leading theories in cosmology, called inflation, predicts—at least in some versions—that our bubble universe will eventually experience an infinite number of collisions with other bubble universes. The first question one might ask is whether we could withstand such a crash and live to tell about it. The small number of physicists and cosmologists who’ve explored this issue have concluded that in many cases we would survive, protected to some extent by the vastness of our bubble and its prodigious wall.

The next question to consider is whether we could ever see traces of such a collision. There’s no definitive answer to that yet, and until we detect the imprint of another bubble we won’t know for sure. But theorists have some pretty specific ideas of what we might see—namely, disk-shaped features lurking somewhere amidst the fading glow of Big Bang radiation known as the cosmic microwave background. And if we were to look at such a disk in gravitational waves, rather than in electromagnetic waves (which we should be able to do in the near future), we might even see it glow.

The probability of seeing a disk of this nature is hard to assess because it appears to be the product of three numbers whose values we can only guess at. One of those numbers has to do with the rate at which other bubbles are forming. The other two numbers have to do with the rate at which space is expanding both inside and outside our bubble. Since we don’t know how to get all these numbers by direct measurements, there doesn’t seem to be much hope of refining that calculation in the near-term. So our best bet, for now, may be trying to obtain a clearer sense of the possible observational signatures and then going out and looking. The good news is that we won’t need any new observatories in the sky. We can just sift through the available cosmic microwave data, which gets better every year, and see what turns up.

If we find another universe, I’m not sure exactly what that means. The one thing I do know is that it’s big. It should be of interest to everybody, though it will undoubtedly mean different things to different folks. One thing that I think most people will agree on is that the place we once called the universe is even grander and more complex than we ever imagined.


BARRY SMITH
Director, Institute of Philosophy, School of Advanced Study, University of London

LITTLE CHANGES MAKE THE BIGGEST DIFFERENCE

Despite the inevitable decline in the environment brought by climate change, the advance of technology will steadily continue. Many pin their hopes on technological advances to lessen the worst effects of climatic upheaval and to smooth the transition between our dependence on fossil fuels and our eventual reliance on renewable energy sources. However, bit by bit, less dramatic advances in technology will take place, changing the world, and our experience of it, for ever.

It is tempting when thinking about developments that will bring fundamental change to look to the recent past. We think of the Internet and the cell phone. To lose contact with the former, even temporarily, can make one feel suddenly stripped of a sense, like the temporary loss of one’s sight or hearing; while the ready supply of mobile phone technology has stimulated the demand to communicate. Why be alone anywhere? You can always summon someone’s company. Neither of these technologies is yet optimal, and either we, or they, will have to adapt to one another. The familiar refrain is that email increases our workload and that cell phones put us at the end of an electronic leash. Email can also be a surprisingly inflammatory medium, and cell phones can separate us from our surroundings, leaving us uneasy with these technologies. Can’t live with them, can’t live without them. So can future technology help, or is it we who will adapt?

Workers in A.I. used to dream of the talking typewriter, and this is ever closer to being an everyday reality. Why write emails when you can dictate them? Why read them when you can listen to them being read to you, and do something else? And why not edit as you go, to speed up the act of replying? All this will come one day, no doubt, perhaps with emails being read in the personalized voice patterns of their senders. Will this cut down on the surprisingly inflammatory and provocative nature of email exchanges? Perhaps not.

However, the other indispensable device for communicating, the cell phone, is far from adaptive. We hear, unwanted, other people’s conversations. We lose our inhibitions and our awareness of our surroundings while straining to capture the nuances of the other’s speech; listening out for the subtle speech signals that convey mood and meaning, many of which are simply missing in this medium. Maybe this is why speakers are more ampliative on their cell phones, implicitly aware that less of them comes across. Face to face, our attention is focused on many features of the talker. It is this multi-modal experience that can simultaneously provide so much. Without these cross-modal clues, we make a concentrated effort to tune in to what is happening elsewhere, often with dangerous consequences, as happens when drivers lose keen awareness of their surroundings—even when using hands-free sets. Could technology overcome these problems?

Here, I am reminded not of the recent past but of a huge change that occurred in the Middle Ages, when humans transformed their cognitive lives by learning to read silently. Originally, people could only read books by reading each page out loud. Monks would whisper, of course, but the dedicated reading of so many in an enclosed space must have been a highly distracting affair. It was St Aquinas who amazed his fellow believers by demonstrating that without pronouncing words he could retain the information he found on the page. At the time, his skill was seen as a miracle, but gradually human readers learned to keep things inside and not say the words they were reading out loud. From this simple adjustment, seemingly miraculous at the time, a great transformation of the human mind took place, and so began the age of intense private study so familiar to us now in universities, where ideas could turn silently in large minds.

Will a similar transformation of the human mind come about in our time? Could there come a time when we intend to communicate and do so without talking out loud? If the answer is ‘yes’ a quiet public space would be restored where all could engage in their private conversations without disturbing others. Could it really happen? Recently, we have been amazed by how a chimpanzee running on a treadmill could control—for a short time—the movements of a synchronized robot running on a treadmill thousands of miles away. Here, we would need something subtly different but no less astounding: a way of controlling in thought, and committing to send, the signals in the motor cortex that would normally travel to our articulators and ultimately issue in speech sounds. A device, perhaps implanted or appended, would send the signals and another device in receivers would read them and stimulate similar movements or commands in their motor cortex, giving them the ability, through neural mimicry, to reproduce silently the speech sounds they would make if they were saying them. Could accent be retained? Maybe not, unless some way was found of coding separately, but usably, the information voice conveys about the size, age and sex of the speaker. However, knowing who was calling and knowing how they sounded may lead us to ‘hear’ their voice with the words understood.

Whether this could be done depends, in part, on whether Liberman’s Motor Theory of Speech Perception is true, and it may well not be. However, a breakthrough of this kind, introducing such a little change as our not having to speak out loud or listen attentively to sounds when communicating, would allow us to share our thoughts efficiently and privately. Moreover, just as thinking distracts us less from our surroundings than listening attentively to sounds originating elsewhere, perhaps one could both communicate and concentrate on one’s surroundings, whether that be driving, or just negotiating our place among other people. It would not be telepathy, the reading of minds, or the invasion of thought, since it would still depend on senders and receivers with the appropriate apparatus being willing to send to, and receive from, one another. We would still have to dial and answer.

Would it come to feel as if one were exchanging thoughts directly? Perhaps. And maybe it would become the preferred way of communicating in public. And odd as this may sound to us, I suspect the experience of taking in the thoughts of others when reading a manuscript silently was once just as strange to those early Medieval scholars. These are changes in experience that transform our minds, giving us the ability to be (notionally) in two places at once. It is these small changes in how we utilize our minds that may ultimately have the biggest effects on our lives.


SUSAN BLACKMORE
Psychologist; Author, Consciousness: An Introduction

ARTIFICIAL, SELF-REPLICATING MEME MACHINES

All around us the techno-memes are proliferating, and gearing up to take control; not that they realise it; they are just selfish replicators doing what selfish replicators do—getting copied whenever and wherever they can, regardless of the consequences. In this case they are using us human meme machines as their first stage copying machinery, until something better comes along. Artificial meme machines are improving all the time, and the step that will change everything is when these machines become self-replicating. Then they will no longer need us. Whether we live or die, or whether the planet is habitable for us or not, will be of no consequence for their further evolution.

I like to think of our planet as one in a million, or one in a trillion, of possible planets where evolution begins. This requires something (a replicator) that can be copied with variation and subjected to selection. As Darwin realised, if more copies are made than can survive, then the survivors will pass on to the next generation of copying whatever helped them get through. This is how all design in the universe comes about.

What is not so often thought about is that one replicator can piggy-back on another by using its vehicles as copying machinery. This has happened here on earth. The first major replicator (the only one for most of earth’s existence and still the most prevalent) is genes. Plants and animals are gene machines—physical vehicles that carry genetic information around, and compete to protect and propagate it. But something happened here on earth that changed everything. One of these gene vehicles, a bipedal ape, became capable of imitation.

Imitation is a kind of copying. The apes copied actions and sounds, and made new variations and combinations of old actions and sounds, and so they let loose a new replicator—memes. After just a few million years the original apes were transformed, gaining enormous brains, dexterous hands, and redesigned throats and chests, to copy more sounds and actions more accurately. They had become meme machines.

We have no idea whether there are any other two-replicator planets out there in the universe because they wouldn’t be able to tell us. What we do know is that our planet is now in the throes of gaining a third replicator—the step that would allow interplanetary communication.

The process began slowly and speeded up, as evolutionary processes tend to do. Marks on clay preserved verbal memes and allowed more people to see and copy them. Printing meant higher copying fidelity and more copies. Railways and roads spread the copies more widely and people all over the planet clamoured for them. Computers increased both the numbers of copies and their fidelity. The way this is usually imagined is a process of human ingenuity creating wonderful technology as tools for human benefit, and with us in control. This is a frighteningly anthropocentric way of thinking about what is happening. Look at it this way:

Printing presses, rail networks, telephones and photocopiers were among early artificial meme machines, but they only carried out one or two of the three steps of the evolutionary algorithm. For example, books store memes and printing presses copy them, but humans still do the varying (i.e. writing the books by combining words in new ways), and the selecting (by choosing which books to buy, to read, or to reprint). Mobile phones store and transmit memes over long distances, but humans still vary and select the memes. Even with the Internet most of the selection is still being done by humans, but this is changing fast. As we old-fashioned, squishy, living meme machines have become overwhelmed with memes we are happily allowing search engines and other software to take over the final process of selection as well.
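To make those three steps concrete, here is a toy copy-vary-select loop in Python (a sketch only; the target string stands in for whatever selection pressure an environment applies):

```python
import random
import string

TARGET = "replicators"                 # hypothetical selection criterion

def fitness(meme):
    """How many characters of a candidate meme match the target."""
    return sum(a == b for a, b in zip(meme, TARGET))

population = ["".join(random.choice(string.ascii_lowercase) for _ in TARGET)
              for _ in range(100)]

for generation in range(300):
    population.sort(key=fitness, reverse=True)     # SELECT the fittest memes
    survivors = population[:20]
    population = []
    for _ in range(100):                           # COPY the survivors...
        meme = list(random.choice(survivors))
        if random.random() < 0.5:                  # ...and VARY some copies
            meme[random.randrange(len(meme))] = random.choice(string.ascii_lowercase)
        population.append("".join(meme))

print(max(population, key=fitness))    # typically converges on "replicators"
```

Hand any one of the three steps to a machine and you have a printing press or a photocopier; hand it all three and the loop runs without us.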

Have we inadvertently let loose a third replicator that is piggy-backing on human memes? I think we have. The information these machines copy is not human speech or actions; it is digital information competing for space in giant servers and electronic networks, copied by extremely high fidelity electronic processes. I think that once all three processes of copying, varying and selecting are done by these machines then a new replicator has truly arrived. We might call these level-three replicators “temes” (technological-memes) or “tremes” (tertiary memes). Whatever we call them, they and their copying machinery are here now. We thought we were creating clever tools for our own benefit, but in fact we were being used by blind and inevitable evolutionary processes as a stepping stone to the next level of evolution.

When memes coevolved with genes they turned gene machines into meme machines. Temes are now turning us into teme machines. Many people work all day copying and transmitting temes. Human children learn to read very young—a wholly unnatural process that we’ve just got used to—and people are beginning to accept cognitive enhancing drugs, sleep reducing drugs, and even electronic implants to enhance their teme-handling abilities. We go on thinking that we are in control, but looked at from the temes’ point of view we are just willing helpers in their evolution.

So what is the step that will change everything? At the moment temes still need us to build their machines, and to run the power stations, just as genes needed human bodies to copy them and provide their energy. But we humans are fragile, dim, low quality copying machines, and we need a healthy planet with the right climate and the right food to survive. The next step is when the machines we thought we created become self-replicating. This may happen first with nano-technology, or it may evolve from servers and large teme machines being given their own power supplies and the capacity to repair themselves.

Then we would become dispensable. That really would change everything.


KENNETH W. FORD
Retired Physicist & Writer; Coauthor (with John Archibald Wheeler), Geons, Black Holes, and Quantum Foam: A Life in Physics

READING MINDS

Not in my lifetime, but someday, somewhere, some team will figure out how to read your thoughts from the signals emitted by your brain. This is not in the same league as human teleportation—theoretically possible, but in truth fictional. Mind reading is, it seems to me, quite likely.

And, as we know from hard disks and flash memories, to be able to read is to be able to write. Thoughts will be implantable.

Some will applaud the development. After all, it will aid the absent minded, enable the mute to communicate, preempt terrorism and crime, and conceivably aid psychiatry. (It will also cut down on texting and provide as reliable a staple for cartoonists as the desert island and the bed.) Some will, quite rightly, deplore it. It will be the ultimate invasion of privacy.

Game-changing indeed. If we choose to play the game. Until about forty years ago, we lived in the "If it is technically feasible, it will happen" era. Now we are in the "If it is technically feasible, we can choose" era. An important moment was the decision in the United States in 1971 not to develop a supersonic transport. An American SST would hardly have been game-changing, but the decision not to build it was a watershed moment in the history of technology. Of course, since then—if I may offer up my own opinions—we should have said no to the International Space Station but didn't, and we should have said yes to the Superconducting Super Collider but didn't. Our skill in choosing needs refinement.

As what is technically feasible proliferates in its complexity, cost, and impact on humankind, we should more often ask the question, "Should we do it?" Take mind reading. We can probably safely assume that the needed device would have to be located close to the brain being read. That would mean that choice is possible. We could let Mind Reader™, Inc. make and market it. Or we could outlaw it. Or we could hold it as an option for special circumstances (much as we now try to do with wiretapping). What we will not have to do is throw up our hands and say, "Since it can be done, it will be done."

I like being able to keep some of my thoughts to myself, and I hope that my descendants will have the same option.


ERNST PÖPPEL
Neuroscientist, Chairman, Human Science Center and Department of Medical Psychology, Munich University; Author, Mindworks

FUTURE AS PRESENT: A FINAL EXPERIMENT

When time came to an end, the gods decided to run a final experiment. They wanted to be prepared, after the big crunch, for potential trajectories of life after the next big bang. For their experiment they chose two planets in the universe where evolution had resulted in similar developments of life. For planet ONE they decided to interfere with evolution by allowing only ONE species to develop its brain to a high level of complexity. This species referred to itself as being "intelligent"; members of this species were very proud of their achievements in science, technology, the arts, and philosophy.

For planet TWO the gods altered just one variable. For this planet they allowed TWO species with high intelligence to develop. The two species shared the same environment, but—and this was crucial for the divine experiment—they did not communicate directly with each other. Direct communication was limited to their own species only. Thus, one species could not directly inform the other about its future plans; each species could only register what had happened to their common environment.

The question was how life would be managed on planet ONE and on planet TWO. As for any organism, the goal on both planets was to maintain an internal balance, or homeostasis, by optimally using the available resources. As long as the members of the different samples were not too intelligent, stability was maintained. However, when they became more intelligent and, by their own account, really smart, and when the frame of judgment changed, i.e. individual interests became dominant, trouble was preprogrammed. Driven by uncontrolled personal greed, more resources were drawn from the environment than could be replaced. Which planet would do better at maintaining the conditions of life with species of such excessive intelligence?

Data analysis after the experimental period of 200 years showed that planet TWO did much better at maintaining the stability of the environment. Why? The species on planet TWO always had to monitor the consequences of the actions of the other species. If one took too many resources for individual satisfaction, sanctions by the other species would follow. Thus, the drawing of resources from the environment was controlled by the other species in a bi-directional way, resulting in a dynamic equilibrium.

When the gods published their results, they drew the following conclusions: Long-term stability in complex systems, such as social systems whose members are too intelligent, can be maintained if two complementary systems interact with each other. In case only one system has developed, as on planet ONE, it is recommended that a second system be adopted for regulative purposes. For social systems it should be the next generation. Their future environment should be made present both conceptually and emotionally. By doing so, long-term stability is guaranteed.

Being good brain scientists, the gods knew that making the future present is not only a matter of abstract or explicit knowledge. This is necessary but not sufficient for action resulting in a long-term equilibrium. Decisions have to be anchored in the emotional systems as well, i.e. an empathic relationship between the members of the two systems has to be developed. If the future becomes present, the future can be a present.

