
Psychologist and Biologist, Harvard University; Author, Moral Minds


Science fiction writers traffic in possible worlds. What if, as in the Hollywood blockbuster Minority Report, we could read people's intentions before they act and thus preempt violence? An intentionality detector would be a terrific device to have, but talk about ethical nightmares. If you ever worried about Big Brother tapping your phone lines, how about tapping your neural lines? What about aliens from another planet? What will they look like? How do they reproduce? How do they solve problems? If you want to find out, just go back and watch reruns of Star Trek, or get out the popcorn and watch Men In Black, War of the Worlds, The Thing, Signs, and The Blob.

But here's the rub on science fiction: it's all basically the same stuff, one gimmick with a small twist. Look at all the aliens in these movies. They are always the same: a bit wispy, often with oversized heads, see-through body parts, and awesome powers. And surprisingly, this is how it has been for 75 or so years of Hollywood, even though our technologies have greatly expanded the range of special effects that are possible. Why the lack of creativity? Why such a poverty of the imagination?

The answer is simple, and it reveals a deep fact about our biology and the biology of all other organisms. The brain, as a physical device, evolved to process information and make predictions about the future. Though the generative capacity of the brain, especially the human brain, is spectacular, providing us with a system for massive creativity, it is also highly constrained. The constraints arise both from the physics of brain operation and from the requirements of learnability.

These constraints establish what we, and other organisms, have achieved — the actual — and what we could, in the future and with the right conditions, potentially achieve — the possible. Where things get interesting is in thinking about the unimaginable. But there is a different way of thinking about this problem that takes advantage of exciting new developments in molecular biology, evolutionary developmental biology, morphology, neurobiology, and linguistics. In a nutshell, for the first time we have a science that enables us to understand the actual, the possible, and the unimaginable, a landscape that will forever change our understanding of what it means to be human, including how we arrived at our current point in evolution, and where we might end up in ten or ten million years.

To illustrate, consider a simple example from the field of theoretical morphology, a discipline that aims to map out the space of possible morphologies and in so doing, reveal not only why some parts of this space were never explored, but also why they never could be explored. The example concerns an extinct group of animals called the ammonoids, swimming cephalopod mollusks with a shell that spirals out from the center before opening up.

In looking at the structure of their shells — the ones that actually evolved, that is — there are two relevant dimensions that account for the variation: the rate at which the spiral spirals out and the distance between the center of this coil and the opening. If you plot the actual ammonoid species on a graph whose axes are these two dimensions, you see a density of animals in a few areas, and then some gaps. The occupied spaces in this map show what actually evolved, whereas the vacant spaces suggest either possible (not yet evolved) or impossible morphologies.
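This kind of morphospace can be sketched in a few lines of code. The toy model below loosely follows the spirit of David Raup's classic shell parameterization; the names `w` (whorl expansion rate) and `d` (relative distance of the opening from the coiling axis) and the specific values are illustrative assumptions, not the original notation. Each shell is treated as a logarithmic spiral, and the morphospace is simply the grid of candidate parameter pairs onto which real species could be plotted.

```python
import math

def spiral_radius(w, theta, r0=1.0):
    # Logarithmic spiral: the radius multiplies by the expansion
    # rate w with every full revolution (theta in radians).
    return r0 * w ** (theta / (2 * math.pi))

def morphospace(w_values, d_values):
    # Each (w, d) pair is one candidate shell form. Surveys of actual
    # fossils plot real species onto this grid and then ask which
    # regions are occupied, which are vacant-but-possible, and which
    # are impossible.
    return [(w, d) for w in w_values for d in d_values]

# One full turn at expansion rate 2 doubles the radius.
print(spiral_radius(2.0, 2 * math.pi))  # → 2.0
grid = morphospace([1.5, 2.0, 3.0], [0.1, 0.3])
print(len(grid))  # → 6
```

The vacant-region argument in the text then amounts to asking why some cells of this grid never acquire a data point.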

Of great interest in this line of research is the cause of the impossible. Why, that is, have certain species never taken over a particular swath of morphological turf? What is it about this space that leaves it vacant? Skipping many details, some of the causes are intrinsic to the organisms (e.g., no genetic material or developmental programs for building wheels instead of legs) and some extrinsic (e.g., circles represent an impossible geometry or natural habitats would never support wheels).

What is exciting about these ideas is that they have a family resemblance to those that Noam Chomsky mapped out over 50 years ago in linguistics. That is, the biology that allows us to acquire a range of possible languages also puts constraints on this system, leaving in its wake a space of impossible languages: those that could never be acquired or, if acquired, would never remain stable. And the same moves can be translated into other domains of cultural expression, including music, morality, and mathematics. Are there musical scores that no one, not even John Cage, could dream up because the mind can't fathom certain frequencies and temporal arrangements? Are there evolvable moral systems that we will never see because our current social systems and environments make these toxic to our moral sensibilities? Regardless of how these questions are resolved, they open up new research opportunities, using methods that are only now being refined.

Here are some of my favorites, examples that reveal how we can extend the range of the possible, invading the terra incognita of the impossible. Thanks to work by neuroscientists such as Evan Balaban, we now know that we can combine the brain parts of different animals to create chimeras. For example, we can take a part of a quail's brain and pop it into a chicken, and when the young chick develops, it head-bobs like a quail and crows like a chicken.

Functionally, we have allowed the chicken to invade an empty space of behavior, something unimaginable — to a chicken, that is. Now let your imagination run wild. What would a chimpanzee do with the generative machinery that a human has when it is running computations in language, mathematics, and music? Could it imagine the previously unimaginable? What if we gave a genius like Einstein the key components that made Bach a different kind of genius? Could Einstein then imagine different dimensions of musicality? These very same neural manipulations are now even possible at the genetic level. Genetic engineering allows us to insert genes from one species into another, or manipulate the expressive range of a gene, jazzing it up or turning it off.

This revolutionary science is here, and it will forever change how we think. It will change what is possible, potentially remove what is possible but deleterious, and open our eyes to the previously impossible.

Roboticist, on leave from MIT; co-founder of iRobot; CTO and Chairman of Heartland Robotics; Author, Flesh and Machines


I am very sure that in my lifetime we will have a definitive answer to one question that has been debated, with little data, for hundreds of years. The answer as to whether or not there is life on Mars will either be a null result if negative, or it will profoundly impact science (and perhaps philosophy and religion) if positive.

As NASA's Administrator in the 1990s, Dan Goldin rightly reasoned that the biggest possible public-relations coup for his agency, and therefore for its continued budget, would be to discover unambiguous evidence of life somewhere in the Universe besides Earth.

One of the legacies we see today of that judgment is the almost weekly flow of new planets being discovered orbiting nearby stars. If life does exist outside our solar system, the easy bet is that it exists on planets, so we had better find planets to look at for direct evidence of life. We have been able to infer the existence of very large planets by carefully measuring star wobbles, and more recently we have detected smaller planets by measuring their transits, the way they dim a star as they cross between it and Earth. And just in the last months of 2008 we got our first direct images of planets orbiting other stars.
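The dimming involved is tiny but easy to quantify. A standard back-of-the-envelope estimate treats the fractional loss of starlight during a transit as the ratio of the planet's projected disk area to the star's; the specific radii below are illustrative round numbers, not measurements from any particular system.

```python
def transit_depth(r_planet, r_star):
    # Fractional dimming when the planet crosses the stellar disk:
    # the ratio of the two projected (circular) areas. Radii must be
    # in the same units.
    return (r_planet / r_star) ** 2

# A Jupiter-sized planet is roughly a tenth of a Sun-like star's
# radius, so it blocks about 1% of the light.
print(round(transit_depth(0.1, 1.0), 4))  # → 0.01
```

An Earth-sized planet, at roughly a hundredth of a solar radius, would dim the star by only about one part in ten thousand, which is why detecting small planets this way is so demanding.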

NASA has an ambitious program using the Hubble and Spitzer space telescopes and the 2016 launch of the Terrestrial Planet Finder to get higher and higher resolution images of extra-solar planets and look for the tell-tale chemical signatures of large-scale biochemical impact of Earth-like life on these planets. If we do indeed discover life elsewhere through these methods, it will have a large impact on our views of life, and will no doubt stimulate much creative thinking which will lead to new science about Earth-life. But it will take a long, long time to infer many details about the nature of that distant life and the detailed levels of similarities and differences to our own versions of life.

The second of Goldin's legacies is about life much closer to home. NASA has a strong, but somewhat endangered at this moment, direct exploration program for the surface of Mars. We have not yet found direct evidence of life there, but neither have the options for its existence narrowed appreciably. And we are very rapidly learning much more about likely locations for life; again just in the last months of 2008 we have discovered vast water glaciers with just a shallow covering of soil. We have many more exciting places to go look for life on Mars than we will be able to send probes over the next handful of years. If we do discover life on Mars (alive or extinct) one can be sure that there will be a flurry of missions to go and examine the living entities or the remnants in great detail.

There is a range of possible outcomes for what life might look like on Mars, and they may leave ambiguity as to whether its creation was a spontaneous event independent of that on Earth or whether there has been cross-contamination between our two planets, with only one genesis for life.

At one extreme, life on Mars could turn out to be DNA-based with exactly the same coding scheme for amino acids that all life on Earth uses. Or it could look like a precursor to Earth life, again sharing a compatible precursor encoding, perhaps an RNA-based life form, or even a PNA-based (peptide nucleic acid) form. Any of these outcomes would help us immensely in our understanding of the development of life from non-life, whether it happened on Mars or Earth.

Another set of possibilities for what we might discover would be one of these same forms with a different or incompatible encoding for amino acids. That would be a far more radical outcome. It would tell us two things. Life arose twice, spontaneously and separately, on two adjacent planets in one particular little solar system; the Universe must in that case be absolutely teeming with life. But more than that, it would say that the space of possible life biochemistries is probably rather narrow, so we would immediately know a lot about all those other life forms out there. And it would inform us about the probable spaces that we should be searching in our synthetic biology efforts to build new life forms.

The most mind-expanding outcome would be if life on Mars is not at all based on a genetic coding scheme of long chains of nucleotide bases that decode in triples to select an amino acid to be tacked onto a protein under construction. This would revolutionize our understanding of the possibilities for biology. It would provide us with a completely different form to study. It would open up the possibilities for what must be invariant in biology and what can be manipulated and engineered. It would completely change our understanding of ourselves and our Universe.
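The triplet decoding that all known Earth life shares, and that a truly alien Martian biochemistry might lack, can be sketched as a simple table lookup. Only a handful of the 64 codons of the standard genetic code are included here; the function is an illustrative reading frame, not a model of real translation machinery.

```python
# A few entries of the standard genetic code (RNA codons → amino acids).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP",
}

def translate(rna):
    # Read the sequence in non-overlapping triples, looking up the
    # amino acid each codon selects, and stop at a stop codon.
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE[rna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUAA"))  # → ['Met', 'Phe', 'Gly']
```

Finding Martian life that uses this exact table, an incompatible table, or no table at all would correspond to the three outcomes the essay distinguishes.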

Appleton Professor of Natural Philosophy, Dartmouth College; Author, The Prophet and the Astronomer: Apocalyptic Science and the End of the World


There is no question more fundamental to us than our mortality. We die and we know it. It is a terrifying, inexorable truth, one of the few absolute truths we can count on. Other noteworthy absolute truths tend to be mathematical, such as 2+2=4. Nothing horrified the French philosopher and mathematician Blaise Pascal more than "the eternal silence of these infinite spaces," the nothingness that surrounds the end of time and our ignorance of it.

For death is the end of time, the end of experience. Even if you are religious and believe in an afterlife, things certainly are different then: either you exist in a timeless Paradise (or Hell), or as some reincarnate soul. If you are not religious, death is the end of consciousness. And with consciousness goes the end of tasting a good meal, reading a good book, watching a pretty sunset, having sex, loving someone. Pretty grim in either case.

We only exist while people remember us. I think of my great-grandparents in nineteenth-century Ukraine. Who were they? No writings, no photos, nothing. Just their genes remain, diluted, in our current generation.

What to do? We spread our genes, write books and essays, prove theorems, invent family recipes, compose poems and symphonies, paint and sculpt, anything to create some sort of permanence, something to defy oblivion. Can modern science do better? Can we contemplate a future when we control mortality? I know I am being way too optimistic in considering this a possibility, but the temptation to speculate is far too great for me to pass on it. Maybe I'll live for 101 years like Irving Berlin, still having half of my life ahead of me.

I can think of two ways in which mortality can be tamed: one at the cellular level, the other through an integration of the body with the genetic and cognitive sciences and with cyber technology. I'm sure there are others. But first, let me make clear that, at least according to current science, mortality could never be completely stopped. Speculations aside, modern physics forbids time travel to the past. Unfortunately, we can't just jump into a time machine to relive our youth over and over again. (Sounds a bit horrifying, actually.)

Causality is an unforgiving mistress. Also, unless you are a vampire (and there were times in my past when I wished I were one) and thus beyond submitting to the laws of physics, you can't really escape the second law of thermodynamics: even an open system like the human body, able to interact with its external environment and absorb nutrients and energy from it, will slowly deteriorate. In time, we burn too much oxygen. We live and we rust. Herein lies life's cruel compromise: we need to eat to stay alive, but by eating we slowly kill ourselves.

At the cellular level, the mitochondria are the little engines that convert food into energy. Starving cells live longer. Apparently, proteins from the sirtuin family contribute to this process, interfering with normal apoptosis, the cellular self-destruction program.

Could the right dose of sirtuin or something else be found to significantly slow down aging in humans? Maybe, in a few decades. Still at the cellular level, genetic action may also interfere with the usual mitochondrial respiration. Reduced expression of the mclk1 gene has been shown to slow down aging in mice, and something similar was shown to happen in the worm C. elegans. The results suggest that the same molecular mechanism for aging is shared throughout the animal kingdom.

We can speculate that, say, by 2040, a combination of these two mechanisms may have allowed scientists to unlock the secrets of cellular aging. It's not the elixir of life that alchemists have dreamt of, but the average life span could possibly be increased to 125 years or even longer, a significant jump from the current US average of about 75 years. Of course, this would create a terrible burden on social security. But retirement age by then would be around 100 or so.

A second possibility is more daring and probably much harder to make a reality within my next 50 or so years of life. Combine human cloning with a mechanism to store all our memories in a giant database. Inject the clone of a certain age with the corresponding memories. Voilà! Will this clone be you? No one really knows. Certainly, just the clone without the memories won't do. We are what we remember.

To keep on living with the same identity, we must keep on remembering. Unless, of course, you don't like yourself and want to forget the past. So, assuming such a tremendous technological jump is even feasible, we could migrate to a new copy of ourselves when the current one gets old and rusty. Some colleagues are betting such technologies will become available within the century.

Although I'm an optimist by nature, I seriously doubt it. I probably will never know, and my colleagues won't either. However, there is no question that controlling death is the ultimate human dream, the one "thing that can change everything else." I leave the deeply transforming social and ethical upheaval this would cause to another essay. Meanwhile, I take advice from Mary Shelley's Frankenstein. Perhaps there are things we are truly unprepared for.

Philosopher, University of Oxford; Editor, Human Enhancement


Intelligence is a big deal. Humanity owes its dominant position on Earth not to any special strength of our muscles, nor any unusual sharpness of our teeth, but to the unique ingenuity of our brains. It is our brains that are responsible for the complex social organization and the accumulation of technical, economic, and scientific advances that, for better and worse, undergird modern civilization.

All our technological inventions, philosophical ideas, and scientific theories have gone through the birth canal of the human intellect. Arguably, human brain power is the chief rate-limiting factor in the development of human civilization.

Unlike the speed of light or the mass of the electron, human brain power is not an eternally fixed constant. Brains can be enhanced. And, in principle, machines can be made to process information as efficiently as — or more efficiently than — biological nervous systems.

There are multiple paths to greater intelligence. By "intelligence" I here refer to the panoply of cognitive capacities, including not just book-smarts but also creativity, social intuition, wisdom, etc.

Let's look first at how we might enhance our biological brains. There are of course the traditional means: education and training, and development of better methodologies and conceptual frameworks. Also, neurological development can be improved through better infant nutrition, reduced pollution, adequate sleep and exercise, and prevention of diseases that affect the brain. We can use biotech to enhance cognitive capacity, by developing pharmaceuticals that improve memory, concentration, and mental energy; or we could achieve these ends with genetic selection and genetic engineering. We can invent external aids to boost our effective intelligence — notepads, spreadsheets, visualization software.

We can also improve our collective intelligence. We can do so via norms and conventions — such as the norm against using ad hominem arguments in scientific discussions — and by improving epistemic institutions such as the scientific journal, anonymous peer review, and the patent system. We can increase humanity's joint problem-solving capacity by creating more people or by integrating a greater fraction of the world's existing population into productive endeavours, and we can develop better tools for communication and collaboration — various internet applications being recent examples.

Each of these ways of enhancing individual and collective human intelligence holds great promise. I think they ought to be vigorously pursued. Perhaps the smartest and wisest thing the human species could do would be to work on making itself smarter and wiser.

In the longer run, however, biological human brains might cease to be the predominant nexus of Earthly intelligence.

Machines will have several advantages: most obviously, faster processing speed — an artificial neuron can operate a million times faster than its biological counterpart. Machine intelligences may also have superior computational architectures and learning algorithms. These "qualitative" advantages, while harder to predict, may be even more important than the advantages in processing power and memory capacity. Furthermore, artificial intellects can be easily copied, and each new copy can — unlike humans — start life fully-fledged and endowed with all the knowledge accumulated by its predecessors. Given these considerations, it is possible that one day we may be able to create "superintelligence": a general intelligence that vastly outperforms the best human brains in every significant cognitive domain.

The spectrum of approaches to creating artificial (general) intelligence ranges from completely unnatural techniques, such as those used in good old-fashioned AI, to architectures modelled more closely on the human brain. The extreme of biological imitation is whole brain emulation, or "uploading". This approach would involve creating a very detailed 3d map of an actual brain — showing neurons, synaptic interconnections, and other relevant detail — by scanning slices of it and generating an image using computer software. Using computational models of how the basic elements operate, the whole brain could then be emulated on a sufficiently capacious computer.

The ultimate success of biology-inspired approaches seems more certain, since they can progress by piecemeal reverse-engineering of the one physical system already known to be capable of general intelligence, the brain. However, some unnatural or hybrid approach might well get there sooner.

It is difficult to predict how long it will take to develop human-level artificial general intelligence. The prospect does not seem imminent. But whether it will take a couple of decades, many decades, or centuries, is probably not something that we are currently in a position to know. We should acknowledge this uncertainty by assigning some non-trivial degree of credence to each of these possibilities.

However long it takes to get from here to roughly human-level machine intelligence, the step from there to superintelligence is likely to be much quicker. In one type of scenario, "the singularity hypothesis", some sufficiently advanced and easily modifiable machine intelligence (a "seed AI") applies its wits to create a smarter version of itself. This smarter version uses its greater intelligence to improve itself even further. The process is iterative, and each cycle is faster than its predecessor. The result is an intelligence explosion. Within some very short period of time — weeks, hours — radical superintelligence is attained.
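The "each cycle is faster than its predecessor" claim can be made concrete with a toy iteration. The numbers below are purely illustrative, not a prediction: if capability multiplies by a fixed factor each self-improvement cycle while the duration of each cycle shrinks by the same factor, then total elapsed time is a convergent geometric series even as capability grows without bound, which is what makes an "explosion" within a short period arithmetically coherent.

```python
def explosion_time(t0=1.0, gain=0.1, cycles=200):
    # Toy model of recursive self-improvement: capability grows by
    # (1 + gain) per cycle; each cycle takes (1 + gain) times less
    # wall-clock time than the one before.
    capability, t_cycle, elapsed = 1.0, t0, 0.0
    for _ in range(cycles):
        elapsed += t_cycle
        capability *= 1 + gain
        t_cycle /= 1 + gain
    return capability, elapsed

cap, t = explosion_time()
# Elapsed time approaches t0 * (1 + gain) / gain = 11.0 units but
# never exceeds it, while capability keeps compounding.
print(cap > 1e8, t < 11.0)  # → True True
```

Whether real machine intelligence would behave anything like this is exactly the open question; the sketch only shows that unbounded capability in bounded time is not self-contradictory.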

Whether abrupt and singular, or more gradual and multi-polar, the transition from human-level intelligence to superintelligence would be of pivotal significance. Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by superintelligence — advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else besides.

It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity's future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI's goal architecture) in such a way as to make it "human-friendly" — in the best possible interpretation of that term.

Neuroscientist, University of Washington School of Medicine; Author, Global Fever


Climate will change our worldview. That each of us will die someday ranks up there with 2+2=4 as one of the great certainties of all time. But we are accustomed to think of our civilization as perpetual, despite all of the history and prehistory that tells us that societies are fragile. The junior-sized slices of society, such as the church or the corporation, also assumed to outlive the participant, provide us with everyday reminders of bankruptcy. Climate change is starting to provide daily reminders, challenging us to devise ways to build in resiliency, an ability to bounce back when hit hard.

Climate may well force on us a major change in how science is distilled into major findings. There are many examples of the ponderous nature of big organizations and big projects. While I think that the IPCC deserves every bit of its hemi-Nobel, the emphasis on "certainty" and the time required for a thousand scientists and a hundred countries to reach unanimous agreement probably added up to a considerable delay in public awareness and political action.

Climate will change our ways of doing science, making some areas more like medicine with its combination of science and interventional activism, where delay to resolve uncertainties is often not an option. Few scientists are trained to think this way — and certainly not climate scientists, who are having to improvise as the window of interventional opportunity shrinks.

Climate will, at times, force a hiatus on doing science as usual, much like what happened during World War II when many academics laid aside their usual teaching and research interests to intensively focus on the war effort.

The big working models of fluid dynamics used to simulate ocean and atmospheric circulation will themselves be game-changing for other fields of dynamics, such as brain processing and decision making. They should be especially important as they are incorporated into economic research. Climate problems will cause economies to stagger, and we have just seen how fragile they are. Unlike 1997, when currency troubles were forced by a big El Niño and its associated fires in Southeast Asia, the events of 2008 show that, even without the boat being rocked by external events, our economy can partially crash just from internal instabilities, equivalent to trying to dance in a canoe. Many people will first notice climate change elsewhere via the economic collapse that announces it.

That something as local as a U.S. housing bubble could trigger a worldwide recession shows us just how much work we have to do in "earthquake retrofits" for our economy. Climate-proofing our financial flows will rely heavily on good models of economic dynamics, studies of how things can go badly wrong within a month. With such models, we can test candidates for economic crash barriers.

Finally, climate's challenges will change our perspective on the future. Long-term thinking can be dangerous if it causes us to neglect short-term hazards. A mid-century plan for emissions reduction will be worthless if the Amazon rain forest burns down during the next El Niño.

Curator, TED Conference


Today when we think of the world's teeming billions of humans, we tend to think: overpopulation, poverty, disease, instability, environmental destruction. They are the cause of most of the planet's problems.

What if that were to change? What if the average human were able to contribute more than they consume? To add more than subtract? Think of the world as if each person maintains a balance sheet. On the negative side are the resources they consume without replacing; on the positive side are the contributions they make to the planet in the form of the resources they produce, the lasting artifacts-of-value they build, and the ideas and technologies that might create a better future for their family, their community, and for the planet as a whole. Our whole future hangs on whether the sum of those balance sheets can turn positive.

What might make that possible? One key reason for hope is that so far we have barely scraped the surface of human potential. Throughout history, the vast majority of humans have not been the people they could have been.

Take this simple thought experiment. Pick your favorite scientist, mathematician or cultural hero. Now imagine that instead of being born when and where they were, they had instead been born with the same in-built-but-unlocked abilities in a typical poverty-stricken village in, say, the France of 1200 or the Ethiopia of 1980. Would they have made the contribution they made? Of course not. They would never have received the education and encouragement it took to achieve what they did. Instead they would have simply lived out a life of poverty, with perhaps an occasional yearning that there must be a better way.

Conversely, an unknown but vast number of those grinding out a living today have the potential to be world-changers... if only we could find a way of unlocking that potential.

Two ingredients might be enough to do that. Knowledge and inspiration. If you learn of ideas that could transform your life, and you feel the inspiration necessary to act on that knowledge, there's a real chance your life will indeed be transformed.

There are many scary things about today's world. But one that is truly thrilling is that the means of spreading both knowledge and inspiration have never been greater. Five years ago, an amazing teacher or professor with the ability to truly catalyze the lives of his or her students could realistically hope to impact maybe 100 people each year. Today that same teacher can have their words spread on video to millions of eager students. There are already numerous examples of powerful talks that have spread virally to massive Internet audiences.

Driving this unexpected phenomenon is the fact that the physical cost of distributing a recorded talk or lecture anywhere in the world via the internet has fallen effectively to zero. This has happened with breathtaking speed and its implications are not yet widely understood. But it is surely capable of transforming global education.

For one thing, the realization that today's best teachers can become global celebrities is going to boost the caliber of those who teach. For the first time in many years it's possible to imagine ambitious, brilliant 18-year-olds putting 'teacher' at the top of their career choice list. Indeed the very definition of "great teacher" will expand, as numerous others outside the profession with the ability to communicate important ideas find a new incentive to make that talent available to the world. Additionally every existing teacher can greatly amplify their own abilities by inviting into their classroom, on video, the world's greatest scientists, visionaries and tutors. (Can a teacher inspire over video? Absolutely. We hear jaw-dropping stories of this every day.)

Now think about this from the pupils' perspective. In the past, everyone's success has depended on whether they were lucky enough to have a great mentor or teacher in their neighborhood. The vast majority have not been fortunate. But a young girl born in Africa today will probably have access in 10 years' time to a cell phone with a high-resolution screen, a web connection, and more power than the computer you own today. We can imagine her obtaining face-to-face insight and encouragement from her choice of the world's great teachers. She will get a chance to be what she can be. And she might just end up being the person who saves the planet for our grandchildren.

Independent Researcher; Author, Dinosaurs of the Air


Predicting what has the potential to change everything — really change everything — in this century is not difficult. What I cannot know is whether I will live to see it, the data needed to reliably calculate the span of my mind's existence being insufficient.

According to the current norm I can expect to last another third of century. Perhaps more if I match my grandmother's life span — born in a Mormon frontier town the same year Butch Cassidy, the Sundance Kid and Etta Place sailed for Argentina, she happily celebrated her 100th birthday in 2001. But my existence may exceed the natural ceiling. Modern medicine has maximized life spans by merely inhibiting premature death. Sooner or later that will become passé as advancing technology renders death optional.

Evolution, whether biological or technological, has been speeding up over time as the ability to acquire, process and exploit information builds upon itself. Human minds adapted to comprehend arithmetic growth tend to underestimate exponential future progress. Born two years before the Wrights' first flight, my young grandmother never imagined she would cross continents and oceans in near-sonic flying machines. Even out-of-the-box thinkers did not predict the hyperexpansion of computing power over the last half century. It looks like medicine is about to undergo a similar explosion. Extracellular matrix powder derived from pig bladders can regrow a chopped-off finger with a brand-new tip complete with nail. Why not regenerate entire human arms and legs, and organs?

DARPA-funded researchers predict that we may soon be "replacing damaged and diseased body parts at will, perhaps indefinitely." Medical corporations foresee a gold mine in repairing and replacing defective organs using cells from the patient's own body (avoiding the whole rejection problem). If assorted body parts ravaged by age can be reconstructed with tissues biologically as young and healthy as those of children, then those with the will and resources will reconstruct their entire bodies.

Even better is stopping and then reversing the very process of aging. Humans, like parrots, live exceptionally long lives because we are genetically endowed with unusually good cellular repair mechanisms for correcting the damage created by free radicals. Lured by the enormous market potential, researchers are developing drugs that tweak genes to further upgrade the human repair system. Other pharmaceuticals are expected to mimic the life extension that appears to stem from the body's protective reaction to suppressed caloric intake. It is quite possible, albeit not certain, that middle-aged humans will be able to use these methods to extend their lives indefinitely. But keeping our obsolescing primate bodies and brains up and running for centuries and millennia will not be the Big Show.

The human brain and the mind it generates have not undergone a major upgrade since the Pleistocene. And they violate the basic safety rule of information processing — that it is necessary to back up the data. Something more sophisticated and redundant is required. With computing power doubling every year or two, cheap personal computers should match the raw processing power of the human brain in a couple of decades, and then leave it in the dust.
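The back-of-the-envelope arithmetic behind "a couple of decades" can be sketched as follows. The PC and brain throughput figures and the doubling period are illustrative assumptions, not measurements:

```python
import math

# All three figures are assumptions for illustration, not measurements:
pc_flops = 1e11          # rough throughput of a 2009-era desktop PC, in FLOPS
brain_flops = 1e16       # one common (and contested) estimate for the brain
doubling_years = 1.5     # Moore's-law-style doubling period

doublings = math.log2(brain_flops / pc_flops)
years_to_parity = doublings * doubling_years
print(f"{doublings:.0f} doublings, roughly {years_to_parity:.0f} years to parity")
```

Under these assumptions parity arrives in about 25 years; change any input and the answer shifts, which is exactly why such forecasts are offered in decades rather than dates.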

If so, it should be possible to use alternative, technological means to produce conscious thought. Efforts are already underway to replace damaged brain parts such as the hippocampus with hypercomputer implants. If and when the initial medical imperative is met, elective implants will undoubtedly be used to upgrade normal brain operations. As the fast-evolving devices improve and begin to outperform the original brain, it will make less and less sense to continue doing one's thinking in the old biological clunker, and formerly human minds will become entirely artificial as they move into ultrasophisticated, dispersed robot systems.

Assuming that the above developments are practical, technological progress will not merely improve the human condition; it should replace it. The conceit that humans in anything like their present form will be able to compete in a world of immortal superminds with unlimited intellectual capacity is naïve; there simply will not be much for people to do. Do not for a minute imagine a society of crude Terminators, or Datas that crave to be as human as possible. Future robots will be devices of subtle sophistication and sensitivity that will expose humans as the big-brained apes we truly are. The logic predicts that most humans will choose to become robotic.

Stopping the CyberRevolution is probably not possible: the growing knowledge base should make the production of superintelligent minds less difficult and much faster than replicating, growing and educating human beings. Trying to ban the technology will work about as well as the war on drugs. The replacement of humanity with a more advanced system will be yet another evolutionary event on the scale of the Cambrian explosion, the Permian and K/T extinctions that respectively made way for and killed off the nonavian dinosaurs, and the advent of humans and the industrial age.

The scenario herein is not radical or particularly speculative; it seems so only because it has not happened yet. If the robotic civilization comes to pass it will quickly become mundane to us. The ability of cognitive minds to adjust is endless.

Here's a pleasant secondary effect — supernaturalistic religion will evaporate as ordinary minds become as powerful as gods. What will the cybersociety be like? I hardly have a clue. How much of this will I live to see? I'll find out.

Science Historian; Author, Darwin Among the Machines


The detection of extraterrestrial life, extraterrestrial intelligence, or extraterrestrial technology (there’s a difference) will change everything. The game could be changed completely by an extraterrestrial presence discovered (or perhaps not discovered) here on earth.

SETI@home, our massively distributed search for extraterrestrial communication, now links some five million terrestrial computers to a growing array of radio telescopes, delivering a collective 500 teraflops of fast Fourier transforms representing a cumulative two million years of individual processing time. Not a word (or even a picture) so far. However, as Marvin Minsky warned in 1970: "Instead of sending a picture of a cat, there is one area in which you can send the cat itself."

Life, assuming it exists elsewhere in the universe, will have had time to explore an unfathomable diversity of forms. Those best able to survive the passage of time, adapt to changing environments, and migrate unscathed across interstellar distances will become the most widespread. Life forms that assume digital representation, for all or part of their life cycle, will not only be able to send messages at the speed of light, they will be able to send themselves.

Digital organisms can be propagated economically even with extremely low probability of finding a host environment in which to germinate and grow. If the kernel is intercepted by a host that has discovered digital computing (whose ability to translate between sequence and structure, as Alan Turing and John von Neumann demonstrated, is as close to a universal common denominator as life and intelligence running on different platforms may be able to get) it has a chance. If we discovered such a kernel, we would immediately replicate it widely. Laboratories all over the planet would begin attempting to decode it, eventually compiling the coded sequence — intentionally or inadvertently — to utilize our local resources, the way a virus is allocated privileges within a host cell. The read-write privileges granted to digital organisms already include material technology, human minds, and, increasingly, biology itself. (What, exactly, are those screen savers doing at Dr. Venter’s laboratory during the night?)

According to Edward Teller, Enrico Fermi asked "Where is everybody?" at Los Alamos in 1950, when the subject of extraterrestrial beings came up over lunch. The answer to Fermi’s Paradox could be "We’ve arrived! Now help us unpack!" Fifty years later, over lunch at Stanford, I asked a 91-year-old Edward Teller (holding a wooden staff at his side like an Old Testament prophet) how Fermi’s question was holding up.

"Let me ask you," Teller interjected in his thick Hungarian accent. "Are you uninterested in extraterrestrial intelligence? Obviously not. If you are interested, what would you look for?"

"There's all sorts of things you can look for," I answered.  "But I think the thing not to look for is some intelligible signal... Any civilization that is doing useful communication, any efficient transmission of information will be encoded, so it won't be intelligible to us — it will look like noise."

"Where would you look for that?" asked Teller.

"I don't know..."

"I do!" 


"Globular clusters!" answered Teller.  "We cannot get in touch with anybody else because they choose to be so far away from us. In globular clusters, it is much easier for people at different places to get together.  And if there is interstellar communication at all, it must be in the globular clusters."

"That seems reasonable," I agreed. "My own personal theory is that extraterrestrial life could be here already... and how would we necessarily know? If there is life in the universe, the form of life that will prove to be most successful at propagating itself will be digital life; it will adopt a form that is independent of the local chemistry, and migrate from one place to another as an electromagnetic signal, as long as there's a digital world — a civilization that has discovered the Universal Turing Machine — for it to colonize when it gets there.  And that's why von Neumann and you other Martians got us to build all these computers, to create a home for this kind of life."

There was a long, drawn-out pause. "Look," Teller finally said, lowering his voice, "may I suggest that instead of explaining this, which would be hard... you write a science fiction book about it."

"Probably someone has," I said.

"Probably," answered Teller, "someone has not."


(the conversation with Edward Teller took place on 12 April 1999)


Publisher of Skeptic magazine, monthly columnist for Scientific American; Author, The Mind of the Market


It is January, named for the Roman god Janus (from the Latin for door), the doorway to the new year, and yet Janus-faced in looking to the past to forecast the future. This January, 2009, in particular, finds us at a crisis tipping point both economically and environmentally. If ever we needed to look to the past to save our future, it is now. In particular, we need to do two things: (1) stop the implosion of the economy and enable markets to function once again both freely and fairly, and (2) make the transition from nonrenewable fossil fuels as the primary source of our energy to renewable energy sources that will allow us to flourish into the future. Failure to make these transformations will doom us to the endless tribal political machinations and economic conflicts that have plagued civilization for millennia. We need to make the transition to Civilization 1.0. Let me explain.

In a 1964 article on searching for extraterrestrial civilizations, the Soviet astronomer Nikolai Kardashev suggested using radio telescopes to detect energy signals from other solar systems in which there might be civilizations of three levels of advancement: Type 1 can harness all of the energy of its home planet; Type 2 can harvest all of the power of its sun; and Type 3 can master the energy from its entire galaxy.

Based on our energy efficiency at the time, in 1973 the astronomer Carl Sagan estimated that Earth represented a Type 0.7 civilization on a Type 0 to Type 1 scale. (More current assessments put us at 0.72.) As the Kardashevian scale is logarithmic — each 0.1 step up the scale requires roughly a tenfold leap in power production — fossil fuels won't get us there. Renewable sources such as solar, wind and geothermal are a good start, and coupled to nuclear power (perhaps even nuclear fusion, instead of the fission reactors we have now) they could eventually get us to Civilization 1.0.
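Sagan's interpolation of Kardashev's types can be written as K = (log10 P − 6) / 10, with P in watts. A minimal sketch, using rough illustrative power figures rather than authoritative measurements:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10 P - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Rough illustrative figures, not authoritative measurements:
print(round(kardashev(2e13), 2))   # present-day humanity, ~2e13 W
print(round(kardashev(1e16), 2))   # Type 1.0: the full planetary energy budget
print(round(kardashev(4e26), 2))   # roughly Type 2: the Sun's total output
```

Because each 0.1 increment on this scale corresponds to a tenfold jump in power production, climbing from 0.72 to 1.0 demands vastly more energy than fossil fuels can supply.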

We are close. Taking our Janus-faced look to the past in order to see the future, let’s quickly review the history of humanity on its climb to become a Civilization 1.0:

Type 0.1: Fluid groups of hominids living in Africa. Technology consists of primitive stone tools. Intra-group conflicts are resolved through dominance hierarchy, and between-group violence is common.

Type 0.2: Bands of roaming hunter-gatherers that form kinship groups, with a mostly horizontal political system and egalitarian economy.

Type 0.3: Tribes of individuals linked through kinship but with a more settled and agrarian lifestyle. The beginnings of a political hierarchy and a primitive economic division of labor.

Type 0.4: Chiefdoms consisting of a coalition of tribes into a single hierarchical political unit with a dominant leader at the top, and with the beginnings of significant economic inequalities and a division of labor in which lower-class members produce food and other products consumed by non-producing upper-class members.

Type 0.5: The state as a political coalition with jurisdiction over a well-defined geographical territory and its corresponding inhabitants, with a mercantile economy that seeks a favorable balance of trade in a win-lose game against other states.

Type 0.6: Empires extend their control over peoples who are not culturally, ethnically or geographically within their normal jurisdiction, with a goal of economic dominance over rival empires.

Type 0.7: Democracies that divide power over several institutions, which are run by elected officials voted for by some citizens. The beginnings of a market economy.

Type 0.8: Liberal democracies that give the vote to all citizens. Markets that begin to embrace a nonzero, win-win economic game through free trade with other states.

Type 0.9: Democratic capitalism, the blending of liberal democracy and free markets, now spreading across the globe through democratic movements in developing nations and broad trading blocs such as the European Union.

Type 1.0: Globalism that includes worldwide wireless Internet access with all knowledge digitized and available to everyone. A global economy with free markets in which anyone can trade with anyone else without interference from states or governments. A planet where all states are democracies in which everyone has the franchise.

Looking from this past toward the future, we can see that the forces at work that could prevent us from reaching Civilization 1.0 are primarily political and economic, not technological. The resistance by nondemocratic states to turning power over to the people is considerable, especially in theocracies whose leaders would prefer we all revert to Type 0.4 chiefdoms. The opposition toward a global economy is substantial, even in the industrialized West, where economic tribalism still dominates the thinking of most people.

The game-changing scientific idea is the combination of energy and economics — the development of renewable energy sources made cheap and available to everyone everywhere on the planet by allowing anyone to trade in these game-changing technologies with anyone else. That will change everything.

Chair of Languages, Literatures, & Cultures, Professor of Linguistics and Anthropology, Illinois State University; Author, Don't Sleep, There Are Snakes


"We should really not be studying sentences; we should not be studying language — we should be studying people" Victor Yngve

Communication is the key to cooperation. Although cross-cultural communication for the masses requires translation techniques that exceed our current capabilities, the groundwork of this technology has already been laid and many of us will live to see a revolution in automatic translation that will change everything about cooperation and communication across the world.

This goal was conceived in the late 1940s in a famous memorandum by the Rockefeller Foundation scientist Warren Weaver, in which he suggested the possibility of machine translation and tied its likelihood to four proposals, still controversial today: that there was a common logic to languages; that there were likely to be language universals; that immediate context could be understood and linked to the translation of individual sentences; and that cryptographic methods developed in World War II would apply to language translation. Weaver's proposals got off the ground financially in the early 1950s as the US military invested heavily in linguistics and machine translation across the US, with particular emphasis on the research of Victor Yngve's team at the Massachusetts Institute of Technology's Research Laboratory of Electronics (a team that included the young Noam Chomsky).

Yngve, like Weaver, wanted to contribute to international understanding by applying the methods of the then incipient field that he helped found, computational linguistics, to communication, especially machine translation. Early innovators in this area also included Claude Shannon at Bell Labs and Yehoshua Bar-Hillel who preceded Yngve at MIT before returning to Israel. Shannon was arguably the inventor of the concept of information as an entity that could be scientifically studied and Bar-Hillel was the first person to work full-time on machine translation, beginning the program that Yngve inherited at MIT.

This project was challenged early on, however, by the work of Chomsky, from within Yngve's own lab. Chomsky's conclusions about different grammar types and their relative generative power convinced people that grammars of natural languages were not amenable to machine translation as it was practiced at the time, leading to a slowdown in, and reduced enthusiasm for, computationally based translation.

As we have subsequently learned, however, the principal problem faced in machine-translation is not the formalization of grammar per se, but the inability of any formalization known, including Chomsky's, to integrate context and culture (semantics and pragmatics in particular) into a model of language appropriate for translation. Without this integration, mechanical translation from one language to another is not possible.

Still, mechanical procedures able to translate most contents from any source language into accurate, idiomatically natural constructions of any target language seem less utopian to us now because of major breakthroughs that have led to several programs in machine translation (e.g. the Language Technologies Institute at Carnegie Mellon University). I believe that we will see within our lifetime the convergence of developments in artificial intelligence, knowledge representation, statistical grammar theories, and an emerging field — computational anthropology (informatics-based analysis and modeling of cultural values) — that will facilitate powerful new forms of machine translation to match the dreams of the early pioneers of computation.

The conceptual breakthroughs necessary for universal machine translation will also require contributions from Construction Grammars, which view language as a set of conventional signs (varieties of the idea that the building blocks of grammar are not rules or formal constraints, but conventional phrase and word forms that combine cultural values and grammatical principles), rather than a list of formal properties. They will have to look at differences in the encoding of language and culture across communities, rather than trying to find a 'universal grammar' that unites all languages.

At least some of the steps are easy enough to imagine. First, we come up with a standard format for writing statistically based Construction Grammars of any language, a format that displays the connections between constructions, culture, and local context (such as the other likely words in the sentence or other likely sentences in the paragraph in which the construction appears). This format might be as simple as a flowchart or a list. Second, we develop a method for encoding context and values. For example, what are the values associated with words; what are the values associated with certain idioms; what are the values associated with the ways in which ideas are expressed? The latter can be seen in the notion of sentence complexity, as in the rejection by the Pirahã of the Amazon (among others) of recursive structures in syntax because they violate principles of information rate and of new versus old information that are very important in Pirahã culture. Third, we establish lists of cultural values and most common contexts and how these link to individual constructions. Automating the procedure for discovering or enumerating these links will take us to the threshold of automatic translation in the original sense.
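As a purely hypothetical illustration of the "standard format" in the first step, a construction could be recorded as a plain data structure linking form, cultural values, and likely context. Every field name and number here is invented for the sketch:

```python
# Hypothetical entry in a statistically annotated Construction Grammar.
# All field names and figures below are invented for illustration.
construction = {
    "form": "What's X doing Y?",                # conventional phrase template
    "gloss": "query expressing surprise or disapproval",
    "frequency": 0.00042,                        # made-up corpus-relative rate
    "cultural_values": ["indirectness", "mild complaint"],
    "likely_context": {
        "neighboring_words": ["here", "still", "again"],
        "register": "informal speech",
    },
}

# The third step links values and contexts back to constructions,
# so a translator can search by cultural value rather than by form:
index_by_value: dict[str, list[str]] = {}
for value in construction["cultural_values"]:
    index_by_value.setdefault(value, []).append(construction["form"])

print(index_by_value["indirectness"])
```

The point of the sketch is only that such a format need not be exotic: a list of form-value-context records, indexed both ways, would already support the automated linking the third step calls for.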

Information and its exchange form the soul of human cultures. So just imagine the possible change in our perceptions of 'others' when we are able to type in a story and have it automatically and idiomatically translated with 100% accuracy into any language for which we have a grammar of constructions. Imagine speaking into a microphone and having your words come out in the language of your audience, heard and understood naturally. Imagine anyone being able to take a course in any language from any university in the world over the internet or in person, without having to first learn the language of the instructor.

These will always be unreachable goals to some degree. It seems unlikely, for example, that all grammars and cultures are even capable of expressing everything from all languages. However, we are developing tools that will dramatically narrow the gaps and help us decide where and how we can communicate particular ideas cross-culturally. Success at machine translation might not end all the world's sociocultural or political tensions, but it won't hurt. One struggles to think of a greater contribution to world cooperation than progress to universal communication, enabling all and sundry to communicate with nearly all and sundry. Babel means 'the gate of god'. In the Bible it is about the origin of world competition and suspicion. As humans approached the entrance to divine power by means of their universal cooperation via universal communication, so the biblical story goes, language diversity was introduced to destroy our unity and deprive us of our full potential.

But automated, near-universal translation is coming. And it will change everything.
