2009

WHAT WILL CHANGE EVERYTHING?



W. DANIEL HILLIS
Physicist, Computer Scientist; Chairman, Applied Minds, Inc.; Author, The Pattern on the Stone

A FOREBRAIN FOR THE WORLD MIND

In 1851, Nathaniel Hawthorne wrote, "Is it a fact — or have I dreamt it — that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather, the round globe is a vast head, a brain, instinct with intelligence!" He was writing about the telegraph, but today we make essentially the same observation about the Internet.

One might suppose that, with all its zillions of transistors and billions of human minds, the world brain would be thinking some pretty profound thoughts. There is little evidence that this is so. Today's Internet functions mostly as a giant communications and storage system, accessed by individual humans. Although much of human knowledge is represented in some form within the machine, it is not yet represented in a form that is particularly meaningful to the machine. For the most part, the Internet knows no more about the information it handles than the telephone system knows about the conversations that take place over its lines. Most of those zillions of transistors are either doing something very trivial or nothing at all, and most of those billions of human minds are doing their own thing.

If there is such a thing as a world mind today, then its thoughts are primarily about commerce. It is the "invisible hand" of Adam Smith, deciding the prices, allocating the capital. Its brain is composed not only of the human buyers and sellers, but also of the trading programs on Wall Street and of the economic models of the central banks. The wires "vibrating thousands of miles in a breathless point of time" are not just carrying messages between human minds, they are participating in the decisions of the world mind as a whole. This unconscious system is the world's hindbrain.

I call this the hindbrain because it is performing unconscious functions necessary to the organism's own survival, functions that are so primitive that they predate development of the brain. Included in this hindbrain are the functions of preference and attention that create celebrity, popularity and fashion, all fundamental to the operation of human society. This hindbrain is ancient. Although it has been supercharged by technology, growing in speed and capacity, it has grown little in sophistication. This global hindbrain is subject to mood swings and misjudgments, leading to economic depressions, panics, witch-hunts, and fads. It can be influenced by propaganda and by advertising. It is easily misled. As vital as the hindbrain is for survival, it is not very bright.

What the world mind really needs is a forebrain, with conscious goals, access to explicit knowledge, and the ability to reason and plan. A world forebrain would need the capacity to perceive collectively, to decide collectively, and to act collectively. Of these three functions, our ability to act collectively is the most developed.

For thousands of years we have understood methods for breaking a goal into sub-goals that can be accomplished by separate teams, and for recursively breaking them down again and again until they can be accomplished by individuals. This management by hierarchy scales well. I can imagine that the construction of the pyramids was a celebration of its discovery. The hierarchical teams that built these monuments were an extension of the pharaoh's body, the pyramid a dramatic demonstration of his power to coordinate the efforts of many. Pyramid builders had to keep their direct reports within shouting distance, but electronic communication has allowed us to extend our virtual bodies, literally corporations, to a global scale. The Internet has even allowed such composite action to organize itself around an established goal, without the pharaoh. The Wikipedia is our Great Pyramid.
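
The scaling argument can be made concrete. Here is a minimal sketch, in Python, of why management by hierarchy scales: if no one coordinates more than a fixed handful of direct reports, the number of levels grows only logarithmically with the size of the workforce. The span-of-control figure of 10 is an arbitrary assumption for illustration, not something from the essay.

```python
import math

def hierarchy_depth(workers: int, span_of_control: int) -> int:
    """Levels of coordination needed so that no one manages
    more than `span_of_control` direct reports."""
    depth = 0
    groups = workers
    while groups > 1:
        groups = math.ceil(groups / span_of_control)
        depth += 1
    return depth

# A leader keeping reports "within shouting distance" (say, 10 each)
# can direct a vast workforce through only a handful of levels.
for n in (100, 10_000, 100_000, 1_000_000):
    print(n, "workers ->", hierarchy_depth(n, 10), "levels")
```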

The collective perception of the world mind is also relatively well developed. The most important recent innovations have been search and recommendation engines, which combine the inputs of humans with machine algorithms to produce a useful result. This is another area where scale helps. Many eyes and many judgments are combined into a collective perception that is beyond the scope of any individual. The weak point is that the result of all this collective perception is just a recommendation list. For the world mind to truly perceive, it will need a way of sharing more general forms of knowledge, in a format that can be understood by both humans and machines. Various new companies are beginning to do just that.

What is still missing is the ability for a group of people (or people and machines) to make collective decisions with intelligence greater than the individual. This can sometimes be accomplished in small groups through conversation, but the method does not scale well. Generally speaking, technology has made the conversation larger, but not smarter. For large groups, the state-of-the-art method for collective decision-making is still the vote. Voting only works to the degree that, on average, each voter is able to individually determine the right decision. This is not good enough. We need an intelligence that will scale with the size of our problems.
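
The claim that voting only works when each voter tends to get the answer right on their own is essentially Condorcet's jury-theorem observation, and a toy simulation makes it vivid: with independent voters slightly better than chance, majority rule becomes nearly infallible as the group grows, while voters slightly worse than chance make the crowd almost always wrong. A hedged sketch in Python; the probabilities and group sizes are arbitrary illustrations.

```python
import random

def majority_correct(p_individual: float, voters: int, trials: int = 5_000) -> float:
    """Estimate the probability that a simple majority picks the right
    answer when each voter is independently correct with probability p."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_individual for _ in range(voters))
        if correct_votes > voters / 2:
            wins += 1
    return wins / trials

for p in (0.55, 0.45):           # slightly better / worse than chance
    for n in (11, 101, 1001):    # growing electorate
        print(f"p={p}, voters={n}: majority right ~{majority_correct(p, n):.2f}")
```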

So this is the development that will make a difference: a method for groups of people and machines to work together to make decisions in a way that takes advantage of scale. With such a scalable method for collective decision-making, our zillions of transistors and billions of brains can be used to advantage, giving the collective mind a way to focus our collective actions. Given this, we will finally have access to intelligence greater than our own. The world mind will finally have a forebrain, and this will change everything.


MARCO IACOBONI
Neuroscientist, UCLA Brain Mapping Center; author, Mirroring People

IMMORTAL COGNITION, BOUNDLESS HAPPINESS

Life expectancy has dramatically increased over the last 100 years. At the beginning of the last century, the average life expectancy was 30-40 years, while the current world average life expectancy is almost 70 years. Unfortunately there are still great variations in life expectancy, between countries (guess what? people living in more developed countries live longer...) and within countries (guess what? wealthier people live longer...). Today, people in the wealthier strata of developing countries can expect to live more than 80 years. While the disparity in life expectancy is a policy issue (not discussed here), the overall dramatic increase in life expectancy brings out some interesting science issues. How can we fight the cognitive decline associated with aging (a side effect of the nice fact that we live longer)? How can we fix mood disorders often associated with a general cognitive decline? The real game changer will be the immortal cognition (well, not really, but close enough) and boundless happiness (ok, again, not really, but close enough) provided by painless brain stimulation.

Today, we have two main ways of stimulating the brain painlessly and noninvasively: Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (TDCS). TMS stimulates the brain by inducing local magnetic fields over the scalp (which in turn induce electric currents in the brain), whereas TDCS uses weak direct currents. There are many different ways of stimulating the brain, and obviously many brain areas can be stimulated. We will be able to significantly delay cognitive decline and improve mood by stimulating brain areas that are collectively called 'association cortices.' Association cortices connect many other brain areas (their name comes from the fact that they associate many brain areas in neural networks). There are two main types of association cortices, in the front of the brain (called 'anterior' association cortices) and in the back of the brain (called 'posterior' association cortices). TMS has already been used experimentally for some years to treat depression by stimulating the anterior association cortices. The results are so encouraging that TMS is now an approved treatment for depression in many countries (the FDA approved it for the United States in October 2008). I believe we will see in the next two decades a great improvement in our ability to stimulate the brain to treat mood disorders. We will improve the hardware and the 'stimulation protocols' (how frequently we stimulate and for how long). We will also improve our ability to target specific parts of the anterior association cortices using brain imaging. Each brain is slightly different, in both anatomy and physiological responses. Brain stimulation coupled with brain imaging will allow us to design treatments tailored to specific individuals, resulting in highly effective therapies.

The posterior association cortices (the ones in the back of the brain) are the first ones affected by Alzheimer's disease, a degenerative brain disorder affecting higher cognitive functions, for instance memory. The posterior association cortices also have reduced activity in the less dramatic cognitive decline that is often associated with aging. Brain stimulation will facilitate the activity of the posterior association cortices in the elderly by inducing synchronized firing of many neurons at specific frequencies. Synchronous neuronal firing at certain frequencies is thought to be critical for perceptual and cognitive processes. Our aging brain will get its synchronized neuronal firing going thanks to brain stimulation.

A final touch (a critical one, I would say) will be given by our ability to induce specific brain states during brain stimulation. The brain never rests, obviously. Brain stimulation always stimulates the brain in a given state. The effect of brain stimulation can be thought of as the interaction between the stimulation itself and the state of the brain while it is stimulated. Stimulating the brain while inducing specific brain states in the stimulated subject (for instance, playing word association games that require the subject to associate words together, or showing the subject stimuli that are more easily associated with happiness) will result in much more effective treatments of cognitive decline and mood disorders.

This will be a real game changer. If my prediction is correct, we will also see dramatic changes in policy. People won't tolerate being excluded from the beneficial effects of brain stimulation. Right now, people don't easily grasp insidious environmental factors or subtle differences in health care that result in dramatic individual differences in the long term (approximately ten years of life between the wealthy and the poor living in the same country), but they will immediately grasp the beneficial effects of brain stimulation, and will demand not to be excluded anymore. That's also a game changer.


LISA RANDALL
Physicist, Harvard University; author, Warped Passages

COORDINATED AND EXPANDED COMPUTATIONAL POWER WILL CHANGE SCIENCE

Predicting the future is notoriously difficult. Towards the end of the 19th century, the famous physicist William Thomson, more commonly known as Lord Kelvin, proclaimed the end of physics. Despite the silliness of declaring a field moribund, particularly one that had been subject to so many important developments not so long before Thomson's ill-fated pronouncement, you can't really fault the poor devil for not foreseeing quantum mechanics and relativity and the revolutionary impact they would have. Seriously, how could anyone, even someone as smart as Lord Kelvin, have predicted quantum mechanics?

So I'm not going to even try. I'll stick to a safer (and more prosaic) prediction that has already begun its realization. Increases in computing power, in part through shared computational resources, are likely to transform the nature of science and further revolutionize the spread of information. Individual computing power might increase according to Moore's Law but a more discrete jump in computational power should also result from clever uses of computers in concert.

Already we have seen the SETI@home project allow a large-scale search for extraterrestrial signals that would not be possible with any individual computer. Protein folding is currently being studied through a similar distributed computational effort.

Currently CERN is developing "grid computing" to allow the increase in computational power that will be required to analyze the enormous amount of Large Hadron Collider (LHC) data. Though the grid system would be hard pressed to match the transformative power of the World Wide Web (also developed at CERN), the jump in computational power made possible by coordinating processors the way data is currently coordinated could have enormous transformational consequences.
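
The pattern behind SETI@home, the protein-folding projects, and the LHC grid is the same: split a large dataset into pieces, farm the pieces out to many processors, and combine the partial answers. A minimal single-machine sketch of that split/analyze/combine pattern in Python; the toy "analysis" and the eight-way split are arbitrary stand-ins for real grid middleware.

```python
from multiprocessing import Pool

def analyze_chunk(chunk):
    """Stand-in for real work, e.g. searching one slice of data
    for a signal. Here we just count 'interesting' values."""
    return sum(1 for x in chunk if x % 97 == 0)

if __name__ == "__main__":
    data = list(range(10_000_000))
    chunks = [data[i::8] for i in range(8)]      # split the job 8 ways
    with Pool(processes=8) as pool:
        partial_results = pool.map(analyze_chunk, chunks)
    print("total hits:", sum(partial_results))   # combine the partial answers
```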

Modern science has two different streams that face very different challenges. Physicists and biologists today, for example, ask very different sorts of questions and use somewhat different methods. Traditionally scientists have searched for the smallest and most basic components from which the behavior of large complex systems can be derived. This mode has been extremely successful in understanding and interpreting the physical world.

For example, it has also helped us understand the operation of the human body. I am betting this reductionist approach will continue to work for some fields of science such as particle physics.

However, understanding some of the complex systems that modern science now studies is unlikely to be so "simple". Although the LHC's search for more fundamental building blocks is likely to be rewarded with deeper understanding of the substructure of matter, it is not obvious that the most basic structure of biological systems will be understood with as straightforward a reductionist approach.

Very likely individual elements will work in conjunction with their environment or in collaboration with other system elements to produce emergent effects. Already we have learned that the genetic code is not sufficient to predict behavior, but that genes' environments, which determine which genes are triggered, also play a big role. Very likely understanding the brain will require understanding coordinated dynamics as much as any individual element. Many diseases, too, are unlikely to be completely cured until the complicated dynamics among different elements are fully understood.

How can massive computing power affect such science? It will clearly not replace experiments or the need to identify individual fundamental elements. But it will make us better able to understand systems and how elements work in conjunction. Massive simulation "experiments" will help determine how feedback loops work and how any individual element works in concert with the system as a whole. Such "experiments" will also help determine when current data is insufficient, in that systems are more sensitive to individual elements than anticipated. Computation alone will not solve problems — the full creativity of scientific minds will still be needed — but computational advances will allow researchers to explore hypotheses efficiently.
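
As a toy illustration of the kind of simulation "experiment" meant here, the loop below tries to hold a quantity at a set point by correcting in proportion to the error; whether it settles, rings, or blows up depends entirely on one coupling parameter, which is exactly the sort of sensitivity a simulation exposes immediately. The system and its numbers are invented for illustration.

```python
def simulate(gain: float, steps: int = 40) -> list[float]:
    """A toy negative-feedback loop: the system tries to hold x at a
    set point, correcting each step in proportion to the error."""
    x, set_point, history = 0.0, 1.0, []
    for _ in range(steps):
        error = set_point - x
        x += gain * error          # feedback correction
        history.append(round(x, 3))
    return history

for g in (0.5, 1.0, 1.9, 2.1):     # same loop, different coupling strength
    print(f"gain={g}: last values {simulate(g)[-3:]}")
```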

At a broader level (although one that will affect science too) coordinated and expanded computational power will also allow a greatly expanded use of the huge amounts of underutilized information that is currently available. Searching is likely to become a more refined process where one can ask for particular types of data more finely honed to one's needs. Imagine how much faster and easier "googling" could be in a world where you "feel lucky" every time (or at least significantly more often).

The advance I am suggesting isn't a quantum leap. It's not even a revolution since it's simply an adiabatic evolution of advances that are currently occurring. But when one asks about science in twenty years, coordinated computation is likely to be one of the contributing factors that will change many things — though not necessarily everything.


PAUL SAFFO
Technology forecaster

DISCOVERY (OR CREATION) OF NON-HUMAN INTELLIGENCE CURES HUMANKIND’S EXISTENTIAL LONELINESS

Accelerating change is the new normal. Even the most dramatic discoveries waiting in the wings will do little more than push us further along the rollercoaster of exponential arcs that define modern life. Momentous discoveries compete with Hollywood gossip for headline space, as a public accustomed to a steady diet of surprises reacts to the latest astonishing science news with a shrug.

But there is one development that would fundamentally change everything — the discovery of non-human intelligences equal or superior to our species.  It would change everything because our crowded, quarreling species is lonely. Vastly, achingly, existentially lonely. It is what compels our faith in gods whose existence lies beyond logic or proof. It is what animates our belief in spirits and fairies, sprites, ghosts and little green men. It is why we probe the intelligence of our animal companions, hoping to start a conversation.  We are as lonely as Defoe’s Crusoe. We desperately want someone else to talk to.

The search for extraterrestrial intelligences — SETI — began 50 years ago with a lone radio astronomer borrowing spare telescope time to examine a few frequencies in the direction of two nearby stars. The search today is being conducted on a continuous basis with supercomputers and sophisticated receivers like the SETI Institute’s Allen Telescope Array. Today’s systems search more radio space in a few minutes than was probed in SETI’s first decade. Meanwhile, China is breaking ground on a new 500-meter dish (that’s a receiving surface the size of 30 football fields, or 10 times the size of Arecibo) whose mission explicitly encompasses the search for other civilizations.

Astronomers are looking as well as listening. Over 300 extrasolar planets have been detected, all but 12 in the last decade and over 100 in the last two years alone. More significantly, the minimum size of detectable extrasolar planets has plummeted, making it possible to identify planets with masses similar to Earth's. Planetary discovery is poised to go exponential with the 2009 launch of NASA’s Kepler spacecraft, which will examine over 100,000 stars for the presence of terrestrial-sized planets. The holy grail of planet hunters isn’t Jupiter-sized giants, but other Earths suitable for intelligent life recognizable to us.

The search so far has been met only by a great silence, but as astronomers continue their hunt for intelligent neighbors, computer scientists are determined to create them. Artificial intelligence research has been underway for decades and a few AIs have arguably already passed the Turing Test. Apply the exponential logic of Moore’s Law and the arrival of strong AI in the next few decades seems inevitable. We will have robots smart enough to talk to, and so emotionally appealing that people will demand the right to marry them.

One way or another, humanity will find someone or some thing to talk to. The only uncertainty is where the conversations will lead. Distant alien civilizations will make for difficult exchange because of the time lag, but the mere fact of their existence will change our self-perception as profoundly as Copernicus did five centuries ago. And despite the distance, we will of course try to talk to them. A third of us will want to conquer them, a third of us will seek to convert them, and the rest of us will try to sell them something.

Artificial companions will make for more intimate conversations, not just because of their proximity, but because they will speak our language from the first moment of their stirring sentience. However, I fear what might happen as they evolve exponentially. Will they become so smart that they no longer want to talk to us? Will they develop an agenda of their own that makes utterly no sense from a human perspective? A world shared with super-intelligent robots is a hard thing to imagine. If we are lucky, our new mind children will treat us as pets. If we are very unlucky, they will treat us as food.


ERIC KANDEL
Biochemist and University Professor, Columbia University; Recipient, The Nobel Prize, 2000; author, In Search of Memory

BIOLOGICAL MARKERS FOR MENTAL ILLNESS

Biology in general and the biology of mind in particular have become powerful scientific disciplines. But a major lack in the current science of mind is a satisfactory understanding of the biological basis of almost any mental illness. Achieving a biological understanding of schizophrenia, manic-depressive illness, unipolar depression, anxiety states, or obsessional disorders would be a paradigm shift for the biology of mind. It would not only inform us about some of the most devastating diseases of humankind, but since these are diseases of thought and feeling, understanding them would also tell us more about who we are and how we function.

To illustrate the embarrassing lack of science in this area, let me put this problem into a historical perspective with two personal introductory comments.

First, in the 1960s, when I was a psychiatric resident at the Massachusetts Mental Health Center, of the Harvard Medical School, most psychiatrists thought that the social determinants of behavior were completely independent of the biological determinants and that each acted on different aspects of mind. Psychiatric illnesses were classified into two major groups — organic mental illnesses and functional mental illnesses — based on presumed differences in origin. That classification, which dated to the nineteenth century, emerged from postmortem examinations of the brains of mental patients.

The methods available for examining the brain at that time were too limited to detect subtle anatomical changes. As a result, only mental disorders that entailed significant loss of nerve cells and brain tissue such as Alzheimer's disease, Huntington's disease, and chronic alcoholism were classified as organic diseases, based on biology. Schizophrenia, the various forms of depression, and the anxiety states produced no readily detectable loss of nerve cells or other obvious changes in brain anatomy and therefore were classified as functional, or not based on biology. Often, a special social stigma was attached to the so-called functional mental illnesses because they were said to be "all in a patient's mind." This notion was accompanied by the suggestion that the illness may have been put into the patient's mind by his or her parents.

With the passage of forty years we have made progress and the advent of a paradigm shift for the science of the mind. We no longer think that only certain diseases affect mental states through biological changes in the brain. Indeed, the underlying precept of the new science of mind is that all mental processes are biological — they all depend on organic molecules and cellular processes that occur literally "in our heads." Therefore, any disorder or alteration of those processes must also have a biological basis.

Second, in 2001 Max Cowan and I were asked to write a review for the Journal of the American Medical Association about molecular biological contributions to neurology and psychiatry. In writing the review, we were struck by the radical way in which molecular genetics had transformed neurology. This led me to wonder why molecular biology has not had a similar transformative effect on psychiatry.

The fundamental reason is that neurological diseases and psychiatric diseases differ in four important ways.

  1. Neurology has long been based on the knowledge of where in the brain specific diseases are located. The diseases that form the central concern of neurology — strokes, tumors, and degenerative diseases of the brain — produce clearly discernible structural damage. Studies of those disorders taught us that, in neurology, location is key. We have known for almost a century that Huntington's disease is a disorder of the caudate nucleus of the brain, Parkinson's disease is a disorder of the substantia nigra, and amyotrophic lateral sclerosis (ALS) is a disorder of motor neurons. We know that each of these diseases produces its distinctive disturbances of movement because each involves a different component of the motor system.
  2. In addition, a number of common neurological illnesses, such as Huntington's, the fragile X form of mental retardation, some forms of ALS, and the early-onset form of Alzheimer's, were found to be inherited in a relatively straightforward way, implying that each of these diseases is caused by a single defective gene.
  3. Pinpointing the genes and defining the mutations that produce these diseases has therefore been relatively easy.
  4. Once a mutation is identified, it becomes possible to express the mutant gene in mice and flies and thus to discover its mechanism of pathogenesis: how the gene gives rise to disease.

Over the last 20 years neurology has been revolutionized by the advent of molecular genetics. As a result of knowing the anatomical location, the identity, and the mechanism of action of specific genes, diagnoses of neurological disorders are no longer based solely on behavioral symptoms. We have even established new diagnostic categories within the neurological diseases, such as the ion channelopathies (for example, familial periodic paralysis), characterized by aberrant function of ion channel proteins, and the trinucleotide repeat disorders, such as Huntington's disease and fragile X syndrome, in which abnormal and unstable replication of short repeating elements in DNA alters the function of the resulting protein.

These new diagnostic categories are based not on symptomatology but on the dysfunction of specific genes, proteins, neuronal organelles, or neuronal systems. Moreover, molecular genetics has given us insight into the mechanisms of pathogenesis of neurological disease that did not exist 20 years ago. Thus in addition to examining patients in the office, physicians can order tests for the dysfunction of specific genes, proteins, and nerve cell components, and they can examine brain scans to see how specific regions have been affected by a disorder.

In contrast to its brilliant impact on neurology, molecular genetics has so far had only a minor impact on psychiatry. We may well ask: Why is that so?

Tracing the causes of mental illness is a much more difficult task than locating structural damage in the brain. The same four factors that have proven useful in studying neurological illnesses have been limiting in the study of mental illness.

  1. A century of postmortem studies of the brains of mentally ill persons failed to reveal the clear, localized lesions seen in neurological illness. Moreover, psychiatric illnesses are disturbances of higher mental function. The anxiety states and the various forms of depression are disorders of emotion, whereas schizophrenia is a disorder of thought. Emotion and thinking are complex mental processes mediated by complex neural circuitry. Until quite recently, little was known about the neural circuits involved in normal thought and emotion.
  2. Furthermore, although most mental illnesses have an important genetic component, they do not have straightforward inheritance patterns, because they are not caused by mutations of a single gene. Thus, there is no single gene for schizophrenia, just as there is no single gene for anxiety disorders, depression, or most other mental illnesses. Instead, the genetic components of these diseases are thought to arise from the interaction of several genes with the environment, each gene exerting a relatively small effect. Together, the several genes create a genetic predisposition — a potential — for a disorder. Most psychiatric disorders are caused by a combination of these genetic predispositions and some additional, environmental factors (a toy numerical sketch of this combination follows the list). For example, identical twins have identical genes. If one twin has Huntington's disease, so will the other. But if one twin has schizophrenia, the other has only a 50 percent chance of developing the disease. To trigger schizophrenia, some other, nongenetic factors in early life — such as intrauterine infection, malnutrition, stress, or the sperm of an elderly father — are required. Because of this complexity in the pattern of inheritance, we have not yet identified most of the genes involved in the major mental illnesses.
  3. As a result we know little about the specific genes involved in any major mental illness.
  4. Because of points two and three, we have no satisfactory animal models for most mental disorders.
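
As promised in point 2 above, here is a toy numerical sketch of how many small, shared genetic effects plus an unshared environmental contribution yield a threshold-style predisposition: identical twins (identical genes, different environments) come out far more concordant than the general population yet well short of 100 percent. All parameters are invented for illustration and are not calibrated to any real disorder.

```python
import random

def liability(shared_genetics, env_sd=1.0):
    """Liability = many small genetic effects (shared by identical twins)
    plus an environmental contribution unique to each individual."""
    return shared_genetics + random.gauss(0, env_sd)

def simulate(n_pairs=100_000, n_genes=20, threshold=3.5):
    affected = both = total_cases = 0
    for _ in range(n_pairs):
        g = sum(random.gauss(0, 0.25) for _ in range(n_genes))  # shared by the pair
        a = liability(g) > threshold
        b = liability(g) > threshold
        total_cases += a + b
        if a:
            affected += 1
            both += b
    prevalence = total_cases / (2 * n_pairs)
    concordance = both / affected if affected else 0.0
    return prevalence, concordance

prev, conc = simulate()
print(f"population rate ~{prev:.1%}, co-twin rate given one affected ~{conc:.0%}")
```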

What is then needed to achieve a better biological understanding of mental illness?

Two initial requirements are essential and, in principle, obtainable within the next two decades:

  1. We need biological markers for mental illness, so that we can understand the anatomical basis of these diseases, diagnose them objectively, and follow their response to treatment. A beginning is evident in the case of depression, which is associated with hyperactivity in a prefrontal cortical area, Brodmann Area 25; in anxiety states, where there is hyperactivity in the amygdala; and in obsessive-compulsive neurosis, where there is an abnormality in the striatum.
  2. We need identification of the genes for various mental illnesses, so that we can understand the molecular basis of these diseases.

These two advances would enhance our ability to understand these disorders better and recognize them earlier. But in addition, these advances would open up completely new approaches to the treatment of mental illness, an area that has been at a pharmacological standstill for depression, bipolar disorders, and schizophrenia for the last twenty years.


J. CRAIG VENTER
Genome Scientist, J. Craig Venter Institute; Author, A Life Decoded

DNA, WRITING THE SOFTWARE OF LIFE

In science, as with most areas, seemingly simple ideas can and have changed everything. Just one hundred and fifty years ago Charles Darwin's On the Origin of Species was published and immediately impacted science and society by describing the process of evolution as natural selection, but nobody understood why or how this process happened. It took until the 1940s to establish that the substance that carried the inheritable information was DNA. In 1953 an Englishman and an American, Crick and Watson, proposed that DNA is formed as a spiraling ladder — or double helix — with the bases A-T and C-G paired (base pairs) to form the rungs; however, no one knew what the code of life actually was.

In the 1960s some of the first secrets of our "genetic code" were revealed with the discovery that the chemical bases should be read in groups of three. These "nucleotide triplets" then defined and coded for amino acids.
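
That triplet reading rule is simple enough to state in a few lines of code. Here is a sketch in Python using a small, illustrative subset of the 64-entry codon table (written with DNA letters rather than RNA):

```python
# An illustrative fragment of the genetic code: each DNA triplet (codon)
# specifies one amino acid. Only a handful of the 64 codons are shown.
CODON_TABLE = {
    "ATG": "Met",  # also the usual 'start' signal
    "TGG": "Trp",
    "TTT": "Phe", "TTC": "Phe",
    "AAA": "Lys", "AAG": "Lys",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three letters at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

print(translate("ATGAAATTTGGGTGA"))   # ['Met', 'Lys', 'Phe', 'Gly', 'STOP']
```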

In the late 1970s the complete genetic code (5,000 nucleotides) of a phage (a small virus that kills E. coli, a type of bacteria) was read out in sequence by a new technology developed by Fred Sanger from Cambridge. This technology, named Sanger sequencing, would dominate genetics for the next 25 years.

In 1995 my team read the complete genetic code of the chromosome containing all of the genetic information for a bacterium. The genome of the bacterium that we decoded was over 1.8 million nucleotides long and coded for all the proteins associated with the life of the bacterium. Based on our new methods there was an explosion of new data from decoded genomes of many living species, including humans.

Just as Darwin observed evolution in the changes that he saw in various species of finches, land and sea iguanas, and tortoises, the genomics community is now studying the changes in the genetic code that are associated with human traits and disease and the differences among us by reading the genetic code of many humans and comparing them. The technology is changing so rapidly that it will soon be commonplace for everyone to know their own genetic code. This will change the practice of medicine from treating disease after it happens to preventing disease before its onset. Understanding the mutations and variations in the genetic code will clearly help us to understand our own evolution.
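
At its simplest, the comparison described here amounts to lining sequences up and recording the positions where they differ; real pipelines must first assemble and align the reads and handle insertions and deletions, but the core idea can be sketched in a few lines (the sequences below are invented):

```python
def differences(reference: str, sample: str):
    """List positions where a sample sequence differs from a reference —
    the simplest possible picture of variant (e.g. SNP) detection."""
    return [
        (i, ref_base, alt_base)
        for i, (ref_base, alt_base) in enumerate(zip(reference, sample))
        if ref_base != alt_base
    ]

reference = "ATGGCTTACCGA"
sample    = "ATGACTTACCTA"
for position, ref_base, alt_base in differences(reference, sample):
    print(f"position {position}: {ref_base} -> {alt_base}")
# position 3: G -> A
# position 10: G -> T
```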

Science is changing dramatically again as we use all our new tools to understand life and perhaps even to redesign it. The genetic code is the result of over 3.5 billion years of evolution and is common to all life on our planet. We have been reading the genetic code for a few decades and are gaining an understanding of how it programs for life.

In a series of experiments to better understand the genetic code, my colleagues and I developed new ways to chemically synthesize DNA in the laboratory. First we synthesized the genetic code of the same virus that Sanger and colleagues decoded in 1977. When this large synthetic molecule was inserted into a bacterium, the cellular machinery in the bacteria was not only able to read the synthetic genetic code, but the cell was also able to produce the proteins coded for by the DNA. The proteins self-assembled to produce the virus particle that was then able to infect other bacteria. Over the past few years we were able to chemically make an entire bacterial chromosome, which at more than 582,000 nucleotides is the largest man-made chemical produced to date.

We have now shown that DNA is absolutely the information-coded material of life by completely transforming one species into another simply by changing the DNA in the cell. By inserting a new chromosome into a cell and eliminating the existing chromosome all the characteristics of the original species were lost and replaced by what was coded for on the new chromosome. Very soon we will be able to do the same experiment with the synthetic chromosome.

We can start with digitized genetic information and four bottles of chemicals and write new software of life to direct organisms to do processes that are desperately needed, like create renewable biofuels and recycle carbon dioxide. As we learn from 3.5 billion years of evolution we will convert billions of years into decades and change not only conceptually how we view life but life itself.


FRANK WILCZEK
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, The Lightness of Being

HOMESTEADING IN HILBERT SPACE

More than a hundred years passed between Columbus' first, confused sighting of America in 1492 and the vanguard of English colonization, at Jamestown in 1607. A shorter interval separates us today from Planck's first confused sighting of the quantum world, in 1899. The quantum world is a New New World far more alien and difficult of access than Columbus' Old New World. It is also, in a real sense, much bigger. While discovery of the Old New World roughly doubled the land area available to humans, the New New World exponentially expands the dimension of physical reality.* (For example, every single electron's spin doubles it.) Our fundamental equations do not live in the three-dimensional space of classical physics, but in an (effectively) infinite-dimensional space: Hilbert space. It will take us much more than a century to homestead that New New World, even at today's much-accelerated pace.
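
The footnote's point is easy to make quantitative: each independent two-state system multiplies the dimension of the joint state space by two, so the description outgrows anything resembling ordinary three-dimensional space almost immediately. A one-loop sketch; the particular spin counts are arbitrary.

```python
# Each additional two-state system (a single electron spin, say) doubles
# the dimension of the joint state space: n spins need 2**n amplitudes.
for n in (1, 2, 10, 50, 300):
    print(f"{n:>3} spins -> state-space dimension 2^{n} = {2 ** n:.4g}")
```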

We've managed to establish some beachheads, but the vast interior remains virgin territory, unexploited. (This time, presumably, there are no aboriginals.) Poking along the coast, we've already stumbled upon transistors, lasers, superconducting magnets, and a host of other gadgets. What's next? I don't know for sure, of course, but there are two everything-changers that seem safe bets:

• New microelectronic information processors, informed by quantum principles—perhaps based on manipulating electron spins, or on supplementing today's silicon with graphene—will enable more cycles of Moore's law, on several fronts: smaller, faster, cooler, cheaper. Supercomputers will approach and then surpass the exaflop frontier, making their capacity comparable to that of human brains. Improved bandwidth will put the Internet on steroids, allowing instant access from anywhere to all the world's information, and blurring or obliterating the experienced distinction between virtual and physical reality.

• Designer materials better able to convert energy from the hot and unwieldy quanta (photons) Sol rains upon us into more convenient forms (chemical bonds) will power a new economy of abundance. Evolution in its patient blindness managed to develop photosynthesis; with mindful insight, we will do better.

As we thus augment our intelligence and our power, a sort of bootstrap may well come into play. We—or our machines, or our hybrid descendants—will acquire the wit and strength to design and construct still better minds and engines, in an ascending spiral.

Our creative mastery over matter, through quantum theory, is still embryonic. The best is yet to come.
---
*This is established physics, independent of speculations about extra spatial dimensions (which are essentially classical).


SAM HARRIS
Neuroscientist; Chairman, The Reason Project; Author, Letter to a Christian Nation

TRUE LIE DETECTION

When evaluating the social cost of deception, one must consider all of the misdeeds—marital infidelities, Ponzi schemes, premeditated murders, terrorist atrocities, genocides, etc.—that are nurtured and shored-up, at every turn, by lies. Viewed in this wider context, deception commends itself, perhaps even above violence, as the principal enemy of human cooperation. Imagine how our world would change if, when the truth really mattered, it became impossible to lie.

The development of mind-reading technology is in its infancy, of course. But reliable lie-detection will be much easier to achieve than accurate mind reading. Whether or not we ever crack the neural code, enabling us to download a person’s private thoughts, memories, and perceptions without distortion, we will almost surely be able to determine, to a moral certainty, whether a person is representing his thoughts, memories, and perceptions honestly in conversation. Compared to many of the other hypothetical breakthroughs put forward in response to this year’s Edge question, the development of a true lie-detector would represent a very modest advance over what is currently possible through neuroimaging. Once this technology arrives, it will change (almost) everything.

The greatest transformation of our society will occur only once lie-detectors become both affordable and unobtrusive. Rather than spirit criminal defendants and hedge-fund managers off to the lab for a disconcerting hour of brain scanning, there may come a time when every courtroom or boardroom will have the requisite technology discreetly concealed behind its wood paneling. Thereafter, civilized people would share a common presumption: that wherever important conversations are held, the truthfulness of all participants will be monitored. Well-intentioned people would happily pass between zones of obligatory candor, and these transitions will cease to be remarkable. Just as we’ve come to expect that many public spaces will be free of nudity, sex, loud swearing, and cigarette smoke—and now think nothing of the behavioral changes demanded of us whenever we leave the privacy of our homes—we may come to expect that certain places and occasions will require scrupulous truth-telling. Most of us will no more feel deprived of the freedom to lie during a press conference or a job interview than we currently feel deprived of the freedom to remove our pants in a restaurant. Whether or not the technology works as well as we hope, the belief that it generally does work will change our culture profoundly.

In a legal context, some scholars have already begun to worry that reliable lie detection will constitute an infringement of a person’s Fifth Amendment privilege against self-incrimination. But the Fifth Amendment has already succumbed to advances in our technology. The Supreme Court has ruled that defendants can be forced to provide samples of their blood, saliva, and other physical evidence that may incriminate them. In fact, the prohibition against compelled testimony appears to be a relic of a more superstitious time: it was once widely believed that lying under oath would damn a person’s soul for eternity. I doubt whether even many fundamentalist Christians now imagine that an oath sworn on a courtroom Bible has such cosmic significance.

Of course, no technology is ever perfect. Once we have a proper lie-detector in hand, we will suffer the caprice of its positive and negative errors. Needless to say, such errors will raise real ethical and legal concerns. But some rate of error will, in the end, be judged acceptable. Remember that we currently lock people away in prison for decades—or kill them—all the while knowing that some percentage of those convicted must be innocent, while some percentage of those returned to our streets will be dangerous psychopaths guaranteed to re-offend. We have no choice but to rely upon our criminal justice system, despite the fact that judges and juries are poorly calibrated truth detectors, prone to error. Anything that can improve the performance of this ancient system, even slightly, will raise the quotient of justice in our world.
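
One way to see why the acceptable error rate remains a substantive question: what a positive reading means depends on how often lying actually occurs in the monitored setting. A toy Bayes'-rule sketch, with all rates invented for illustration:

```python
def prob_lying_given_flag(sensitivity, false_positive_rate, base_rate_of_lying):
    """Bayes' rule: how often a 'deception' flag is actually a lie."""
    true_flags = sensitivity * base_rate_of_lying
    false_flags = false_positive_rate * (1 - base_rate_of_lying)
    return true_flags / (true_flags + false_flags)

# A hypothetical detector that catches 95% of lies and wrongly flags 5% of truths:
for base_rate in (0.50, 0.10, 0.01):
    p = prob_lying_given_flag(0.95, 0.05, base_rate)
    print(f"if {base_rate:.0%} of statements are lies, "
          f"a flag is a real lie {p:.0%} of the time")
```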

There are several reasons to doubt whether any of our current modalities of neuroimaging, like fMRI, will yield a practical form of mind-reading technology. It is also true that the physics of neuroimaging may grant only so much scope to human ingenuity. It is possible, therefore, that an era of cheap, covert lie-detection might never dawn, and we will be forced to rely upon some relentlessly costly, cumbersome technology. Even so, I think it safe to say that the time is not far off when lying, on the weightiest matters, will become a practical impossibility. This fact will be widely publicized, of course, and the relevant technology will be expected to be in place, or accessible, whenever the stakes are high. This very assurance, rather than the incessant use of these machines, will make all the difference.


DAVID M. BUSS
Psychologist, University of Texas at Austin; Author of The Murderer Next Door

EXPLOITABILITY

Game-changing scientific breakthroughs will come with the discovery of evolved psychological circuits for exploiting other humans—through cheating, free-riding, mugging, robbing, sexually deceiving, sexually assaulting, physically abusing, cuckolding, mate poaching, stalking, and murdering.  Scientists will discover that these exploitative resource acquisition adaptations contain specific design features that monitor statistically reliable cues to exploitable victims and opportunities. 

Convicted muggers who are shown videotapes of people walking down a New York City street show strong consensus about who they would choose as a mugging victim.  Chosen victims emit nonverbal cues, such as an uncoordinated gait or a stride too short or long for their height, indicative of ease of victimization.  These potential victims are high on muggability. Similarly, short stride length, shyness, and physical attractiveness provide reliable cues to sexual assaultability.  Future scientific breakthroughs will identify the psychological circuits of exploiters sensitive to victims who give off cues to cheatability, deceivability, rapeability, abusability, mate poachability, cuckoldability, stalkability, and killability; and to groups that emanate cues to free-ridability and vanquishability.

This knowledge will offer the potential for developing novel defenses that reduce cheating, mugging, raping, robbing, stalking, mate poaching, murdering, and warfare.  On the other hand, because adaptations for exploitation co-evolve in response to defenses against exploitation, selection may favor the evolution of additional adaptations that circumvent these defenses.

Because evolution by selection is a relatively slow process, the acquisition of scientific knowledge about adaptations for exploitation may enable staying one step ahead of exploiters, and effectively short-circuit their strategies.  Some classes of crime will be curtailed. Cultural evolution, however, being fleeter than organic evolution, may enable the rapid circumvention of anti-exploitation defenses.  Defenses, in turn, favor novel strategies of exploitation.  Dissemination of discoveries about adaptations for exploitation and co-evolved defenses may change permanently the nature of social interaction.  Or perhaps, like some co-evolutionary arms races, these discoveries ultimately may change nothing at all.


IAN WILMUT
Chair of Reproductive Biology, Director Scottish Centre for Regenerative Medicine; Author, After Dolly

THE NEXT STEP IN HUMAN HEALTHCARE?

In 2009 we are still comparatively near to the beginning of an era in which biomedical research is revolutionizing our understanding of inherited human diseases and providing the first effective treatment for at least some of them. This new knowledge will offer benefits that are at least as great as those from past biomedical research which has dramatically reduced the devastating effects of many infectious diseases. The powerful new tools that will bring this about are those for molecular genetic analysis and stem cell biology.

Human health and lifespan in the more fortunate parts of the world have improved dramatically in the past 1,000 years, but in the main this is because we became better at meeting the everyday needs for survival. Over this period humans became more effective at collecting or producing adequate supplies of food. On this timescale it was only comparatively recently that communities recognized the need for clean water and effective sanitation to prevent infection. More recently still, methods have been developed for immunization against potential infection and compounds identified as being powerful antibiotics. While the authors of these essays, and the vast majority of those who read them, can take all of these benefits for granted, it is a sad commentary on us all that this is not true for many millions in the less fortunate parts of the world, but that is another matter.

The coming together of emerging techniques in cell and molecular biology will change our entire approach to human diseases that are inherited, rather than acquired from an infective agent, such as a virus or bacterium. “Inherited diseases” are those which run in families because of errors in the DNA sequence of some family members. For the sake of simplicity, this essay will concentrate upon diseases inherited through chromosomal DNA, while acknowledging that there is DNA in mitochondria which is also error prone and the cause of other inherited diseases.

While the proportion of diseases for which the precise genetic cause has been identified is increasing because of the power of modern molecular analysis, it is still small. Even more important is the fact that the way in which the genetic error causes the symptoms of the disease is known in very few cases. This has been a major limiting factor in the development of effective treatments, because the objective with present treatments is not to correct the error in the DNA, but rather to prevent the development of the symptoms.

One advantage of the new tools is that it is not necessary to have identified the genetic error to be able to identify compounds that can prevent the development of symptoms. This new opportunity arises from the revolutionary new technique by which stem cells able to form all tissues of the body are derived from cells taken from adults. Shinya Yamanaka of the University of Kyoto was the first to show that a simple procedure could achieve this extraordinary change, and he named the cells “induced pluripotent stem cells” in view of their ability to form all tissues.

Many laboratories are now using induced pluripotent stem cells to study inherited diseases, such as ALS. Pluripotent cells from ALS patients are turned into the different neural populations affected by the disease and contrasted with the same cell population from healthy donors. Discovery of the molecular cause of the diseases will involve analyses of gene function in the diseased cells in many ways. There is then the practical issue of devising a test to discover if potential drugs are able to prevent the development of the disease symptoms. This can be used as the basis of tests that can be carried out by robots able to screen thousands of compounds every week. Many further studies will then be required before any new medicine can be used to treat patients.
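
The robotic screening step described here is, schematically, a loop: apply each candidate compound to the patient-derived cell model, measure a phenotype, and keep whatever pushes the diseased cells back toward the healthy baseline. A hedged sketch in Python, with compound names, scores, and the threshold all invented:

```python
# Apply each candidate compound to the diseased-cell model, measure a
# phenotype score, and keep compounds that restore the healthy baseline.
# Names, scores, and the tolerance are invented for illustration.

def screen(compounds, assay, healthy_baseline, tolerance=0.1):
    hits = []
    for compound in compounds:
        score = assay(compound)                    # robot-measured readout
        if abs(score - healthy_baseline) <= tolerance:
            hits.append((compound, score))
    return hits

def mock_assay(compound):
    # Stand-in for the real robotic measurement on patient-derived neurons.
    return {"cmpd-001": 0.95, "cmpd-002": 0.40, "cmpd-003": 1.05}.get(compound, 0.3)

print(screen(["cmpd-001", "cmpd-002", "cmpd-003"], mock_assay, healthy_baseline=1.0))
# -> [('cmpd-001', 0.95), ('cmpd-003', 1.05)]
```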

In addition to the prospect of understanding and being able to treat inherited diseases, it is also likely that these therapies will be effective in treating related cases for which there is no evidence of a genetic cause. In the case of ALS it is estimated that less than 10% of cases are inherited. ALS should be considered as a family of diseases because it reflects errors in several different genes. Recent studies have revealed an unusual distribution of a particular protein within the cells of many patients. This was the case in the inherited cases associated with all except one of these genes, and it also occurred in several patients for whom there was no evidence of an inherited effect. While this pattern may not occur with all inherited diseases, the observation lends encouragement to the hope that treatments developed through research with inherited cases will often be equally effective for the cases in which there is no genetic effect.

While I have separated infectious and inherited diseases, in reality there is a considerable overlap. New understanding of the molecular and cellular mechanisms that govern normal development and health will also provide the basis for novel treatments for infectious diseases. In this way for example, an understanding of the development and function of the immune system may reveal new approaches to the treatment of diseases such as HIV.

It is always impossible to predict the future and scientists above many others should know to expect the unexpected. Sadly this leads one to be cautious and fear that it will not be possible to develop effective treatments for some diseases, but it also suggests that there will also be joyous surprises in store. I certainly find it very exciting indeed to think that in my lifetime effective treatments will be available for some of the many hundred inherited diseases. The devastating effect of these diseases on the patients and their families will be greatly reduced or even removed, in just the same way that earlier research banished infections such as polio, TB and the childhood diseases such as measles or mumps.

