

KEVIN KELLY
Editor-At-Large, Wired; Author, New Rules for the New Economy

A NEW KIND OF MIND

It is hard to imagine anything that would "change everything" as much as a cheap, powerful, ubiquitous artificial intelligence—the kind of synthetic mind that learns and improves itself. A very small amount of real intelligence embedded into an existing process would boost its effectiveness to another level. We could apply mindfulness wherever we now apply electricity. The ensuing change would be hundreds of times more disruptive to our lives than even the transforming power of electrification. We'd use artificial intelligence the same way we've exploited previous powers—by wasting it on seemingly silly things. Of course we'd plan to apply AI to tough research problems like curing cancer, or solving intractable math problems, but the real disruption will come from inserting wily mindfulness into vending machines, our shoes, books, tax returns, automobiles, email, and pulse meters.

This additional intelligence need not be super-human, or even human-like at all. In fact, the greatest benefit of an artificial intelligence would come from a mind that thought differently than humans, since we already have plenty of those around. The game-changer is neither how smart this AI is, nor its variety, but how ubiquitous it is. Alan Kay quips that perspective is worth 80 IQ points. For an artificial intelligence, ubiquity is worth 80 IQ points. A distributed AI, embedded everywhere that electricity goes, becomes ai—a low-level background intelligence that permeates the technium, and through this saturation morphs it.

Ideally this additional intelligence should not be just cheap, but free. A free ai, like the free commons of the web, would feed commerce and science like no other force I can imagine, and would pay for itself in no time. Until recently, conventional wisdom held that supercomputers would first host this artificial mind, and then perhaps we'd get mini-ones at home, or add them to the heads of our personal robots. They would be bounded entities. We would know where our thoughts ended and theirs began.

However, the snowballing success of Google this past decade suggests the coming AI will not be bounded inside a definable device. It will be on the web, like the web. The more people that use the web, the more it learns. The more it knows, the more we use it. The smarter it gets, the more money it makes, the smarter it will get, the more we will use it. The smartness of the web is on an increasing-returns curve, self-accelerating each time someone clicks on a link or creates a link. Instead of dozens of geniuses trying to program an AI in a university lab, there are a billion people training the dim glimmers of intelligence arising between the quadrillion hyperlinks on the web. Long before the computing capacity of a plug-in computer overtakes the supposed computing capacity of a human brain, the web—encompassing all its connected computing chips—will dwarf the brain. In fact it already has.

As more of the commercial life, science work, and daily play of humanity moves onto the web, the potential and benefits of a web AI compound. The first genuine AI will most likely not be birthed in a standalone supercomputer, but in the superorganism of a billion CPUs known as the web. It will be planetary in dimensions, but thin, embedded, and loosely connected. Any device that touches this web AI will share—and contribute to—its intelligence. Therefore all devices and processes will (need to) participate in this web intelligence.

Standalone minds are likely to be viewed as handicapped, a penalty one might pay in order to have mobility in distant places. A truly off-the-grid AI could not learn as fast, as broadly, or as smartly as one plugged into 6 billion human minds, a quintillion online transistors, hundreds of exabytes of real-life data, and the self-correcting feedback loops of the entire civilization.

When this emerging AI, or ai, arrives it won't even be recognized as intelligence at first. Its very ubiquity will hide it. We'll use its growing smartness for all kinds of humdrum chores, including scientific measurements and modeling, but because the smartness lives on thin bits of code spread across the globe in windowless boring warehouses, and it lacks a unified body, it will be faceless. You can reach this distributed intelligence in a million ways, through any digital screen anywhere on earth, so it will be hard to say where it is. And because this synthetic intelligence is a combination of human intelligence (all past human learning, all current humans online) and the coveted zip of fast alien digital memory, it will be difficult to pinpoint what it is as well. Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?

While we will waste the web's ai on trivial pursuits and random acts of entertainment, we'll also use its new kind of intelligence for science. Most importantly, an embedded ai will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we "know" something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, it will have to know differently. At that point everything changes.


HOWARD GARDNER
Psychologist, Harvard Graduate School of Education; Author, Five Minds for the Future

CRACKING OPEN THE LOCKBOX OF TALENT

What is talent? If you ask the average grade school teacher to identify her most talented student, she is likely to reject the question: "All my students are equally talented." But of course, this answer is rubbish. Anyone who has worked with numerous young people over the years knows that some catch on quickly, almost instantly, to new skills or understandings, while others must go through the same drill, with depressingly little improvement over time.

As wrongheaded as the teacher's response is the viewpoint put forward by some psychological researchers, and most recently popularized in Malcolm Gladwell's Outliers: The Story of Success. This is the notion that there is nothing mysterious about talent, no need to crack open the lockbox: anyone who works hard enough over a long period of time can end up at the top of her field. Anyone who has the opportunity to observe or read about a prodigy—be it Mozart or Yo-Yo Ma in music, Tiger Woods in golf, John von Neumann in mathematics—knows that achievement is not just hard work: the differences between performance at time 1 and successive performances at times 2, 3, and 4 are vast, not simply the result of additional sweat. It is said that if algebra had not already existed, precocious Saul Kripke would have invented it in elementary school: such a characterization would be ludicrous if applied to most individuals.

For the first time, it should be possible to delineate the nature of talent. This breakthrough will come about through a combination of findings from genetics (do highly talented individuals have a distinctive, recognizable genetic profile?); neuroscience (are there structural or functional neural signatures, and, importantly, can these be recognized early in life?); cognitive psychology (are the mental representations of talented individuals distinctive when contrasted with those of hard workers?); and the psychology of motivation (why are talented individuals often characterized as having 'a rage to learn, a passion to master'?).

This interdisciplinary scientific breakthrough will allow us to understand what is special about Picasso, Gauss, and J.S. Mill. Importantly, it will illuminate whether a talented person could have achieved equally in different domains (could Mozart have been a great physicist? Could Newton have been a great musician?). Note, however, that it will not illuminate two other issues:

1. What makes someone original, creative? Talent and expertise are necessary but not sufficient.
2. What determines whether talents are applied to constructive or destructive ends?

These answers are likely to come from historical or cultural case studies, rather than from biological or psychological science. Part of the maturity of the sciences is an appreciation of which questions are best left to other disciplinary approaches.


TIMOTHY TAYLOR
Archaeologist, University of Bradford; Author, The Buried Soul

CULTURE

Culture changes everything because culture contains everything, in the sense of things that can be named, and so what can be conceived. Wittgenstein implied that what cannot be said cannot be thought. He meant by this that language relies on a series of prior agreements. Anthropologists have shown that such grammar underpins the idea of any on-going community: not just its language, but its broader categories, its institutions, its metaphysics. And so a paradox presents itself: how can anything new ever happen? If by 'happen' we only think of personal and historical events, we miss the most crucial novelty—the way that new things, new physical objects, devices and techniques, insinuate themselves into our lives. They have new names which we must learn, and new, revolutionary effects.

It does not always work like that. Resistance is common. Paradoxically, the creative force of culture also tries to keep everything the same. Ernest Gellner said that humans, taken as a whole, present the most extensive behavioural variation of any species, while every particular cultural community is characterized by powerful norms. These are ways of being that, often through appeals to some apparently natural order, are not just mildly claimed as quintessentially human but lethally enforced at a local level, in a variety of more or less public ways. Out-groups (whether of a different ethnicity, class, sexuality, or creed, or being one of twins, an albino, someone disabled, or an unusually talented individual) are suspect and challenging in their abnormality. Categories of special difference are typical foci for sacrifice, banishment, and ridicule, through which the in-group becomes not just the in-group but, indeed, a distinctly perceptible group: confident, refreshed and culturally reproductive. This makes some sense: aberrance subverts the grammar of culture.

The level at which change can be tolerated varies greatly across social formations, but there is always a point beyond which things become intolerably incoherent. We may rightly label the most unprecedented behaviour mad because, whatever relativization might be invoked to explain it, it is, by definition, strategically doomed: we seek to ignore it. Yet the routine expulsion of difference, apparently critical in the here and now, becomes maladaptive in any longer-term perspective. Clearly, it is change that has created our species' resilience and success, creating the vast inter- (not intra-) cultural diversity that Gellner noted. So how does change happen?

Major change often comes stealthily. Its revolutionary effect may reside in the very fact that we do not recognize what it is doing to our behaviour, and so cannot resist it. Often we lack the words to articulate resistance, as the invention is a new noun whose verbal effect lags in its wake. Such major change operates far more effectively through things than directly through people; it is brought about not by the mad, but by 'mad scientists', whose inventions can be forgiven their inventors.

Unsurprisingly then, the societies that tolerate the least behavioural deviance are the most science-averse. Science, in the broadest sense of effective material invention, challenges quotidian existence. The Amish (a quaint static ripple whose way of life will never uncover the simplest new technological fix for the unfolding hazards of a dynamic universe) have long recognized that material culture embodies weird inspirations, challenging us, as eventual consumers, not with 'copy what I do', but a far, far more subversive 'try me.'

Material culture is the thing that makes us human, driving human evolution from the outset with its continually modifying power. Our species' particular dilemma is that in order to safeguard what we have, we have continually to change. The culture of things—invention and technology—is ever changing under the tide of words and routines whose role is to image fixity and agreement when, in reality, none exists. This form of change is no trivial thing, because it is essential to our longer-term survival. At least, the longer-term survival of anything we may be proud to call universally human.


JOHN GOTTMAN
Psychologist; Founder of Gottman Institute; Author (with Julie Gottman), And Baby Makes Three

LABORATORY EARTH COLONIES

The technological changes were small at first. In 2007 a telescope was developed that could search for planets in the Milky Way within 100 light years of Earth. The next version of the telescope, in 2008, did not have to block out the light of the star to see its planets. It could directly see the reflected light of the planets closest to every star. That made it possible to do spectroscopic analysis of the reflected light and search for blue planets like Earth. Within a decade, 100 Earth-like planets had been identified within 100 light years. In the next two centuries that number increased to 50,000 blue planets.

Within the next two centuries the seemingly impossible technical problems of space travel began to be solved. Problems of foil sails were solved. Designs emerged for ships that could get up to 85% of the speed of light within 2 years, using acceleration from stars and from harnessing the creative energy of empty space itself. The Moon, Europa and Mars were colonized. Terra-forming technologies developed. Many designs emerged for complete, spinning two-mile Earth-habitat ships that produced a 1-g environment. Thousands of people wanted to make the trips.

Laboratory Earth colonies were formed to simulate conditions for the galactic trips. Based on these experiments, social scientists soon recognized that the major unsolved problem of galactic colonization was social psychological: how could humans live together for up to 52 years, raising children who would become the explorers of the blue planets? Much had been learned, of course, from the social psychological studies early in the 21st and 22nd centuries on obtaining planet-wide cooperation in solving global warming and sustainable energy production, and in curing world-wide hunger and disease. But that work was rudimentary compared with the challenges of galactic colonization.

The subsequent classic social psychological studies were all funded privately by one man. Thousands of scientists participated. Studies of all kinds were initially devised, and the results were carefully replicated. The entire series of social psychological experiments took a century to perform. It rapidly became clear that a military or any other hierarchical social structure could not last without the threat of continual external danger. The work of Peggy Sanday had demonstrated that fact without question. The problem was to foster creative collaboration and minimize self-interest. Eventually, it was deemed necessary for each ship to spend 5 years prior to the trip selecting a problem that all the members would creatively and cooperatively face. The work had to be rich enough to consume the crew of a ship for 60 years. In addition, each ship represented a microcosm of all Earth's activities, including all the occupations and professions, adventure, play, and sports.

In the year 2500, more than 20,000 ships set out, two headed for each planet. It was inevitable that many ships would successfully make the journey. No one knew what they would find. There was no plan for communication between the stars. The colonization of the Milky Way had begun.


ED REGIS
Science Writer, Author, What Is Life?

MOLECULAR MANUFACTURING

Nothing has a greater potential for changing everything than the successful implementation of good old-fashioned nanotechnology.

I specify the old-fashioned version because nanotechnology is decidedly no longer what it used to be. Back in the mid-1980s when Eric Drexler first popularized the concept in his book Engines of Creation, the term referred to a radical and grandiose molecular manufacturing scheme. The idea was that scientists and engineers would construct vast fleets of "assemblers," molecular-scale, programmable devices that would build objects of practically any arbitrary size and complexity, from the molecules up. Program the assemblers to put together an SUV, a sailboat, or a spacecraft, and they'd do it—automatically, and without human aid or intervention. Further, they'd do it using cheap, readily available feedstock molecules as raw materials.

The idea sounds fatuous in the extreme…until you remember that objects as big and complex as whales, dinosaurs, and sumo wrestlers got built in a moderately analogous fashion: they began as minute, nanoscale structures that duplicated themselves, and whose successors then differentiated off into specialized organs and other components. Those growing ranks of biological marvels did all this repeatedly until, eventually, they had automatically assembled themselves into complex and functional macroscale entities. And the initial seed structures, the gametes, were not even designed, built, or programmed by scientists: they were just out there in the world, products of natural selection. But if nature can do that all by itself, then why can't machines be intelligently engineered to accomplish relevantly similar feats?

Latter-day "nanotechnology," by contrast, is nothing so imposing. In fact, the term has been co-opted, corrupted, and reduced to the point where what it refers to is essentially just small-particle chemistry. And so now we have "nano-particles" in products ranging from motor oils to sunscreens, lipstick, car polish and ski wax, and even a $420 "Nano Gold Energizing Cream" whose manufacturer claims it transports beneficial compounds into the skin. Nanotechnology in this bastardized sense is largely a marketing gimmick, not likely to change anything very much, much less "everything."

But what if nanotechnology in the radical and grandiose sense actually became possible? What if, indeed, it became an operational reality? That would be a fundamentally transformative development, changing forever how manufacturing is done and how the world works. Imagine all of our material needs being produced at trivial cost, without human labor, and with no waste. No more sweat shops, no more smoke-belching factories, no more grinding workdays or long commutes. The magical molecular assemblers will do it all, permanently eliminating poverty in the process.

Then there would be the medical miracles performed by other types of molecular-scale devices that would repair or rejuvenate your body's cells, killing the cancerous or other bad ones, and nudging the rest of them toward unprecedented levels of youth, health, and durability. All without $420 bottles of face cream.

There's a downside to all this, of course, and it has nothing to do with Michael Crichton-ish swarms of uncontrolled, predatory nanobots hunting down people and animals. Rather, it has to do with the question of what the mass of men and women are going to do when, newly unchained from their jobs, and blessed or cursed with longer life spans, they have oceans of free time to kill. Free time is not a problem for the geniuses and creators. But for the rest of us, what will occupy our idle hands? There is only so much golf you can play.

But perhaps this is a problem that will never have to be faced. The bulk of mainstream scientists pay little attention to radical nanotechnology, regarding its more extravagant claims as science-fictional and beyond belief. Before he died, chemist Richard Smalley, a Nobel prizewinner, made a cottage industry out of arguing that insurmountable technical difficulties at the chemical bonding level would keep radical nanotechnology perpetually in the pipe dream stage. Nobody knows whether he was right about that.

Some people may hope that he was. Maybe changing everything is not so attractive an idea as it seems at first glance.


DOUGLAS RUSHKOFF
Media Analyst; Documentary Writer; Author, Get Back in the Box

THE DISCOVERY OF INTELLIGENT LIFE FROM SOMEWHERE ELSE

We're talking about changing everything—not just our abilities, relationships, politics, economy, religion, biology, language, mathematics, history and future, but all of these things at once. The only single event I can see shifting pretty much everything at once is our first encounter with intelligent, extra-terrestrial life.

The development of any of our current capabilities—genetics, computing, language, even compassion—feels like an incremental advance in existing abilities. As we've seen before, the culmination of one branch of inquiry always just opens the door to a new branch, and never yields the wholesale change of state we anticipated. Nothing we've done in the past couple of hundred thousand years has truly changed everything, so I don't see us doing anything in the future that would change everything, either.

No, I have the feeling that the only way to change everything is for something to be done to us, instead. Just imagining the encounter of humanity with an "other" implies a shift beyond the solipsism that has characterized our civilization since our civilization was born. It augurs a reversal as big as the encounter of an individual with its offspring, or a creature with its creator. Even if it's the result of something we've done, it's now independent of us and our efforts.

To meet a neighbor, whether outer, inner, cyber- or hyper- spatial, finally turns us into an "us." To encounter an other, whether a god, a ghost, a biological sibling, an independently evolved life form, or an emergent intelligence of our own creation, changes what it means to be human.

Our computers may never inform us that they are self-aware, extra-terrestrials may never broadcast a signal to our SETI dishes, and interdimensional creatures may never appear to those who aren't taking psychedelics at the time—but if any of them did, it would change everything.


JUAN ENRIQUEZ
CEO, Biotechonomy; was Founding Director, Harvard Business School's Life Sciences Project; Author, The Untied States of America

HOMO EVOLUTIS

Speciation is coming. Fast. We keep forgetting that we are but one of several hominids that have walked the Earth (erectus, habilis, neanderthalis, heidelbergensis, ergaster, australopithecus). We keep thinking we are the one and only, the special. But we could easily not have been a dominant species. Or even a species anymore. We blissfully ignore the fact that we came within about 2,000 specimens of going extinct (which is why human DNA is virtually identical).

There is not much evidence, historically, that we are the be-all and end-all, or that we will remain the dominant species. The fossil history of the planet tells tales of at least six mass extinctions. In each cycle, most life was toast as DNA/RNA hit a reboot key. New species emerged to adapt to new conditions. Asteroid hits? Do away with oceans of slime. World freezes to the Equator? Microbes dominate. Atmosphere fills with poisonous oxygen? No worries; life eventually blurts out obnoxious mammals.

Unless we believe that we have now stabilized all planetary and galactic variables, these cycles of growth and extinction will continue time and again. 99% of species, including all other hominids, have gone extinct. Often this has happened over long periods of time. What is interesting today, 200 years after Darwin's birth, is that we are taking direct and deliberate control over the evolution of many, many species, including ourselves. So the single biggest game changer will likely be the beginning of human speciation. We will begin to get glimpses of it in our lifetime. Our grandchildren will likely live it.

There are at least three parallel tracks on which this change is running towards us. The easiest to see and comprehend is taking place among the "handicapped." As we build better prostheses, we begin to see equality. Legless Oscar Pistorius attempting to put aside the Paralympics and run against able-bodied Olympians is but one example. In Beijing he came very close, but did not meet the qualifying times. However, as materials science, engineering, and design advance, by the next Olympics he and his disciples will be competitive. And one Olympics after that, the "handicapped" could be unbeatable.

It's not just limbs. What started out as large cones for the hard of hearing eventually became pesky, malfunctioning hearing aids. Then came discreet, effective, miniaturized buds. Now surgically implanted cochlear devices allow the deaf to hear. But unlike natural evolution, which requires centuries, digital technologies double in power and halve in price every few months. Soon those with implants will hear as well as we do, and, a few months after that, their hearing may be more acute than ours. Likely the devices will span a broad and adjustable tonal range, including that of species like dogs, bats, or dolphins. Wearers will be able to adapt to various environments at will. Perhaps those with natural hearing will file discrimination lawsuits because they were not hired by symphony orchestras…

Speciation does not have to be mechanical; there is a second, parallel, fast-moving track in stem cell and tissue engineering. While the global economy melted down this year, a series of extraordinary discoveries opened interesting options that will be remembered far longer than the current NASDAQ index. Labs in Japan and Wisconsin rebooted skin cells and turned them into stem cells. We are now closer to a point where any cell in our body can be rebooted back to its original factory settings (a pluripotent stem cell) and can rebuild any part of our body. At the same time, a Harvard team stripped a mouse heart of all its cells, leaving only cartilage. The cartilage was covered in mouse stem cells, which self-organized into a beating heart. A Wake Forest group was regrowing human bladders and implanting them into accident and cancer victims. By year's end, a European team had taken a trachea from a dead donor, stripped off its cells, and then covered the sinew with bone marrow cells taken from a patient dying of tuberculosis. These cells self-organized and regrew a fully functional trachea, which was implanted into the patient. There was no need for immunosuppressants; her body recognized the cells covering the new organ as her own…

Again, this is an instance where treating the sick and the needy can quickly expand into a "normal" population seeking elective procedures. The global proliferation of plastic surgery shows how many are willing to undergo great expense, pain, and inconvenience to enhance their bodies. Between 1996 and 2002, elective cosmetic surgery increased 297%, while minimally invasive procedures increased 4,146%. As artificial limbs, eyes, ears, and cartilage begin to provide significant advantages, procedures developed to enhance the quality of life for the handicapped may become common.

After the daughter of one of my friends tore her tendons horseback riding, doctors told her they would have to harvest parts of her own tendons and hamstrings to rebuild her leg. Because she was so young, the crippling procedure would have to be repeated three times as her body grew. But her parents knew tissue engineers were growing tendons in a lab, so she was one of the first recipients of a procedure that allows natural growth and no harvesting. Today she is a successful ski racer, but her coach feels her "damaged" knee is far stronger and has asked whether the same procedure could be done on the undamaged knee…

As we regrow or engineer more body parts we will likely significantly increase average life span and run into a third track of speciation. Those with access to Google already have an extraordinary evolutionary advantage over the digitally illiterate. Next decade we will be able to store everything we see, read, and hear in our lifetime. The question is whether we can re-upload and upgrade this data as the basic storage organ deteriorates, and whether we can enhance this organ's cognitive capacity internally and externally. MIT has already brought together many of those interested in cognition—neuroscientists, surgeons, radiologists, psychologists, psychiatrists, computer scientists—to begin to understand this black box. But rebooting other body parts will likely be easier than rebooting the brain, so this will likely be the slowest track but, over the long term, the one with the greatest speciation impact.

Speciation will not be a deliberate, programmed event. Instead it will involve an ever faster accumulation of small, useful improvements that eventually turn homo sapiens into a new hominid. We will likely see glimpses of this long-lived, partly mechanical, partly regrown creature that continues to rapidly drive its own evolution. As the branches of the tree of life, and of hominids, continue to grow and spread, many of our grandchildren will likely engineer themselves into what we would consider a new species, one with extraordinary capabilities, a homo evolutis.


ROGER C. SCHANK
Psychologist & Computer Scientist; Engines for Education Inc.; Author, Making Minds Less Well Educated Than Our Own

WISDOM REBORN

An executive I know at a consumer products company was worrying about how to make the bleach his company produces better. He thought it would be nice if the bleach didn't cause "collateral damage." That is, he wanted it to harm bad stuff without harming good stuff. He seized upon the notion of collateral damage and began to wonder where else collateral damage was a problem. Chemotherapy came to mind, and he visited some oncologists who gave him some ideas about what they did to make chemotherapy less harmful to patients. He then applied those same ideas to improve his company's bleach.

He began to wonder about what he had done and how he had done it. He wanted to be able to do this sort of thing again. But what is this sort of thing and how can one do it again?

In bygone days we lived in groups that had wise men (and women) who told stories to younger people if they thought that those stories might be relevant to their needs. This was called wisdom, and teaching, and it served as a way of passing one generation's experiences to the next.

We have lost this ability to some extent because we live in a much larger world, where the experts are not likely to be in the next cave over and where there is a lot more to have expertise about. Nevertheless, we, as humans, are set up to deliver and make use of just in time wisdom. We just aren't that sure where to find it. We have created books, and schools, and now search engines to replace what we have lost. Still, it would be nice if there were wisdom to be had without having to look hard to find it.

Those days of just in time storytelling will return. The storyteller will be your computer. The computers we have today are capable of understanding your needs and finding just the right (previously archived and indexed) wise man (or woman) to tell you a story, just when you need it, that will help you think something out. Some work needs to be done to make this happen of course.

No more looking for information. No more libraries. No more key words. No more search engines.

Information will find you, and just in the nick of time. And this will "change everything."

You are seeing the beginning of this today, but it is being done in a mindless and commercial way, led of course by Google ads that watch the words you type and match them to ads they have written that contain those words. (I receive endless offers of online degrees, for example, because that is what I often write about.) Three things will change:

1. The information that finds you will be relevant and important to what you are working on and will arrive just in time.
2. The size of information will change. No more book-sized chunks of information (book length is an artifact of what sells—there are no ten-page books).
3. A new form of publishing will arrive that serves to vet the information you receive. Experts will be interviewed and their best stories will be indexed. Those stories will live forever, waiting to be told to someone at just the right moment.

In the world that I am describing the computer has to know what you are trying to accomplish, not what words you just typed, and it needs to have an enormous archive of stories to tell you. Additionally it needs to have indexed all the stories it has in its archives to activities you are working on in such a way that the right story comes up at the right time.

What needs to happen to make this a reality? Computers need an activity model. They need to know what you are doing and why. As software becomes more complex and more responsible for what we do in our daily lives, this state of affairs is inevitable.

An archive needs to be created that has all the wisdom of the world in it. People have sought to do this for years, in the form of encyclopedias and such, but they have failed to do what was necessary to make those encyclopedias useful. There is too much in a typical encyclopedia entry, not to mention the absurd amount of information in a book. People are set up to hear stories, and stories don't last all that long before we lose our ability to concentrate on their main point, their inherent wisdom, if you will. People tell each other stories all the time, but when they write or lecture they are permitted (or encouraged) to go on way too long (as I am doing now).

Wisdom depends upon goal-directed prompts that say what to do when certain conditions are encountered. To put this another way, an archive of key strategic ideas about how to achieve goals under certain conditions is just the right resource to be interacting with, enabling a good story to pop up when you need it. The solution involves goal-directed indexing. Ideas such as "collateral damage" are indices to knowledge. We are not far from the point where computers will be able to recognize collateral damage when it happens and find other examples that help you think something out.
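A minimal sketch, in Python, of what such goal-directed indexing might look like (the index terms, stories, and matching rule here are all hypothetical illustrations, not Schank's actual system):

    from dataclasses import dataclass

    @dataclass
    class Story:
        teller: str        # whose experience this is
        lesson: str        # the point of the story
        indices: frozenset # abstract goal-level tags, not surface keywords

    # A toy archive; both entries are invented for illustration.
    ARCHIVE = [
        Story("oncologist",
              "deliver the treatment in cycles so healthy cells can recover",
              frozenset({"reduce-collateral-damage", "protect-the-good"})),
        Story("bleach developer",
              "buffer the active agent so it attacks stains but spares fabric",
              frozenset({"reduce-collateral-damage", "product-reformulation"})),
    ]

    def remind(activity_indices):
        """Return archived stories whose goal-level indices overlap the
        indices of the activity the user is currently working on."""
        return [s for s in ARCHIVE if s.indices & set(activity_indices)]

    # Someone designing a gentler pesticide would be reminded of both
    # stories, via the shared "reduce-collateral-damage" index.
    for story in remind({"reduce-collateral-damage"}):
        print(f"{story.teller}: {story.lesson}")

The essential design choice is that the index is the abstract goal ("reduce collateral damage") rather than the surface keywords, which is why a chemotherapy story can surface while you are working on bleach.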

Having a "reminding machine" that serves up universal wisdom as needed will indeed change everything. We will all become much more likely to profit from humanity's collective wisdom by having a computer at the ready to help us think.


STUART KAUFFMAN
Director, The Institute for Biocomplexity and Informatics, The University of Calgary; Author, Reinventing the Sacred

THE OPEN UNIVERSE

John Brockman's question is dramatic: What will change everything? Of course, no one knows. But the fact that no one knows may be the feature of our lives and the universe that does change everything. Reductionism has reigned as our dominant world view for 350 years in Western society. Physicist Steven Weinberg states that when the science shall have been done, all the explanatory arrows will point downward, from societies to people, to organs, to cells, to biochemistry, to chemistry and ultimately to physics and the final theory.

I think he is wrong: the evolution of the biosphere, the economy, our human culture and perhaps aspects of the abiotic world, stand partially free of physical law and are not entailed by fundamental physics. The universe is open.

Many physicists now doubt the adequacy of reductionism, including Philip Anderson and Robert Laughlin. Laughlin argues for laws of organization that need not derive from the fundamental laws of physics. I give one example. Consider a sufficiently diverse collection of molecular species, such as peptides, RNA, or small molecules, that can undergo reactions and are also candidates to catalyze those very reactions. It can be shown analytically that at a sufficient diversity of molecular species and reactions, so many of these reactions are expected to be catalyzed by members of the system that a giant catalyzed reaction network arises that is collectively autocatalytic. It reproduces itself.
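The logic can be sketched with a simple expected-value estimate (a stylized rendering of the argument with illustrative symbols, not the full theory). Let N molecular species admit R possible reactions, where R grows faster than N as diversity rises (longer polymers can be cleaved and ligated in ever more ways), and suppose any given species catalyzes any given reaction with a small independent probability p. Then the expected number of catalyzed reactions is

\[
\mathbb{E}[\text{catalyzed}] = p\,N\,R, \qquad \text{i.e. } p\,R \text{ catalyzed reactions per species},
\]

and pR grows without bound as diversity rises. Once each species' reactions are, on average, catalyzed by at least one member of the system, the catalyzed reactions connect into a giant web, the same threshold at which a random graph acquires a giant component, and collective autocatalysis becomes expected rather than miraculous.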

The central point about the autocatalytic set theory is that it is a mathematical theory, not reducible to the laws of physics, even if any specific instantiation of it requires actual physical "stuff". It is a law of organization that may play a role in the origin of life.

Consider next the number of possible proteins 200 amino acids long: 20 to the 200th power. Were the 10 to the 80th particles in the known universe doing nothing but making proteins of length 200 on the Planck time scale, with the universe some 10 to the 17th seconds old, it would require 10 to the 39th lifetimes of the universe to make all possible proteins of length 200 just once. This means that, above the level of atoms, the universe is on a unique trajectory. It is vastly non-ergodic. We will never make all complex molecules, organs, organisms, or social systems.
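A back-of-envelope check of the orders of magnitude (assuming a Planck time of roughly 10^-43 seconds):

\[
20^{200} = 10^{\,200 \log_{10} 20} \approx 10^{260},
\qquad
\underbrace{10^{80}}_{\text{particles}} \times \underbrace{\frac{10^{17}\ \mathrm{s}}{10^{-43}\ \mathrm{s}}}_{\text{Planck times}} \approx 10^{140} \ll 10^{260}.
\]

Even with every particle in the known universe assembling one protein per Planck time for the universe's entire history, only about 10^140 proteins could have been sampled, a vanishing fraction of the possibilities.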

In this second sense, the universe is indefinitely open "upward" in complexity.

Consider the human heart, which evolved in the non-ergodic universe. I claim the physicist can neither deduce nor simulate the evolutionary becoming of the heart. Simulation, given all the quantum throws of the dice, for example cosmic rays from somewhere mutating genes, seems out of the question. And were such infinitely or vastly many simulations carried out there would be no way to confirm which one captured the evolution of this biosphere.

Suppose we asked Darwin the function of the heart. "Pumping blood" is his brief reply. But there is more. Darwin noted that features of an organism of no selective use in the current environment might be selected in a different environment. These are called Darwinian "preadaptations" or "exaptations". Here is an example: Some fish have swim bladders, partially filled with air and partially with water, that adjust neutral buoyancy in the water column. They arose from lungfish. Water got into the lungs of some fish, and now there was a sac partially filled with air, partially filled with water, poised to become a swim bladder. Three questions arise: Did a new function arise in the biosphere? Yes, neutral buoyancy in the water column. Did it have cascading consequences for the evolution of the biosphere? Yes, new species, proteins and so forth.

Now comes the essential third question: Do you think you could say ahead of time all the possible Darwinian preadaptations of all organisms alive now, or just for humans? We all seem to agree that the answer is a clear "No". Pause. We cannot say ahead of time what the possible preadaptations are. As in the first paragraph, we really do not know what will happen. Part of the problem seems to be that we cannot prespecify all possible selective environments. How would we know we had succeeded? Nor can we prespecify the feature(s) of one or several organisms that might become preadaptations.

Then we can make no probability statement about such preadaptations: We do not know the space of possibilities, the sample space, so can construct no probability measure.

Can we have a natural law that describes the evolution of the swim bladder? If a natural law is a compact description available beforehand, the answer seems a clear No. But then it is not true that the unfolding of the universe is entirely describable by natural law. This contradicts our views since Descartes, Galileo and Newton. The unfolding of the universe seems to be partially lawless. In its place is a radically creative becoming.

Let me point to the Adjacent Possible of the biosphere. Once there were lungfish, swim bladders were in the Adjacent Possible of the biosphere. Before there were multicelled organisms, the swim bladder was not in the Adjacent Possible of the biosphere. Something wonderful is happening right in front of us: When the swim bladder arose it was of selective advantage in its context. It changed what was Actual in the biosphere, which in turn created a new Adjacent Possible of the biosphere. The biosphere self-consistently co-constructs itself into its ever changing, unstatable Adjacent Possible.

If the becoming of the swim bladder is partially lawless, it certainly is not entailed by the fundamental laws of physics, so cannot be deduced from physics. Then its existence in the non-ergodic universe requires an explanation that cannot be had by that missing entailment. The universe is open.

Part of the explanation rests in the fact that life seems to be evolving ever more positive-sum games. As organismic diversity increases, and the "features" per organism increase, there are more ways for selection to select for mutualisms that become the conditions of joint existence in the universe. The hummingbird, sticking her beak in the flower for nectar, rubs pollen off the flower, flies to the next flower for nectar, and pollen rubs off on the stigma of the next flower, pollinating it. But these mutualistic features are the very conditions of one another's existence in the open universe. The biosphere is rife with mutualisms. In biologist Scott Gilbert's fine phrase, these are codependent origination—an ancient Buddhist phrase. In this open universe, beyond entailment by fundamental physics, we have partial lawlessness, ceaseless creativity, and forever co-dependent origination that changes the Actual and the ever new Adjacent Possible we ceaselessly, self-consistently co-construct. More, the way this unfolds is neither fully lawful, nor is it random. We need to re-envision ourselves and the universe.


KARL SABBAGH
Writer and Television Producer; Author, The Riemann Hypothesis

A FAREWELL TO HARM

Much of the misery in the world today — as it always has been — is due to the human propensity to contemplate, or actually commit, violence against another human being. It's not just assaults and murders that display that propensity. Someone who designs a weapon, punishes a child, declares war or leaves a hit-and-run victim by the side of the road has defined 'harming another human being' as a justifiable action for himself. How different the world would be if, as a biologically determined characteristic of future human beings, there were so strong a cognitive inhibition against such actions that people would be incapable of carrying them out, just as most of us are incapable of moving our ears.

It must be the case that in the brains of everyone, from abusive parents and rapists to arms dealers and heads of state, there can arise a concatenation of nerve impulses which allows someone to see as 'normal' — or at least acceptable — the mutilation, maiming or death of another for one's own pleasure, greed or benefit. Suppose the pattern of that series of impulses was analysable exactly, with future developments of fMRI, PET scans or technology as yet uninvented. Perhaps every decision to kill or harm another person can be traced to a series of nerve impulses that arise in brain centre A, travel in a microsecond to areas B, C, and D, inhibit areas E and F, and lead to a previously unacceptable decision becoming acceptable. Perhaps we would discover a common factor between the brain patterns of someone who is about to murder a child, a head of state signing a bill to initiate a nuclear weapons programme, and an engineer designing a new type of cluster bomb. All of them accept at some intellectual level that it is perfectly all right for their actions to cause harm or death to another human. The brains of all of them, perhaps, experience pattern D, the 'death pattern'.

If such a specific pattern of brain activity were detectable, could methods then be devised that prevented or disrupted it whenever it was about to arise? At its most plausible — and least socially acceptable — everyone could wear microcircuit-based devices that detected the pattern and suppressed or disrupted it, such that anyone in whom the impulse arose would instantaneously lose any will to carry it out. Less plausible, but still imaginable, would be some sophisticated chemical suppressant of 'pattern D', genetically engineered to act at specific synapses or on specific neurotransmitters, and delivered in some way that reached every single member of the world's population. The 'pattern D suppressant' could be used as a water additive, like chlorine, acceptable now to prevent deaths from dirty water; or as inhalants sprayed from the air; or in genetically modified foodstuffs; even, perhaps, alteration of the germ cell line in one generation that would forever remove pattern D from future generations.

Rapes would be defused before they happened; soldiers — if there were still armies — would be inhibited from firing as their trigger fingers tightened, except of course there would be no one to fire at since enemy soldiers, insurgents, or terrorists would themselves be unable to carry their violent acts to completion.

Would the total elimination of murderous impulses from the human race have a downside? Well, of course, one single person who escaped the elimination process could then rule the world. He — probably a man — could oppress and kill with impunity since no one else would have the will to kill him. Measures would have to be devised to deal with such a situation. Such a person would be so harmful to the human race that, perhaps, plans would have to be laid to control him if he should arise. Tricky, this one, since he couldn't be killed, as there would be no one able to kill him or even to design a machine that would kill him, as that also would involve an ability to contemplate the death of another human being.

But setting that possibility aside, what would be the disadvantages of a world in which, chemically or electronically, the ability to kill or harm another human being would be removed from all people? Surely, only good could come from it. Crimes motivated by greed would still be possible, but robberies would be achieved with trickery rather than at the point of a pistol; gang members might attack each other with insults and taunts rather than razors or coshes; governments might play chess to decide on tricky border issues; and deaths from road accidents would go down because even the slightest thought about one's own behaviour causing the death of another would be so reminiscent of 'pattern D' that we would all drive much more carefully to avoid it. Deaths from natural disasters would continue, but charitable giving and international aid in such situations would soar as people realised that not helping to prevent them in future would be almost as bad as the old and now eliminated habit of killing people.

A method to eliminate 'pattern D' will lead to the most significant change ever in the way humans — and therefore societies — behave. And somewhere, in the fields of neurobiology or genetic modification today the germ of that change may already be present.


