2009

WHAT WILL CHANGE EVERYTHING?



DAVID EAGLEMAN
Assistant Professor of Neuroscience, Baylor College of Medicine; Author, Sum

SILICON IMMORTALITY : DOWNLOADING CONSCIOUSNESS INTO COMPUTERS

While medicine will advance in the next half century, we are not on a crash-course for achieving immortality by curing all disease. Bodies simply wear down with use. We are on a crash-course, however, with technologies that let us store unthinkable amounts of data and run gargantuan simulations. Therefore, well before we understand how brains work, we will find ourselves able to digitally copy the brain's structure and to download the conscious mind into a computer.

If the computational hypothesis of brain function is correct, it suggests that an exact replica of your brain will hold your memories, will act and think and feel the way you do, and will experience your consciousness — irrespective of whether it's built out of biological cells, Tinkertoys, or zeros and ones. The important part about brains, the theory goes, is not the structure itself but the algorithms that ride on top of it. So if the scaffolding that supports the algorithms is replicated — even in a different medium — then the resultant mind should be identical. If this proves correct, it is almost certain we will soon have technologies that allow us to copy and download our brains and live forever in silica. We will not have to die anymore. We will instead live in virtual worlds like the Matrix. I assume there will be markets for purchasing different kinds of afterlives, and sharing them with different people — this is the future of social networking. And once you are downloaded, you may even be able to watch the death of your outside, real-world body, in the manner that we would view an interesting movie.

Of course, this hypothesized future embeds many assumptions, the speciousness of any one of which could topple the house of cards. The main problem is that we don't know exactly which variables are critical to capture in our hypothetical brain scan. Presumably the important data will include the detailed connectivity of the hundreds of billions of neurons. But knowing the point-to-point circuit diagram of the brain may not be sufficient to specify its function. The exact three-dimensional arrangement of the neurons and glia is likely to matter as well (for example, because of three-dimensional diffusion of extracellular signals). We may further need to probe and record the strength of each of the trillions of synaptic connections. In a still more challenging scenario, the states of individual proteins (phosphorylation states, exact spatial distribution, articulation with neighboring proteins, and so on) will need to be scanned and stored. It should also be noted that a simulation of the central nervous system by itself may not be sufficient for a good simulation of experience: other aspects of the body may require inclusion, such as the endocrine system, which sends and receives signals from the brain. These considerations potentially lead to billions of trillions of variables that need to be stored and emulated.
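To get a feel for the scale Eagleman is gesturing at, here is a back-of-envelope sketch (the neuron and synapse counts are order-of-magnitude estimates from the neuroscience literature; the per-synapse molecular detail is a stand-in assumption, not a measured figure):

```python
# Rough storage estimate for a whole-brain scan. Order-of-magnitude
# figures only; the protein-state count per synapse is a placeholder.
NEURONS = 1e11               # ~a hundred billion neurons
SYNAPSES_PER_NEURON = 1e4    # ~ten thousand connections per neuron
SYNAPSES = NEURONS * SYNAPSES_PER_NEURON        # ~1e15 synapses

# Scenario 1: one number (connection strength) per synapse.
weights_only = SYNAPSES                         # ~1e15 variables

# Scenario 2: per-synapse molecular bookkeeping (placeholder: 1e6
# protein states per synapse -- phosphorylation, position, etc.).
PROTEIN_STATES_PER_SYNAPSE = 1e6
molecular_detail = SYNAPSES * PROTEIN_STATES_PER_SYNAPSE  # ~1e21

print(f"synaptic weights alone: {weights_only:.0e} variables")
print(f"with molecular detail:  {molecular_detail:.0e} variables")
```

Even with these crude numbers, the molecular scenario lands at around 10^21 variables, i.e. the "billions of trillions" the essay mentions.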

The other major technical hurdle is that the simulated brain must be able to modify itself. We need not only the pieces and parts, but also the physics of their ongoing interactions — for example, the activity of transcription factors that travel to the nucleus and cause gene expression, the dynamic changes in location and strength of the synapses, and so on. Unless your simulated experiences change the structure of your simulated brain, you will be unable to form new memories and will have no sense of the passage of time. Under those circumstances, is there any point in immortality?
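As a toy illustration of the point (illustrative only, nothing like a serious brain model), here is a minimal network of simulated rate neurons whose connection weights change with their own activity under a simple Hebbian rule; freeze the update and the "brain" can no longer be changed by its experiences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                            # a toy network of four rate neurons
W = rng.normal(0, 0.1, (n, n))   # synaptic weights (the "structure")
np.fill_diagonal(W, 0.0)
LEARNING_RATE = 0.01

def step(x, W, plastic=True):
    """One timestep: activity propagates through the network, then
    (optionally) the weights themselves change with that activity."""
    y = np.tanh(W @ x)                              # neural activity
    if plastic:
        W = W + LEARNING_RATE * np.outer(y, x)      # Hebbian update
    return y, W

x = rng.normal(0, 1, n)          # an "experience" (input pattern)
for _ in range(100):
    x, W = step(x, W, plastic=True)

# With plastic=False, W never changes: inputs are processed but
# nothing is retained -- no new memories, no sense of elapsed time.
```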

The good news is that computing power is blossoming sufficiently quickly that we are likely to make it within a half century.  And note that a simulation does not need to be run in real time in order for the simulated brain to believe it is operating in real time.  There's no doubt that whole brain emulation is an exceptionally challenging problem.  As of this moment, we have no neuroscience technologies geared toward ultra-high-resolution scanning of the sort required — and even if we did, it would take several of the world's most powerful computers to represent a few cubic millimeters of brain tissue in real time.  It's a large problem.  But assuming we haven't missed anything important in our theoretical frameworks, then we have the problem cornered and I expect to see the downloading of consciousness come to fruition in my lifetime.  


ALEXANDER VILENKIN
L. and J. Bernstein Professor in Evolutionary Science; Director, Institute of Cosmology, Tufts University; Author, Many Worlds in One

AVOIDING DOOMSDAY

The long-term prospects of our civilization here on Earth are very uncertain. We can be destroyed by an asteroid impact or a nearby supernova explosion, or we can self-destruct in a nuclear or bacteriological war. It is a matter not of if but of when disaster will strike, and the only sure way for humans to survive in the long run is to spread beyond the Earth and colonize the Galaxy. The problem is that our chances of doing that before we are wiped out by some sort of catastrophe appear to be rather bleak.

The Doomsday argument

The probability for a civilization to survive the existential challenges and colonize its galaxy may be small, but it is non-zero, and in a vast universe such civilizations should certainly exist. We shall call them large civilizations.  There will also be small civilizations which die out before they spread much beyond their native planets.

For the sake of argument, let us assume that small civilizations do not grow much larger than ours and die soon after they reach their maximum size. The total number of individuals who lived in such a civilization throughout its entire history is then comparable to the number of people who ever lived on Earth, which is about 400 billion people, 60 times the present Earth population.

A large civilization contains a much greater number of individuals. A galaxy like ours has about 100 billion stars. We don't know what fraction of stars have planets suitable for colonization, but with a conservative estimate of 0.01% we would still have about 10 million habitable planets per galaxy. Assuming that each planet will reach a population similar to that of the Earth, we get 4 million trillion individuals. (For definiteness, we focus on human-like civilizations, disregarding the planets inhabited by little green people with 1000 people per square inch.) The numbers can be much higher if the civilization spreads well beyond its galaxy. The crucial question is: what is the probability P for a civilization to become large? 

It takes 10 million (or more) small civilizations to provide the same number of individuals as a single large civilization. Thus, unless P is extremely small (less than one in 10 million), individuals live predominantly in large civilizations. That's where we should expect to find ourselves if we are typical inhabitants of the universe. Furthermore, a typical member of a large civilization should expect to live at a time when the civilization is close to its maximum size, since that is when most of its inhabitants are going to live. These expectations are in glaring conflict with what we actually observe: we either live in a small civilization or at the very beginning of a large civilization. With the assumption that P is not very small, both of these options are very unlikely – which indicates that the assumption is probably wrong.
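The arithmetic can be made explicit, using the essay's own round numbers:

```latex
N_{\mathrm{small}} \approx 4\times10^{11}
  \quad\text{(people who ever lived on Earth)}
\\
N_{\mathrm{large}} \approx
  \underbrace{10^{11}\times10^{-4}}_{\text{habitable planets}}
  \times\, 4\times10^{11} \;=\; 4\times10^{18}
\\
% If a fraction P of civilizations become large, the fraction of
% all individuals who find themselves in a large civilization is
f_{\mathrm{large}} =
  \frac{P\,N_{\mathrm{large}}}{P\,N_{\mathrm{large}} + (1-P)\,N_{\mathrm{small}}}
\\
% Our observation (a small or just-starting civilization) is only
% unsurprising if f_large is well below 1, which requires
P \;\lesssim\; \frac{N_{\mathrm{small}}}{N_{\mathrm{large}}} = 10^{-7}
% -- less than about one in ten million.
```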

If indeed we are typical observers in the universe, then we have to conclude that the probability P for a civilization to survive long enough to become large must be very tiny. In our example, it cannot be much more than one in 10 million.

This is the notorious "Doomsday argument". First suggested by Brandon Carter about 35 years ago, it inspired much heated debate and has often been misinterpreted. In the form given here it was discussed by Ken Olum, Joshua Knobe, and me.

Beating the odds

The Doomsday argument is statistical in nature. It does not predict anything about our civilization in particular. All it says is that the odds for any given civilization to grow large are very low. At the same time, some rare civilizations do beat the odds.

What distinguishes these exceptional civilizations? Apart from pure luck, civilizations that dedicate a substantial part of their resources to space colonization, start the colonization process early, and do not stop, stand a better chance of long-term survival.

With many other diverse and pressing needs, this strategy may be difficult to implement, but that may be one of the reasons why large civilizations are so rare. And then, there is no guarantee. Only when the colonization is well underway, and the number of colonies grows faster than they are dying out, can one declare victory. But if we ever reach this stage in the colonization of our Galaxy, it would truly be a turning point in the history of our civilization.

Where are they?

One question that needs to be addressed is: why is our Galaxy not yet colonized? There are stars in the Galaxy that are billions of years older than our Sun, and it should take much less than a billion years to colonize the entire Galaxy. So, we are faced with Enrico Fermi's famous question: Where are they? The most probable answer, in my view, is that we may be the only intelligent civilization in the entire observable universe.

Our cosmic horizon is set by the distance that light has traveled since the big bang. It sets the absolute limit to space colonization, since no civilization can spread faster than the speed of light. There is a large number of habitable planets within our horizon, but are these planets actually inhabited? Evolution of life and intelligence requires some extremely improbable events. Theoretical estimates (admittedly rather speculative) suggest that their probability is so low that the nearest planet with intelligent life may be far beyond the horizon. If this is really so, then we are responsible for a huge chunk of real estate, 80 billion light years in diameter. Our crossing the threshold to a space-colonizing civilization would then really change everything. It would make the difference between a "flicker" civilization that blinks in and out of existence and a civilization that spreads through much of the observable universe, and possibly transforms it.


VERENA HUBER-DYSON
Mathematician, Emeritus Professor, Dept of Philosophy, University of Calgary; Author, Gödel's Theorems

HORIZONS BEYOND THE REACH OF BOOLEAN LOGIC, DIGITAL MANIPULATIONS AND NUMERICAL EVALUATIONS

What will change everything is a radical paradigm shift in the scientific method that opens up horizons beyond the reach of Boolean Logic, Digital Manipulations and Numerical Evaluations.

Due to my advanced age, I am not likely to witness the change.  But I am seeing signs and have my hunches.  These I will briefly spell out.

To change Everything, a radical paradigm shift must interrupt the scientific method's race:

STOP for a moment's reflection; what are you up to?

How do you know your dog would rather be a cat — just because you prefer cats? Did you ask him? Have you figured out how to ask him?

Having figured out how to do something is not enough reason for actually doing it. That's one aspect of the paradigm shift I am expecting, coming from inside the ranks. Evaluation of scientific results and their potential effects on the world as we know it is of particular urgency these days, when news spreads so easily through the population. Of course we do not want to regress to a system of classified information that generates elitism. This problem is giving rise to the not-so-new-anymore philosophical discipline of applied Ethics; if only it keeps itself scientifically well informed and focused, down to earth, on concrete issues.

The goal of this part of the shift is a tightening of the structure of the whole conglomerate of the sciences and their presentation in the media.

But this brings me to the more radical effect of the shift I am envisaging: a healing effect on the rift between the endeavors that are bestowed the label "scientific" and the proliferation of so-called "alternative" enterprises, many of which are striving to achieve the blessings of scientific grounding by experimentations, theories and statistical evaluations, whether appropriate or not. What I hope for is a true and fruitful symbiosis, one that leads to a deeper understanding of the meaning of Human Existence than the models of a machine or of a token created by a Superior Being for the mysterious purpose of suffering through life in the service of His Glory.

Where do I expect the decisive push to come from? Possibly from the young discipline of cognitive science, provided psychology, philosophy and physiology are ready to cooperate.  There are shoots rising up all over, but I won't embark on a list. Once the "real thing" is found or constructed it will be recognized. It will have shape and make sense.

The myth of the scientific method as the only approach to reality will become obsolete without loss to man's interaction with this world.  The path to understanding has to be prepared by a direct, still somewhat mysterious approach of hunches and intuitions in addition to direct perceptions and sensations.  Moreover the results of that procedure are useless unless suitably interpreted.

Well, this is as far as I am ready to go with this explanation of a hunch. The alternative to my current vagueness would be rigidity prone to misinterpretation.

As to my own turf, Mathematics, I do not believe there will be any radical change. Mathematics is a rock of a structure, here to stay. Mathematical insights do not change, they become clearer; dead ends are recognized as such, but what is proved beyond doubt is cumulative.

But methodological changes are in order here as well, along with meta-mathematical and philosophical interpretation of the nature of results. So is the evolution of an ever more lucid language.

I personally believe we'd do well to focus on Mathematical Intuitionism as our Foundations.  Boolean thinking has done its service by now.
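For readers unfamiliar with the term: intuitionistic logic differs from Boolean (classical) logic chiefly in rejecting principles that assert truth without supplying a construction, most famously:

```latex
P \lor \neg P              % law of the excluded middle: rejected
\\
\neg\neg P \rightarrow P   % double-negation elimination: rejected
% Intuitionistically, asserting P requires exhibiting a proof of P,
% not merely showing that \neg P is impossible.
```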

To sum up, what I am expecting of this paradigm shift is clarification, simplification and unification of our understanding, and with it the emergence of a more lucidly expressive language conducive to the End of Fragmentation of Knowledge.


ROBERT SAPOLSKY
Neuroscientist, Stanford University; Author, Monkeyluv

PEOPLE WHO CAN INTUIT IN SIX DIMENSIONS

We humans are pretty impressive when it comes to being able to extract information, to discern patterns from lots of little itsy bitsy data points. Take a musician sitting down with a set of instructions on a piece of paper — sheet music — and being able to turn it into patterned sound. And one step further is the very well-trained musician who can sit and read through printed music, even an entire orchestral score, and hear it in his head, and even feel swept up in emotion at various points in the reading. Even more remarkable is the judge in a composition competition, reading through a work that she has never heard before, able to turn that novel combination of notes into sounds in her head that can be judged to be hackneyed and derivative, or beautiful and original.

And, obviously, we do it in the scientific realm in a pretty major way. We come to understand how something works by being able to make sense of how a bunch of different independent variables interact in generating some endpoint. Oh, so that's how mitochondria have evolved to solve that problem, that's what a temperate zone rain forest does to balance those different environmental forces challenging it. Now I know.

The trouble is that it is getting harder to do that in the life sciences, and this is where something is going to have to happen which will change everything.

The root of the problem is technology outstripping our ability to really make use of it. This isn't so much about the ability to get increasingly reductive biological information. Scientists figured out quite some time ago how to sequence a gene, identify a mutation, get the crystallographic structure of a protein, or measure ion flow through a single channel in a cell.

What the recent development has been is to be able to get staggeringly large amounts of that type of information. We have not just sequenced genes, but sequenced our entire human genome. And we can compare it to that of other species, or can look at genome-wide differences between human populations, or even individuals, or information about tens of thousands of different genes. And then we can look at expression of those genes — which ones are active at which time in which cell types in which individuals in which populations in which species.

We can do epigenomics, where instead of cataloging which genes exist in an individual, we can examine which genes have been modified in a long-term manner to make it easier or harder to activate them (in each particular cell type). Or we can do proteomics, examining which proteins and in what abundance have been made as the end product of the activation of those genes, or post-translational proteomics, examining how those proteins have been modified to change their functions.

Meanwhile, the same ability to generate massive amounts of data has emerged in other realms of the life sciences. For example, it is possible to do near-continuous sampling of blood glucose levels, producing minute-by-minute determinations, or do ambulatory cardiology, generating heartbeat data 24/7 for days from an individual going about her business, or use state-of-the-art electrophysiological techniques to record the electrical activity of scores of individual neurons simultaneously.

So we are poised to be able to do massive genomo-epigenomo-proteonomo-glyco-endo-neurono-orooni-omic comparisons of the Jonas Brothers with Nelson Mandela with a dinosaur pelvis with Wall-E and thus better understand the nature of life.

The problem, of course, is that we haven't a clue what to do with that much data. By that, I don't mean "merely" how to store, or quantitatively analyze, or present it visually. I mean how to really think about it.

You can already see evidence of this problem in too many microarray papers (this is the approach where you can ask, "In this particular type of tissue, which genes are more active and which less active than usual under this particular circumstance?"). With the fanciest versions of this approach, you've got yourself thousands of bits of information at the end. And far too often, what is done with all this suggests that the scientists have hit a wall in terms of being able to squeeze insight out of their study.

For example, the conclusion in the paper might be, "Eleventy genes are more active under this circumstance, whereas umpteen genes are less active, and that's how things work." Or maybe the punch line is, "Of those eleventy genes that are more active, an awful lot of them have something to do with, say, metabolism, how's about that?" Or in a sheepish tone, it might be, "So changes occurred in the activity of eleventy + umpteen different genes, and we don't know what most of them do, but here are three that we do know about and which plausibly have something to do with this circumstance, so we're now going to focus on those three that we already know something about and ignore the rest."
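A toy version of the situation Sapolsky describes (synthetic numbers, not real expression data) makes the wall tangible: the screen itself is a few lines of code, and the insight is nowhere in its output.

```python
import numpy as np

rng = np.random.default_rng(42)
N_GENES = 20_000                          # genes on a typical array

# Simulated log2 fold-changes (treated vs. control) -- purely synthetic.
log2_fold = rng.normal(0.0, 0.5, N_GENES)

up   = np.flatnonzero(log2_fold >  1.0)   # "eleventy" genes more active
down = np.flatnonzero(log2_fold < -1.0)   # "umpteen" genes less active

print(f"{len(up)} genes up, {len(down)} genes down")
# The hard part is absent from this script: turning two long lists
# of gene IDs into an actual understanding of how the tissue works.
```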

In other words, the technologies have outstripped our abilities to be insightful far too often. We have some crutches — computer graphics allow us to display a three-dimensional scatter plot, rotate it, change it over time. But we still barely hold on.

The thing that is going to change everything will have to wait for, probably, our grandkids. It will come from their growing up with games and emergent networks and who knows what else that (obviously) we can't even imagine. And they'll be able to navigate that stuff as effortlessly as we troglodytes can currently change radio stations while driving while talking to a passenger. In other words, we're not going to get much out of these vast data sets until we have people who can intuit in six dimensions. And then, watch out.


BRUCE PARKER
Physical Oceanographer, Stevens Institute of Technology
                                                
THE SUCCESSOR TO NATURAL SELECTION IN HUMANS

Even with all the scientific and technological advances of the last two millennia and especially of the last century, humankind itself has not really changed.  The stories we read in our most ancient books do not seem alien to us.  On the contrary, the humans who wrote those works had the same needs and desires that we have today, though the means of meeting those needs and of fulfilling those desires may have changed in some of the details.

The human species has managed to survive a great number of truly monumental catastrophes, some naturally caused (floods, droughts, tsunamis, glaciations, etc.) and some the result of its own doing, often with the help of science and technology (especially in creating the tools of war). But such calamities did not really "change everything." (This statement is certainly not meant to minimize the tragedy of the millions of lives lost in these catastrophes.) Though we worry about the possible dramatic effects that an anthropogenically changed global climate might have, humankind itself will survive such changes (because of its science and technology), though we cannot predict how many people might tragically die because of it.

In terms of "what will change everything" the larger view is what will significantly change humankind itself.  From this human perspective, the last "event" that truly changed everything was over some period of time around 50,000 years ago when evolutionary advances finally led to intelligent humans who left Africa and spread out over the rest of the world, literally changing everything in the entire world.

Prior to that evolutionary advance in Africa, our ancestors' main motivations in life were like any other animal's — find food and avoid death until they could reproduce. After they evolved into intelligent beings their motivations in life expanded. Although they still pursued food and sex and tried to avoid death, they also spent increasing amounts of time on activities aimed at preventing boredom and making them feel good about themselves. These motivations have not changed in the succeeding millennia, though the means of satisfying them have changed often.

How humankind came to be what it is today was a result of natural selection.  Humans survived in the hostile environment around them (and went on to Earthly dominance) because of the evolved improvements in their brains.  One improvement was the development of curiosity and a desire to learn about the environment in which they lived.  But it was not simply a matter of becoming smarter.  The human species survived and succeeded in this world as much because of an evolved need for affection or connection with other human beings, a social bond.  It was both its increased intelligence and its increased social cooperation that led to increasing knowledge and eventually science and technology.  Not all individuals, of course, had the same degree of these characteristics, as shown by the wars and horrendous atrocities that humankind was capable of, but social evolution driven by qualities acquired from the previous species evolution did make progress overall.  The greatest progress and the greatest gain in knowledge only happened when people worked with each other in harmony and did not kill each other.

The evolution of human intelligence and cooperative social bonding tendencies took a very long time, though it seems quite fast when appreciating the incredible complexity of this intelligence and social bonding.  How many genes must have mutated and been naturally selected for to achieve this complexity?  We are here today as both a species and a society because of those gene changes and the natural selection process that over this long time period weeded out the bad changes and allowed the good changes to remain.

Technically, evolution of the human species as a result of natural selection stopped when we became a social animal — when the strong began protecting the weak and when our scientific and technological advances allowed us to extend the lives of those individuals unfortunate enough to have genetic weaknesses that would have killed them. With humans, artificial selection (selective breeding) was never a serious possibility as a replacement for natural selection, and as a result there have been no significant changes to the human species since its societies began.

But now, with the recent great advances in genetic engineering, we are in a position to change the human species for the first time in 50,000 years. We will be able to put new genes in any human egg or sperm we wish. The children born with these new genes will grow up and pass them on to their children. The extensive use of this genetic selection (or should we call it anthropogenic selection?) will rapidly pass new genes and their corresponding (apparently desired) traits throughout the population. But what will be the overall consequence? When selecting particular genes that we want while perhaps not understanding how particular gene combinations work, might we unknowingly begin a process that could change our good human qualities? While striving for higher intelligence could we somehow genetically diminish our capacity for compassion, or our inherent need for social bonding? How might the human species be changed in the long run? The qualities that got us here — the curiosity, the intelligence, the compassion and cooperation resulting from our need for social bonding — involve an incredibly complex combination of genes. Could these have been produced through genetic planning?

Our ever-expanding genetic capabilities will certainly "change everything" with respect to medicine and health, which will be a great benefit.  Our life span will also be greatly extended, a game-changing benefit to be sure, but it will also add to our overpopulation, the ultimate source of so many problems on our planet.  But the ultimate effect may be on the human species itself.  How many generations might it take before the entire human race is significantly altered genetically?  From a truly human perspective, that would really "change everything."


JAMES GEARY
Former Europe editor, Time Magazine; Author, Geary's Guide to the World's Great Aphorists

BRAIN-MACHINE INTERFACE (BMI)

J. Craig Venter may be on the brink of creating the first artificial life form, but one game-changing scientific idea I expect to live to see is the moment when a robotic device achieves the status of "living thing." What convinces me of this is not some amazing technological breakthrough, but watching some videos of the annual RoboCup soccer tournament organized by Georgia Tech in Atlanta. The robotics researchers behind RoboCup are determined to build a squad of robots capable of winning against the world champion human soccer team. For now, they are just competing against other robots.

For a human being to raise a foot and kick a soccer ball is an amazingly complex event, involving millions of different neural computations co-ordinated across several different brain regions. For a robot to do it — and to do it as gracefully as members of the RoboCup Humanoid League — is a major technical accomplishment. The cuddlier, though far less accomplished, quadrupeds in the Four-Legged League are also a wonder to behold. Plus, the robots are not programmed to do this stuff; they learn to do it, just like you and me.

These robots are marvels of technological ingenuity. They are also "living" proof of how easily, eagerly even, we can anthropomorphize robots — and why I expect there won't be much of a fuss when these little metallic critters start infiltrating our homes, offices, and daily lives.

I also expect to see the day when robots like these have biological components (i.e. some wetware to go along with their hardware) and when human beings have internal technological components (i.e. some hardware to go along with our wetware). Researchers at the University of Pittsburgh have trained two monkeys to munch marshmallows using a robotic arm controlled by their own thoughts. During voluntary physical movements, such as reaching for food, nerve cells in the brain start firing well before any movement actually takes place. It's as if the brain warms up for an impending action by directing specific clusters of neurons to fire, just as a driver warms up a car by pumping the gas pedal. The University of Pittsburgh team implanted electrodes in this area of the monkeys' brains and connected them to a computer operating the robotic limb. When the monkeys thought about reaching for a marshmallow, the mechanical arm obeyed that command. In effect, the monkeys had three arms for the duration of the experiments.
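At its simplest, the decoding step in such experiments is a linear mapping fitted from recorded firing rates to observed arm movement, which can then be driven by neural activity alone. Here is a minimal sketch with synthetic data (the Pittsburgh group's actual decoder was more sophisticated than this):

```python
import numpy as np

rng = np.random.default_rng(1)
N_NEURONS, N_SAMPLES = 50, 1000

# Training phase: record firing rates while the arm actually moves.
true_mapping = rng.normal(0, 1, (3, N_NEURONS))        # unknown to us
rates = rng.poisson(5, (N_SAMPLES, N_NEURONS)).astype(float)
velocity = rates @ true_mapping.T + rng.normal(0, 0.5, (N_SAMPLES, 3))

# Fit a linear decoder: firing rates -> 3D hand velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online phase: new neural activity (thought alone) drives the arm.
new_rates = rng.poisson(5, (1, N_NEURONS)).astype(float)
commanded_velocity = new_rates @ decoder   # sent to the robotic limb
print(commanded_velocity)
```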

In humans, this type of brain-machine interface (BMI) could allow paralyzed individuals to control prosthetic body parts as well as open up new fields of entertainment and exploration. "The body's going to be very different 100 years from now," Miguel Nicolelis, Anne W. Deane Professor of Neuroscience at Duke University and one of the pioneers of the BMI, has said. "In a century's time you could be lying on a beach on the east coast of Brazil, controlling a robotic device roving on the surface of Mars, and be subjected to both experiences simultaneously, as if you were in both places at once. The feedback from that robot billions of miles away will be perceived by the brain as if it was you up there."

In robots, a BMI could become a kind of mind. If manufacturers create such robots with big wet puppy dog eyes — or even wearing the face of a loved one or a favorite film star — I think we'll grow to like them pretty quickly. When they have enough senses and "intelligence," then I'm convinced that these machines will qualify as living things. Not human beings, by any means; but kind of like high-tech pets. And turning one off will be the moral equivalent of shooting your dog.


DAVID BODANIS
Writer; Consultant; Author, Passionate Minds

MASSIVE TECHNOLOGICAL FAILURE

The big one coming up is going to be massive technological failure: so strong that it will undermine faith in science for a generation or more.

It's going to happen because science is expanding at a fast rate, and over the past few centuries, the more science we've had — albeit with some time lags — the more powerful technology we have had.

That's where the problem will arise. With each technology, the amplitude of its effects gets greater: both positive and negative. Automobiles, for example, are an early 20th century technology (based on 18th and 19th century science), which caused a certain amount of increased mobility, as well as a certain number of traffic deaths. The amount on each side was large, but not so large that the negative effects couldn't be accepted. Even when the negative effects came to be understood to include land-use problems or pollution, those have still generally been considered manageable. There's little desire to terminate all scientific inquiry because of them.

Nuclear power is a mid 20th century technology (based on early 20th century science). Its overall power is greater still, and so is the amplitude of its destructive possibilities. Through good chance its negative use has, so far, been restricted to the destruction of two cities. Yet even that led to a great wave of generalized, anti-scientific feeling, not least from among the many people who'd always felt it's impious to interfere with the plans of God.

The internet is in many ways an even more powerful technology (based on early 20th century quantum mechanics, and mid 20th century information theory). So far its problems have been manageable, whether surveillance of personal activity or virus-like intrusions that interrupt important services. But the internet will get stronger and more widespread, as will the collaborative and other tools allowing its misuse: the negative effects will be greater still.

Thus the dynamic we face. Science brings magic from the heavens. In the next few decades, clearly, it will get stronger. Yet just as inevitably, some one of its negative amplitudes — be it in harming health, or security, or something as yet unrecognized — will pass an acceptable threshold. When that happens, society is unlikely to respond with calm guidelines. Instead, there will be blind fury against everything science has done.


ANDRIAN KREYE
Arts & Ideas Editor, Sueddeutsche Zeitung, Munich

A NEW APPROACH TO ENERGY PRODUCTION

It should be an easy transition. Instead of being harvested as a commodity, new sources of power will be manufactured. The medieval quest for new sources of that life force called energy will be over, and with it all those white knights on horses conquering the wild lands where those sources happen to be. Technologically this will mean a shift from an energy industry dominated by geologists and engineers to a wave of innovations driven by biologists and chemists.

The thought process itself has already been set in motion. The surge of first-generation bio-fuels was based on the idea of renewable sources of energy. Still, most alternative energies, like solar and wind power, are based on the old way of thinking about harvesting, and most bio-fuels begin with a literal harvest of crops. Craig Venter's work on a microorganism that can transform CO2, sunlight and water into fuel is already jumping quite a few steps ahead.

This new approach will drastically reduce the EIoER ratio (the energy input required per unit of energy returned), which has so far held back the commercial viability of most innovations in the search for alternative sources of energy. Any fuel that can be synthetically "grown" in a lab or factory will be economically much more viable for mass production than the conversion of sunlight, wind or agricultural goods.
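Read literally (the acronym is the author's; energy input over energy returned is one natural reading), the ratio and its viability condition are:

```latex
\mathrm{EIoER} \;=\; \frac{E_{\mathrm{input}}}{E_{\mathrm{returned}}} \;<\; 1
% (equivalently, EROEI = E_returned / E_input > 1).
% Grown fuels carry the energy cost of farming, harvesting and
% refining in the numerator; a microorganism fed on CO2, sunlight
% and water could shrink that numerator drastically.
```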

Lab-based production of synthetic sources of energy will also end the geopolitical dependencies now tied to the consumption of power and thus change the course of recent history in the most dramatic fashion. This will eliminate the sources of many current and future conflicts, first and foremost in the Gulf region, but also in the Northern part of South America, in the Black Sea region and the increasingly exploitable Arctic.

The introduction of biological processes into the energy cycle will also minimize the impact of energy consumption on the environment. If made available cheaply, possibly as an open source endeavor, it will allow emerging nations to develop new arable land and create wealth while avoiding conflict and environmental negligence.

There could of course be downsides to the emergence of new sources of energy. Transitions are never easy, no matter how benign or progressive. The loss of economic and political power by oil- and gas-producing nations and corporations could become a new, if temporary, source of conflict. Unforeseen dangers in production might emerge, affecting the environment and public health. New monopolies could be formed.

The shift from harvesting to manufacturing energy would not only impact the economy, politics and the environment. Turning mankind from mere harvesters of energy into manufacturers would lead to a whole new way of thinking that could spur even greater innovations, because every form of economic and technological empowerment initiates leaps that go way beyond the practical application of new technologies. It's hard to predict where a new mindset will lead. One thing is for sure: it almost always leads to new freedoms and enlightenments.


JAMSHED BHARUCHA
Professor of Psychology, Provost, Senior Vice President, Tufts University

THE SYNCHRONIZATION OF BRAINS

An understanding of how brains synchronize — or fail to do so — will be a game-changing scientific development.

Few behavioral forces are as strong as the delineation of in-groups and out-groups: 'us' and 'them'. Group affiliation requires alignment, coupling or synchronization of the brain states of members. Synchronization yields cooperative behavior, promotes group cohesion, and creates a sense of group agency greater than the sum of the individuals in the group. In the extreme, synchronization yields herding behavior. The absence of synchronization yields conflict.

People come under the grip of ideologies, emotions and moods are infectious, and memes spread rapidly through populations. Ethnic, religious, and political groups act as monolithic forces. Mobs, cults and militias are characterized by the melding of large numbers of individuals into larger units, such that the brains of individuals operate in lockstep – a single organism controlled by a single — distributed — nervous system.

Leaders who mobilize large followings have an intuitive ability to synchronize brains or to plug into systems that already are synchronized.

Herding behavior has received a great deal of attention in economics. In the recent financial bubble that eventually burst, investors and regulators were swept up by a wave of blinding optimism and over-confidence. Contrary information was discounted, and analysis from first principles ignored.

Herding behavior is prevalent in times of war. A group that perceives itself to be under attack binds together as a collective fighting unit, without questioning. When swift synchronization is critical and the stakes are high, psychological forces such as duty, loyalty, conformity, compliance – all of which promote group cohesion — come to the fore, overwhelming the rational faculties of individual brains.

Synchronization is found in many species, although the mechanisms may not be the same. Flocks of birds fly in tight formation. Fish swim in schools, and to a distant observer appear as one aggregated organism. Wolves hunt in packs. Some instances of synchronization are driven by environmental cues that regulate individual brains in the same way. For example, light cycles and seasonal cycles can entrain biorhythms of individuals who share the genetic predisposition to be regulated in this way. In other cases, the co-evolution of certain behaviors together with the perception of these behaviors holds individuals together, as in the ability to both produce and recognize species-specific vocalizations.
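A standard toy model of this kind of mutual entrainment, borrowed from physics rather than from Bharucha's essay, is the Kuramoto model: oscillators with different natural frequencies pull on one another's phases and lock together once the coupling exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(7)
N, DT, STEPS = 100, 0.01, 5000
omega = rng.normal(0, 1, N)           # each unit's natural frequency
theta = rng.uniform(0, 2 * np.pi, N)  # initial phases
K = 4.0                               # coupling (try 0.1: no sync)

for _ in range(STEPS):
    # Every phase is pulled toward every other (mean-field coupling).
    pull = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += DT * (omega + pull)

# Order parameter r: 0 = incoherent crowd, 1 = perfect lockstep.
r = np.abs(np.mean(np.exp(1j * theta)))
print(f"synchrony r = {r:.2f}")
```

With strong coupling the population behaves like the single distributed organism described above; weaken the coupling and it falls apart into independent individuals.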

Synchronization is mediated by communication between brains. Communicative channels include language as well as non-verbal modes such as facial expressions, gestures, tone of voice, and music. Communication across regions of an individual brain is simply a special case of a system that includes communication between brains.

Elsewhere I have argued that music serves to synchronize brain states involved in emotion, movement, and the recognition of patterns — thereby promoting group cohesion. As with tradition or ritual, what's being synchronized needn't have intrinsic utility; it may not matter what's being synchronized. The very fact of synchronization can be a powerful source of group agency.

Just around the corner is an explosion of research that regards individual brains as nodes in a system bound together by multiple channels of communication. Information technology has provided novel ways for brains to align across great distances and over time. When a song becomes a hit, millions around the world are aligned, forming a virtual unit. In the future, brain prostheses and artificial interfaces for biological systems will add to the picture.

Some clues are emerging about how brains synchronize. The hot recent discovery is the existence of mirror neurons — brain cells that respond to the actions of other individuals as if one were making them oneself. Mirror systems are thought to generate simulations of the behavior of others in one's own brain, enabling mimicking and empathy. Other pieces of the puzzle have been accumulating for a while. Certain cases of frontal lobe damage result in asocial behavior.

Recent work on autism has drawn attention to the mechanisms whereby individuals connect with others. The brain facilitates (sometimes in unfortunate ways) the categorization of oneself and others into in-groups and out-groups. When white participants in an MRI machine view pictures of faces, the amygdala in the left hemisphere of the brain is more strongly activated when the faces are black than when they are white. The brain has circuits specialized for the perception of faces, which convey enormous amounts of information that enable us to recognize people and gauge their emotions and intentions.

Understanding how brains synchronize to form larger systems of behavior will have vast consequences for our grasp of group dynamics, interpersonal relations, education and politics. It will influence how we make sense of — and manage — the powerful unifying forces that constitute group behavior. For better and for worse, it will guide the development of technologies designed to interface with brains, spread knowledge, shape attitudes, elicit emotions, and stimulate action. As with all technological advances, leaders will seize on them to either improve the human condition or consolidate power.

Not all individuals are susceptible to synchronizing with others. Some reject the herd and lose out. Some chart a new course and become leaders. Being contrarian often requires enduring the psychological forces of stigma and ridicule. Understanding the conditions under which people resist will be part of the larger understanding of synchronization.

Understanding how brains synchronize – or fail to do so — will not emerge from a single new idea, but rather from a complex puzzle of scientific advances woven together. What is game-changing is that only recently have researchers begun to frame questions about brain function in terms not of individual brains but rather in terms of how individual brains are embedded in larger social and environmental systems that drive their evolution and development. This new way of framing brain and cognitive science — together with unforeseen technological developments — promises transformational integrations of current and future knowledge about how brains interact.


DANIEL GOLEMAN
Psychologist; Author, Ecological Intelligence

WHY DON'T RUNNING SHOES BIODEGRADE?

Every manmade object — all the things in our homes and workplace — has an invisible back story, a litany of sorry impacts over the course of the journey from manufacture to use to disposal. Take running shoes.

Despite the bells and whistles meant to make one brand of running shoe appeal more than another, at base they all reduce to three parts. The shoe's upper consists of nylon with decorative bits of plastics or synthetic leather. The "rubber" sole for most shoes is a petroleum-based synthetic, as is the spongy midsole, composed of ethylene vinyl acetate. Like any petrochemical widget, manufacturing the soles produces unfortunate byproducts, among them benzene, toluene, ethyl benzene, and xylene. In environmental health circles these are known as the "Big Four" toxics, being variously carcinogens, central nervous system disrupters, and respiratory irritants, among other hazards.

Those bouncy air pockets in some shoe soles contain an ozone-depleting gas. The decorative bits of plastic piping harbor PVC, which endangers the health of workers who make it and contaminates the ecosystems around the dumps where we eventually send our shoes. The solvents in glues that bind the outsole to the midsole can damage the lungs of the workers who apply them. Tanning leather shoe tops can expose workers to hexavalent chromium and other carcinogens.

 I remember my high school chemistry teacher's enthusiasm for the chemical reaction that rendered nitrogen fertilizer from ammonia (he moonlighted in a local fertilizer factory); we never heard a word about eutrophication, the dying of aquatic life due to fertilizer runoff that creates a frenzy of algae growth, depleting the water's oxygen. Likewise, coal-burning electric plants seemed a marvel when first deployed: cheap electricity from a virtually inexhaustible source. Who knew about respiratory disease from particulates, let alone global warming?

The full list of adverse impacts on the environment or the health of those who make or use any product can run into hundreds of such details. The reason: almost all of the manufacturing methods and industrial chemicals in common use today were invented in a day when little or no attention was paid to their negative impacts on the planet or its people.

We have inherited an industrial legacy from the 20th century that no longer meets the needs of the 21st. As we awaken from our collective naivete about such hidden costs, we are reaching a pivot point where we can question hidden assumptions. We can ask, for example, why not have running shoes that are not just devoid of toxins, but can also eventually be tossed onto a compost pile to biodegrade? We can rethink everything we make, developing alternative ingredients and processes with far less — or ideally, no — adverse health or environmental impacts.

The singular force that can drive this transformation of every manmade thing for the better is neither government fiat nor the standard tactics of environmentalists, but rather radical transparency in the marketplace. If we as buyers can know the actual ecological impacts of the stuff we buy at the point of purchase, and can compare those impacts to competing products, we can make better choices. The means for such radical transparency has already launched. Software innovations now allow any of us to access a vast database about the hidden harms in whatever we are about to buy, and to do this where it matters most, at the point of purchase. As we stand in the aisle of a store, we can know which brand has the fewest chemicals of concern, or the better carbon footprint. In the Beta version of such software, you click your cell phone's camera on a product's bar code, and get an instant readout of how this brand compares to competitors on any of hundreds of environmental, health, or social impacts. In a planned software upgrade, that same comparison would go on automatically with whatever you buy on your credit card, and suggestions for better purchases next time you shop would routinely come your way by email.
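In skeleton form, such point-of-purchase software might look like the sketch below. Every name, score and barcode here is invented for illustration; no real rating service or API is being depicted.

```python
# Hypothetical skeleton of point-of-purchase transparency software.
IMPACT_DB = {
    "0012345678905": {"name": "RunFast trainers",
                      "carbon_kg": 14.0, "chem_safety": 62},
    "0098765432109": {"name": "GreenStride trainers",
                      "carbon_kg": 9.5, "chem_safety": 88},
}

def compare(scanned: str, rival: str) -> str:
    """Shopper-facing comparison of two scanned products."""
    a, b = IMPACT_DB[scanned], IMPACT_DB[rival]
    return "\n".join([
        f"{a['name']} vs {b['name']}:",
        f"  carbon footprint: {a['carbon_kg']} vs {b['carbon_kg']} kg CO2e",
        f"  chemical safety:  {a['chem_safety']} vs {b['chem_safety']} (higher is better)",
    ])

print(compare("0012345678905", "0098765432109"))
```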

Such transparency software converts shopping into a vote, letting us target manufacturing processes and product ingredients we want to avoid, and rewarding smarter alternatives. As enough of us apply these decision rules, market share will shift, giving companies powerful, direct data on what shoppers want — and want to avoid — in their products.

Creating a market force that continually leverages ongoing upgrades throughout the supply chain could open the door to immense business opportunities over the next several decades. We need to reinvent industry, starting with the most basic platforms in industrial chemistry and manufacturing design. And that would change every thing.

