


2006

"WHAT IS YOUR DANGEROUS IDEA?"



CONTRIBUTORS


Alun Anderson

Philip W. Anderson

Scott Atran

Mahzarin Banaji

Simon Baron-Cohen

Samuel Barondes

Gregory Benford

Jesse Bering

Jeremy Bernstein

Jamshed Bharucha

Susan Blackmore

Paul Bloom

David Bodanis

Stewart Brand

Rodney Brooks

David Buss

Philip Campbell

Leo Chalupa

Andy Clark

Gregory Cochran

Jerry Coyne

M. Csikszentmihalyi

Richard Dawkins

Paul Davies

Stanislas Dehaene

Daniel C. Dennett

Keith Devlin

Jared Diamond

Denis Dutton

Freeman Dyson

George Dyson

Juan Enriquez

Paul Ewald

Todd Feinberg

Eric Fischl

Helen Fisher

Richard Foreman

Howard Gardner

Joel Garreau

David Gelernter

Neil Gershenfeld

Daniel Gilbert

Marcelo Gleiser

Daniel Goleman

Brian Goodwin

Alison Gopnik

April Gornik

John Gottman

Brian Greene

Diane F. Halpern

Haim Harari

Judith Rich Harris

Sam Harris

Marc D. Hauser

W. Daniel Hillis

Donald Hoffman

Gerald Holton

John Horgan

Nicholas Humphrey

Piet Hut

Marco Iacoboni

Eric R. Kandel

Kevin Kelly

Bart Kosko

Stephen Kosslyn

Kai Krause

Lawrence Krauss

Ray Kurzweil

Jaron Lanier

David Lykken

Gary Marcus

Lynn Margulis

Thomas Metzinger

Geoffrey Miller

Oliver Morton

David G. Myers

Michael Nesmith

Randolph Nesse

Richard E. Nisbett

Tor Nørretranders

James O'Donnell

John Allen Paulos

Irene Pepperberg

Clifford Pickover

Steven Pinker

David Pizarro

Jordan Pollack

Ernst Pöppel

Carolyn Porco

Robert Provine

VS Ramachandran

Martin Rees

Matt Ridley

Carlo Rovelli

Rudy Rucker

Douglas Rushkoff

Karl Sabbagh

Roger Schank

Scott Sampson

Charles Seife

Terrence Sejnowski

Martin Seligman

Robert Shapiro

Rupert Sheldrake

Michael Shermer

Clay Shirky

Barry Smith

Lee Smolin

Dan Sperber

Paul Steinhardt

Steven Strogatz

Leonard Susskind

Timothy Taylor

Frank Tipler

Arnold Trehub

Sherry Turkle

J. Craig Venter

Philip Zimbardo

SETH LLOYD
Quantum Mechanical Engineer, MIT

The genetic breakthrough that made people capable of ideas themselves

The most dangerous idea is the genetic breakthrough that made people capable of ideas themselves. The idea of ideas is nice enough in principle; and ideas certainly have had their impact for good. But one of these days one of those nice ideas is likely to have the unintended consequence of destroying everything we know. 

Meanwhile, we cannot stop creating and exploring new ideas: the genie of ingenuity is out of the bottle. To suppress the power of ideas will hasten catastrophe, not avert it. Rather, we must wield that power with the respect it deserves.

Who risks no danger reaps no reward.


CAROLYN PORCO
Planetary Scientist; Cassini Imaging Science Team Leader; Director CICLOPS, Boulder CO; Adjunct Professor, University of Colorado, University of Arizona

The Greatest Story Ever Told

The confrontation between science and formal religion will come to an end when the role played by science in the lives of all people is the same as that played by religion today.

And just what is that?

At the heart of every scientific inquiry is a deep spiritual quest — to grasp, to know, to feel connected through an understanding of the secrets of the natural world, to have a sense of one's part in the greater whole. It is this inchoate desire for connection to something greater and immortal, the need for elucidation of the meaning of the 'self', that motivates the religious to belief in a higher 'intelligence'. It is the allure of a bigger agency — outside the self but also involving, protecting, and celebrating the purpose of the self — that is the great attractor. Every culture has religion. It undoubtedly satisfies a manifest human need.

But the same spiritual fulfillment and connection can be found in the revelations of science. From energy to matter, from fundamental particles to DNA, from microbes to Homo sapiens, from the singularity of the Big Bang to the immensity of the universe .... ours is the greatest story ever told. We scientists have the drama, the plot, the icons, the spectacles, the 'miracles', the magnificence, and even the special effects. We inspire awe. We evoke wonder.

And we don't have one god, we have many of them. We find gods in the nucleus of every atom, in the structure of space/time, in the counter-intuitive mechanisms of electromagnetism. What richness! What consummate beauty!

We even exalt the 'self'. Our script requires a broadening of the usual definition, but we too offer hope for everlasting existence. The 'self' that is the particular, networked set of connections of the matter comprising our mortal bodies will one day die, of course. But the 'self' that is the sum of each separate individual condensate in us of energy-turned-matter is already ancient and will live forever. Each fundamental particle may one day return to energy, or from there revert back to matter. But in one form or another, it will not cease. In this sense, we and all around us are eternal, immortal, and profoundly connected. We don't have one soul; we have trillions upon trillions of them.

These are reasons enough for jubilation ... for riotous, unrestrained, exuberant merry-making.

So what are we missing?

Ceremony.

We lack ceremony. We lack ritual. We lack the initiation of baptism, the brotherhood of communal worship.

We have no loving ministers, guiding and teaching the flocks in the ways of the 'gods'. We have no fervent missionaries, no loyal apostles. And we lack the all-inclusive ecumenical embrace, the extended invitation to the unwashed masses. Alienation does not warm the heart; communion does.

But what if? What if we appropriated the craft, the artistry, the methods of formal religion to get the message across? Imagine 'Einstein's Witnesses' going door to door or TV evangelists passionately espousing the beauty of evolution.

Imagine a Church of Latter Day Scientists where believers could gather. Imagine congregations raising their voices in tribute to gravity, the force that binds us all to the Earth, and the Earth to the Sun, and the Sun to the Milky Way. Or others rejoicing in the nuclear force that makes possible the sunlight of our star and the starlight of distant suns. And can't you just hear the hymns sung to the antiquity of the universe, its abiding laws, and the heaven above that 'we' will all one day inhabit, together, commingled, spread out like a nebula against a diamond sky?

One day, the sites we hold most sacred just might be the astronomical observatories, the particle accelerators, the university research installations, and other laboratories where the high priests of science — the biologists, the physicists, the astronomers, the chemists — engage in the noble pursuit of uncovering the workings of nature herself. And today's museums, expositional halls, and planetaria may then become tomorrow's houses of worship, where these revealed truths, and the wonder of our interconnectedness with the cosmos, are glorified in song by the devout and the soulful.

"Hallelujah!", they will sing. "May the force be with you!"


MICHAEL NESMITH
Artist, writer; Former cast member of "The Monkees"; A Trustee and President of the Gihon Foundation and a Trustee and Vice-Chair of the American Film Institute

Existence is Non-Time, Non-Sequential, and Non-Objective

Not a dangerous idea per se, but, like a razor-sharp tool in unskilled hands, it can inflict unintended damage.

Non-Time drives forward the notion that the past does not create the present. This would of course render evolutionary theory a local-system, near-field process that was non-causative (i.e., an effect).

Non-Sequential reverberates through the Turing machine and computation, and points to simultaneity. It redefines language and cognition.

Non-Objective establishes a continuum not to be confused with solipsism. As Schrödinger puts it when discussing the "time-hallowed discrimination between subject and object" — "the world is given to me only once, not one existing and one perceived. Subject and object are only one. The barrier between them cannot be said to have broken down as a result of recent experience in the physical sciences, for this barrier does not exist". This continuum has large implications for the empirical data set, as it introduces factual infinity into the data plane.

These three notions, Non-Time, Non-sequence, and Non-Object have been peeking like diamonds through the dust of empiricism, philosophy, and the sciences for centuries. Quantum mechanics, including Deutsch's parallel universes and the massive parallelism of quantum computing, is our brightest star — an unimaginably tall peak on our fitness landscape.

They bring us to a threshold over which empiricism has yet to travel, through which philosophy must reconstruct the very idea of ideas, and beyond which stretches the now familiar "uncharted territories" of all great adventures.


LAWRENCE KRAUSS
Physicist/Cosmologist, Case Western Reserve University; Author, Hiding in the Mirror

The world may fundamentally be inexplicable

Science has progressed for 400 years by ultimately explaining observed phenomena in terms of fundamental theories that are rigid. Even minor deviations from predicted behavior are not allowed by the theory, so that if such deviations are observed, these provide evidence that the theory must be modified, usually being replaced by a yet more comprehensive theory that fixes a wider range of phenomena.   

The ultimate goal of physics, as it is often described, is to have a "theory of everything", in which all the fundamental laws that describe nature can neatly be written down on the front of a T-shirt (even if the T-shirt can only exist in 10 dimensions!). However, with the recognition that the dominant energy in the universe resides in empty space — something that is so peculiar that it appears very difficult to understand within the context of any theoretical ideas we now possess — more physicists have been exploring the idea that perhaps physics is an 'environmental science', that the laws of physics we observe are merely accidents of our circumstances, and that an infinite number of different universes could exist with different laws of physics.

This is true even if there does exist some fundamental candidate mathematical physical theory. For example, as is currently in vogue in an idea related to string theory, perhaps the fundamental theory allows an infinite number of different 'ground state' solutions, each of which describes a different possible universe with a consistent set of physical laws and physical dimensions.

It might be that the only way to understand why the laws of nature we observe in our universe are the way they are is to understand that if they were any different, then  life could not have arisen in our universe, and we would thus not be here to measure them today.

This is one version of the infamous "anthropic principle". But it could actually be worse — it is equally likely that many different combinations of laws would allow life to form, and that it is a pure accident that the constants of nature result in the combinations we experience in our universe. Or, it could be that the mathematical formalism is actually so complex that the ground states of the theory, i.e. the set of possible states that might describe our universe, actually might not be determinable.

In this case, the end of "fundamental" theoretical physics (i.e. the search for fundamental microphysical laws...there will still be lots of work for physicists who try to understand the host of complex phenomena occurring at a variety of larger scales) might occur not via a theory of everything, but rather with the recognition that all so-called fundamental theories that might describe nature would be purely "phenomenological", that is, they would be derivable from observational phenomena, but would not reflect any underlying grand mathematical structure of the universe  that would allow a basic understanding of why the universe is the way it is.


DANIEL C. DENNETT
Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Author, Darwin's Dangerous Idea

There aren't enough minds to house the population explosion of memes

Ideas can be dangerous. Darwin had one, for instance. We hold all sorts of inventors and other innovators responsible for assaying, in advance, the environmental impact of their creations, and since ideas can have huge environmental impacts, I see no reason to exempt us thinkers from the responsibility of quarantining any deadly ideas we may happen to come across. So if I found what I took to be such a dangerous idea, I would button my lip until I could find some way of preparing the ground for its safe expression. I expect that others who are replying to this year's Edge question have engaged in similar reflections and arrived at the same policy. If so, then some people may be pulling their punches with their replies. The really dangerous ideas they are keeping to themselves.

But here is an unsettling idea that is bound to be true in one version or another, and so far as I can see, it won't hurt to publicize it more. It might well help.

The human population is still growing, but at nowhere near the rate that the population of memes is growing. There is competition for the limited space in human brains for memes, and something has to give. Thanks to our incessant and often technically brilliant efforts, and our apparently insatiable appetites for novelty, we have created an explosively growing flood of information, in all media, on all topics, in every genre. Now either (1) we will drown in this flood of information, or (2) we won't drown in it. Both alternatives are deeply disturbing. What do I mean by drowning? I mean that we will become psychologically overwhelmed, unable to cope, victimized by the glut and unable to make life-enhancing decisions in the face of an unimaginable surfeit. (I recall the brilliant scene in the film of Evelyn Waugh's dark comedy The Loved One in which embalmer Mr. Joyboy's gluttonous mother is found sprawled on the kitchen floor, helplessly wallowing in the bounty that has spilled from a capsized refrigerator.) We will be lost in the maze, preyed upon by whatever clever forces find ways of pumping money–or simply further memetic replications–out of our situation. (In The War of the Worlds, H. G. Wells sees that it might well be our germs, not our high-tech military contraptions, that subdue our alien invaders. Similarly, might our own minds succumb not to the devious manipulations of evil brainwashers and propagandists, but to nothing more than a swarm of irresistible ditties, Noûs nibbled to death by slogans and one-liners?)

If we don't drown, how will we cope? If we somehow learn to swim in the rising tide of the infosphere, that will entail that we–that is to say, our grandchildren and their grandchildren–become very very different from our recent ancestors. What will "we" be like? (Some years ago, Doug Hofstadter wrote a wonderful piece, "In 2093, Just Who Will Be We?" in which he imagines robots being created to have "human" values, robots that gradually take over the social roles of our biological descendants, who become stupider and less concerned with the things we value. If we could secure the welfare of just one of these groups, our children or our brainchildren, which group would we care about the most, with which group would we identify?)

Whether "we" are mammals or robots in the not so distant future, what will we know and what will we have forgotten forever, as our previously shared intentional objects recede in the churning wake of the great ship that floats on this sea and charges into the future propelled by jets of newly packaged information?   What will happen to our cultural landmarks?  Presumably our descendants will all still recognize a few reference points (the pyramids of Egypt, arithmetic, the Bible, Paris, Shakespeare, Einstein, Bach . . . ) but as wave after wave of novelty passes over them, what will they lose sight of?  The Beatles are truly wonderful, but if their cultural immortality is to be purchased by the loss of such minor 20th century figures as Billie Holiday, Igor Stravinsky, and Georges Brassens [who he?], what will remain of our shared understanding?

The intergenerational mismatches that we all experience in macroscopic versions (great-grandpa's joke falls on deaf ears, because nobody else in the room knows that Nixon's wife was named "Pat") will presumably be multiplied to the point where much of the raw information that we have piled in our digital storehouses is simply incomprehensible to everyone–except that we will have created phalanxes of "smart" Rosetta-stones of one sort or another that can "translate" the alien material into something we (think maybe we) understand. I suspect we hugely underestimate the importance (to our sense of cognitive security) of our regular participation in the four-dimensional human fabric of mutual understanding, with its reassuring moments of shared–and seen to be shared, and seen to be seen to be shared–comprehension.

What will happen to common knowledge in the future?  I do think our ancestors had it easy: aside from all the juicy bits of unshared gossip and some proprietary trade secrets and the like, people all knew pretty much the same things, and knew that they knew the same things. There just wasn't that much to know.  Won't people be able to create and exploit illusions of common knowledge in the future, virtual worlds in which people only think they are in touch with their cyber-neighbors? 

I see small-scale projects that might protect us to some degree, if they are done wisely. Think of all the work published in academic journals before, say, 1990 that is in danger of becoming practically invisible to later researchers because it can't be found on-line with a good search engine. Just scanning it all and hence making it "available" is not the solution. There is too much of it. But we could start projects in which (virtual) communities of retired  researchers who still have their wits about them and who know particular literatures well could brainstorm amongst themselves, using their pooled experience to elevate the forgotten gems, rendering them accessible to the next generation of researchers. This sort of activity has in the past been seen to be a  stodgy sort of scholarship, fine for classicists and historians, but not fit work for cutting-edge scientists and the like. I think we should try to shift this imagery and help people recognize the importance of providing for each other this sort of pathfinding through the forests of information. It's a drop in the bucket, but perhaps if we all start thinking about conservation of valuable mind-space, we can save ourselves (our descendants) from informational collapse.


DANIEL GILBERT
Psychologist, Harvard University

The idea that ideas can be dangerous

Dangerous does not mean exciting or bold. It means likely to cause great harm. The most dangerous idea is the only dangerous idea: The idea that ideas can be dangerous.

We live in a world in which people are beheaded, imprisoned, demoted, and censured simply because they have opened their mouths, flapped their lips, and vibrated some air. Yes, those vibrations can make us feel sad or stupid or alienated. Tough shit. That's the price of admission to the marketplace of ideas. Hateful, blasphemous, prejudiced, vulgar, rude, or ignorant remarks are the music of a free society, and the relentless patter of idiots is how we know we're in one. When all the words in our public conversation are fair, good, and true, it's time to make a run for the fence.


ANDY CLARK
School of Philosophy, Psychology and Language Sciences, Edinburgh University

The quick-thinking zombies inside us

So much of what we do, feel, think and choose is determined by non-conscious, automatic uptake of cues and information.

Of course, advertisers will say they have known this all along. But only in recent years, with seminal studies by Tanya Chartrand, John Bargh and others has the true scale of our daily automatism really begun to emerge. Such studies show that it is possible (it is relatively easy) to activate racist stereotypes that impact our subsequent behavioral interactions, for example yielding the judgment that your partner in a subsequent game or task is more hostile than would be judged by an unprimed control. Such effects occur despite a subject's total and honest disavowal of those very stereotypes. In similar ways it is possible to unconsciously prime us to feel older (and then we walk more slowly).

In my favorite recent study, experimenters manipulate cues so that the subject forms an unconscious goal, whose (unnoticed) frustration makes them lose confidence and perform worse at a subsequent task! The dangerous truth, it seems to me, is that these are not isolated little laboratory events. Instead, they reveal the massed woven fabric of our day-to-day existence. The underlying mechanisms at work impart an automatic drive towards the automation of all manner of choices and actions, and don't discriminate between the 'trivial' and the portentous.

It now seems clear that many of my major life and work decisions are made very rapidly, often on the basis of ecologically sound but superficial cues, with slow deliberative reason busily engaged in justifying what the quick-thinking zombies inside me have already laid on the table. The good news is that without these mechanisms we'd be unable to engage in fluid daily life or reason at all, and that very often they are right. The dangerous truth, though, is that we are indeed designed to cut conscious, aware choice out of the picture wherever possible. This is not an issue about free will, but simply about the extent to which conscious deliberation cranks the engine of behavior. Crank it it does: but not in anything like the way, or extent, we may have thought. We'd better get to grips with this before someone else does.


SHERRY TURKLE
Psychologist, MIT; Author, Life on the Screen: Identity in the Age of the Internet

After several generations of living in the computer culture, simulation will become fully naturalized. Authenticity in the traditional sense loses its value, a vestige of another time.

Consider this moment from 2005: I take my fourteen-year-old daughter to the Darwin exhibit at the American Museum of Natural History. The exhibit documents Darwin's life and thought, and with a somewhat defensive tone (in light of current challenges to evolution by proponents of intelligent design), presents the theory of evolution as the central truth that underpins contemporary biology. The Darwin exhibit wants to convince and it wants to please. At the entrance to the exhibit is a turtle from the Galapagos Islands, a seminal object in the development of evolutionary theory. The turtle rests in its cage, utterly still. "They could have used a robot," comments my daughter. It was a shame to bring the turtle all this way and put it in a cage for a performance that draws so little on the turtle's "aliveness." I am startled by her comments, both solicitous of the imprisoned turtle because it is alive and unconcerned by its authenticity. The museum has been advertising these turtles as wonders, curiosities, marvels — among the plastic models of life at the museum, here is the life that Darwin saw. I begin to talk with others at the exhibit, parents and children. It is Thanksgiving weekend. The line is long, the crowd frozen in place. My question, "Do you care that the turtle is alive?" is a welcome diversion. A ten-year-old girl would prefer a robot turtle because aliveness comes with aesthetic inconvenience: "Its water looks dirty. Gross." More usually, the votes for the robots echo my daughter's sentiment that in this setting, aliveness doesn't seem worth the trouble. A twelve-year-old girl opines: "For what the turtles do, you didn't have to have the live ones." Her father looks at her, uncomprehending: "But the point is that they are real, that's the whole point."

The Darwin exhibit is about authenticity: on display are the actual magnifying glass that Darwin used, the actual notebooks in which he recorded his observations, indeed, the very notebook in which he wrote the famous sentences that first described his theory of evolution. But in the children's reactions to the inert but alive Galapagos turtle, the idea of the "original" is in crisis.

I have long believed that in the culture of simulation, the notion of authenticity is for us what sex was to the Victorians — "threat and obsession, taboo and fascination." I have lived with this idea for many years, yet at the museum, I find the children's position startling, strangely unsettling. For these children, in this context, aliveness seems to have no intrinsic value. Rather, it is useful only if needed for a specific purpose. "If you put in a robot instead of the live turtle, do you think people should be told that the turtle is not alive?" I ask. Not really, say several of the children. Data on "aliveness" can be shared on a "need to know" basis, for a purpose. But what are the purposes of living things? When do we need to know if something is alive?

Consider another vignette from 2005: an elderly woman in a nursing home outside of Boston is sad. Her son has broken off his relationship with her. Her nursing home is part of a study I am conducting on robotics for the elderly. I am recording her reactions as she sits with the robot Paro, a seal-like creature, advertised as the first "therapeutic robot" for its ostensibly positive effects on the ill, the elderly, and the emotionally troubled. Paro is able to make eye contact through sensing the direction of a human voice, is sensitive to touch, and has "states of mind" that are affected by how it is treated, for example, is it stroked gently or with aggressivity? In this session with Paro, the woman, depressed because of her son's abandonment, comes to believe that the robot is depressed as well. She turns to Paro, strokes him and says: "Yes, you're sad, aren't you. It's tough out there. Yes, it's hard." And then she pets the robot once again, attempting to provide it with comfort. And in so doing, she tries to comfort herself.

The woman's sense of being understood is based on the ability of computational objects like Paro to convince their users that they are in a relationship. I call these creatures (some virtual, some physical robots) "relational artifacts." Their ability to inspire relationship is not based on their intelligence or consciousness, but on their ability to push certain "Darwinian" buttons in people (making eye contact, for example) that make people respond as though they were in relationship. For me, relational artifacts are the new uncanny in our computer culture — as Freud once put it, the long familiar taking a form that is strangely unfamiliar. As such, they confront us with new questions.

What does this deployment of "nurturing technology" at the two most dependent moments of the life cycle say about us? What will it do to us? Do plans to provide relational robots to attend to children and the elderly make us less likely to look for other solutions for their care? People come to feel love for their robots, but if our experience with relational artifacts is based on a fundamentally deceitful interchange, can it be good for us? Or might it be good for us in the "feel good" sense, but bad for us in our lives as moral beings?

Relationships with robots bring us back to Darwin and his dangerous idea: the challenge to human uniqueness. When we see children and the elderly exchanging tendernesses with robotic pets the most important question is not whether children will love their robotic pets more than their real life pets or even their parents, but rather, what will loving come to mean?


STEVEN STROGATZ
Applied mathematician, Cornell University; Author, Sync

The End of Insight

I worry that insight is becoming impossible, at least at the frontiers of mathematics. Even when we're able to figure out what's true or false, we're less and less able to understand why.

An argument along these lines was recently given by Brian Davies in the "Notices of the American Mathematical Society". He mentions, for example, that the four-color map theorem in topology was proven in 1976 with the help of computers, which exhaustively checked a huge but finite number of possibilities. No human mathematician could ever verify all the intermediate steps in this brutal proof, and even if someone claimed to, should we trust them? To this day, no one has come up with a more elegant, insightful proof. So we're left in the unsettling position of knowing that the four-color theorem is true but still not knowing why.

Similarly important but unsatisfying proofs have appeared in group theory (in the classification of finite simple groups, roughly akin to the periodic table for chemical elements) and in geometry (in the problem of how to pack spheres so that they fill space most efficiently, a puzzle that goes back to Kepler in the early 1600s and that arises today in coding theory for telecommunications).

In my own field of complex systems theory, Stephen Wolfram has emphasized that there are simple computer programs, known as cellular automata, whose dynamics can be so inscrutable that there's no way to predict how they'll behave; the best you can do is simulate them on the computer, sit back, and watch how they unfold. Observation replaces insight. Mathematics becomes a spectator sport.
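
A minimal sketch of one such program (Wolfram's elementary "Rule 30" automaton, chosen here purely as an illustration and not taken from the essay) shows how little code it takes to produce behavior that, in practice, can only be followed by running it:

```python
# Illustrative sketch: Wolfram's elementary cellular automaton Rule 30.
# Each cell looks at itself and its two neighbors; the 8 possible
# neighborhoods map to new cell values given by the bits of the number 30.
RULE, WIDTH, STEPS = 30, 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    # new state = bit of RULE indexed by the 3-cell neighborhood (wrap-around edges)
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

The rule itself is trivial to state; the intricate, seemingly random triangle it prints is what resists any shortcut other than watching it unfold.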

If this is happening in mathematics, the supposed pinnacle of human reasoning, it seems likely to afflict us in science too, first in physics and later in biology and the social sciences (where we're not even sure what's true, let alone why).

When the End of Insight comes, the nature of explanation in science will change forever. We'll be stuck in an age of authoritarianism, except it'll no longer be coming from politics or religious dogma, but from science itself.


TERRENCE SEJNOWSKI
Computational Neuroscientist, Howard Hughes Medical Institute; Coauthor, The Computational Brain

When will the Internet become aware of itself? 

I never thought that I would become omniscient during my lifetime, but as Google continues to improve and online information continues to expand I have achieved omniscience for all practical purposes. The Internet has created a global marketplace for ideas and products, making it possible for individuals in the far corners of the world to automatically connect directly to each other. The Internet has achieved these capabilities by growing exponentially in total communications bandwidth. How does the communications power of the Internet compare with that of the cerebral cortex, the most interconnected part of our brains?

Cortical connections are expensive because they take up volume and cost energy to send information in the form of spikes along axons. About 44% of the cortical volume in humans is taken up with long-range connections, called the white matter. Interestingly, the thickness of gray matter, just a few millimeters, is nearly constant in mammals that range in brain volume over five orders of magnitude, and the volume of the white matter scales approximately as the 4/3 power of the volume of the gray matter. The larger the brain, the larger the fraction of resources devoted to communications compared to computation.
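
Spelling out the scaling law just quoted (a one-line restatement of the essay's figures, not an additional claim):

```latex
W \propto G^{4/3}
\quad\Longrightarrow\quad
\frac{W}{G} \propto G^{1/3}
```

As gray-matter volume G increases, the ratio of long-range wiring W to local computation grows roughly as the cube root of G, which is the sense in which larger brains devote a larger share of their volume to communication.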

However, the global connectivity in the cerebral cortex is extremely sparse: The probability of any two cortical neurons having a direct connection is around one in a hundred for neurons in a vertical column 1 mm in diameter, but only one in a million for more distant neurons.  Thus, only a small fraction of the computation that occurs locally can be reported to other areas, through a small fraction of the cells that connect distant cortical areas.

Despite the sparseness of cortical connectivity, the potential bandwidth of all of the neurons in the human cortex is approximately a terabit per second, comparable to the total world backbone capacity of the Internet. However, this capacity is never achieved by the brain in practice because only a fraction of cortical neurons have a high rate of firing at any given time. Recent work by Simon Laughlin suggests that another physical constraint — energy — limits the brain's ability to harness its potential bandwidth.  

The cerebral cortex also has a massive amount of memory. There are approximately one billion synapses between neurons under every square millimeter of cortex, or about one hundred million million synapses overall. Assuming around a byte of storage capacity at each synapse (including dynamic as well as static properties), this comes to a total of 10^15 bits of storage. This is comparable to the amount of data on the entire Internet; Google can store this in terabyte disk arrays and has hundreds of thousands of computers simultaneously sifting through it.
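
A back-of-envelope check of those figures (an illustrative sketch; the cortical surface area used here is an assumption chosen to match the essay's totals, not a number given in the essay):

```python
# Rough arithmetic behind the storage estimate quoted above.
synapses_per_mm2 = 1e9    # ~1 billion synapses under each square millimeter of cortex
cortex_area_mm2 = 1e5     # assumed ~0.1 m^2 of cortical sheet (illustrative value)
bits_per_synapse = 8      # ~1 byte of storage per synapse, as assumed in the essay

total_synapses = synapses_per_mm2 * cortex_area_mm2   # ~1e14, "one hundred million million"
total_bits = total_synapses * bits_per_synapse        # ~1e15 bits
print(f"synapses ~ {total_synapses:.0e}, storage ~ {total_bits:.0e} bits")
```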

Thus, the Internet and our ability to search it are within reach of the limits of the raw storage and communications capacity of the human brain, and should exceed it by 2015.

Leo van Hemmen and I recently asked 23 neuroscientists to think about what we don't yet know about the brain, and to propose a question so fundamental and so difficult that it could take a century to solve, following in the tradition of Hilbert's 23 problems in mathematics. Christof Koch and Francis Crick speculated that the key to understanding consciousness was global communication:  How do neurons in the diverse parts of the brain manage to coordinate despite the limited connectivity?  Sometimes, the communication gets crossed, and V. S. Ramachandran and Edward Hubbard asked whether synesthetes, rare individuals who experience crossover in sensory perception such as hearing colors, seeing sounds, and tasting tactile sensations, might give us clues to how the brain evolved.

There is growing evidence that the flow of information between parts of the cortex is regulated by the degree of synchrony of the spikes within populations of cells that represent perceptual states. Robert Desimone and his colleagues have examined the effects of attention on cortical neurons in awake, behaving monkeys and found that the coherence between the spikes of single neurons in the visual cortex and local field potentials in the gamma band, 30-80 Hz, increased when the covert attention of a monkey was directed toward a stimulus in the receptive field of the neuron. The coherence also selectively increased when a monkey searched for a target with a cued color or shape amidst a large number of distracters. The increase in coherence means that neurons representing the stimuli with the cued feature would have greater impact on target neurons, making them more salient.

The link between attention and spike-field coherence raises a number of interesting questions. How does top-down input from the prefrontal cortex regulate the coherence of neurons in other parts of the cortex through feedback connections? How is the rapidity of the shifts in coherence achieved?  Experiments on neurons in cortical slices suggest that inhibitory interneurons are connected to each other in networks and are responsible for gamma oscillations. Researchers in my laboratory have used computational models to show that excitatory inputs can rapidly synchronize a subset of the inhibitory neurons that are in competition with other inhibitory networks.  Inhibitory neurons, long thought to merely block activity, are highly effective in synchronizing neurons in a local column already firing in response to a stimulus.

The oscillatory activity that is thought to synchronize neurons in different parts of the cortex occurs in brief bursts, typically lasting for only a few hundred milliseconds. Thus, it is possible that there is a packet structure for long-distance communication in the cortex, similar to the packets that are used to communicate on the Internet, though with quite different protocols. The first electrical signals recorded from the brain in 1875 by Richard Caton were oscillatory signals that changed in amplitude and frequency with the state of alertness. The function of these oscillations remains a mystery, but it would be remarkable if it were to be discovered that these signals held the secrets to the brain's global communications network.

Since its inception in 1969, the Internet has been scaled up to a size not even imagined by its inventors, in contrast to most engineered systems, which fall apart when they are pushed beyond their design limits. In part, the Internet achieves this scalability because it has the ability to regulate itself, deciding on the best routes to send packets depending on traffic conditions. Like the brain, the Internet has circadian rhythms that follow the sun as the planet rotates under it. The growth of the Internet over the last several decades more closely resembles biological evolution than engineering.

How would we know if the Internet were to become aware of itself?  The problem is that we don't even know if some of our fellow creatures on this planet are self aware. For all we know the Internet is already aware of itself.




John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

Copyright © 2006 by
Edge Foundation, Inc
All Rights Reserved.
