The 5th Annual Edge Question reflects the spirit of the Edge motto: "To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves."
The 2002 Edge Question is:
"WHAT IS YOUR QUESTION? ... WHY?"
I have asked Edge contributors for "hard-edge" questions, derived from empirical results or experience specific to their expertise, that render visible the deeper meanings of our lives, redefine who and what we are. The goal is a series of interrogatives in which "thinking smart prevails over the anaesthesiology of wisdom."
Happy New Year!
Responses (in order received): Kevin Kelly, Paul Davies, Stuart A. Kauffman, Alison Gopnik, John Horgan, Daniel C. Dennett, Derrick de Kerckhove, Clifford A. Pickover, John McCarthy, Douglas Rushkoff, William Calvin, Timothy Taylor, Marc D. Hauser, Roger Schank, James J. O'Donnell, Robert Aunger, Lawrence Krauss, Jaron Lanier, Freeman Dyson, Lance Knobel, Robert Sapolsky, Mark Stahlman, Andy Clark, Sylvia Paull, Todd Feinberg, MD, Nicholas Humphrey, Terrence Sejnowski, Howard Lee Morgan, Judith Rich Harris, Martin Rees, Paul Bloom, Margaret Wertheim, George Dyson, Todd Siler, Chris Anderson, Gerd Stern, Alan Alda, Henry Warwick, Delta Willis, John Skoyles, Paul Davies, Piet Hut, Julian Barbour, Antony Valentini, Stephen Grossberg, Rodney Brooks, Karl Sabbagh, David G. Myers, John D. Barrow, Milford H. Wolpoff, Richard Dawkins, David Deutsch, Joel Garreau, Gregory Benford, Eduardo Punset, Gary F. Marcus, Steve Grand, Seth Lloyd, John Markoff, Michael Shermer, Jordan B. Pollack, Steven R. Quartz, David Gelernter, Samuel Barondes, Steven Pinker, Frank Schirrmacher, Leon Lederman, Howard Gardner, Esther Dyson, Keith Devlin, Richard Nisbett, Stephen Schneider, Robert Provine, Sir John Maddox, Carlo Rovelli, Tor Nørretranders, David Buss, John Allen Paulos, Dan Sperber, W. Daniel Hillis, Brian Eno, Anton Zeilinger, Eberhard Zangger, Mark Hurst, Stuart Pimm, James Gilligan, Brian Greene, Rafael Núñez, J. Doyne Farmer, Ray Kurzweil, Randolph Nesse, Adrian Scott, Tracy Quan, Xeni Jardin, Stanislas Dehaene, Paul Ewald, George Lakoff, David Berreby, Jared Diamond
In a time when culture was not yet digitized, the Count of Thüringen invited his nobles to the "Singers' War at the Wartburg," where he asked questions (if we are to believe Richard Wagner) that would bring glory, the most famous of which queried, "Could you explain to me the nature of love?" The publisher and literary agent John Brockman, who now organizes singers' wars on the Internet, enjoys latching on to this tradition at the beginning of every year. (FAZ, January 9, 2001). His Tannhäuser may be named Steven Pinker, and his Wolfram von Eschenbach may go by Richard Dawkins, but we would do well to trust that they and their compatriots could also turn out speculation on the count's favorite theme. Brockman's thinkers of the "Third Culture," whether they, like Dawkins, study evolutionary biology at Oxford or, like Alan Alda, portray scientists on Broadway, know no taboos. Everything is permitted, and nothing is excluded from this intellectual game. But in the end, as it takes place in its own Wartburg, reached electronically at www.edge.org, it concerns us and our unexplained and evidently inexplicable fate. In this new year Brockman himself doesn't ask, but rather once again facilitates the asking of questions. The contributions can be found from today onwards on the Internet. In conjunction with the start of the forum we are printing a selection of questions and commentary, at times in somewhat abridged form, in German translation.
F.A.Z. Frankfurter Allgemeine Zeitung, 14.01.2002, Nr. 11 / Seite 38
In order received
"What is your heresy?"
Kevin Kelly is Editor-At-Large for Wired Magazine and author of New Rules for the New Economy.
"God or multiverse, that is the question?"
The multiverse has replaced God as an explanation for the appearance of design in the structure of the physical world. Like God, the agency concerned lies beyond direct observation, inferred by inductive reasoning from the properties of the one universe we do see.
The meta-question is, does the existence of these other universes amount to more than an intellectual exercise? Can we ever discover that the hypothesized alternative universes are really there? If not, is the multiverse not simply theology dressed up in techno jargon? And finally, could there be a Third Way, in which the ingenious features of the universe are explained neither by an Infinite Designer Mind, nor by an Infinite Invisible Multiverse, but by an entirely new principle of explanation?
Paul Davies, a physicist, writer and broadcaster, now based in South Australia, is author of How to Build a Time Machine.
"What must a physical system be to be able to act on its own behalf?"
Stuart A. Kauffman, an emeritus professor of biochemistry at UPenn, is a theoretical biologist and author of Investigations.
"Why do we ask questions?"
Developmental research suggests that this drive for explanation is, in fact, in place very early in human life. We've all experienced the endless "whys?" of three-year-olds and the downright dangerous two-year-old determination to seek out strange new worlds and boldly go where no toddler has gone before. More careful analyses and experiments show that children's questions and explorations are strategically designed, in quite clever ways, to get the right kind of answers. In the case of human beings, evolution seems to have discovered that it's cost-effective to support basic research, instead of just funding directed applications. Human children are equipped with extremely powerful learning mechanisms, and a strong intrinsic drive to seek explanations. Moreover, they come with a support staff, parents and other caregivers who provide both lunch and references to the results of previous generations of human researchers.
But this preliminary answer prompts yet more questions. Why is it that in adult life, the same quest for explanatory truth so often seems to be satisfied by the falsehoods of superstition and religion? (Maybe we should think of these institutions as the cognitive equivalent of fast food. Fast food gives us the satisfying tastes of fat and sugar that were once evolutionary markers of good food sources, without the nourishment. Religion gives us the illusion of regularity and order, evolutionary markers of truth, without the substance.)
Why does this intrinsic truth-seeking drive seem to vanish so dramatically when children get to school? And, most important, how is it possible for children to get the right answers to so many questions so quickly? What are the mechanisms that allow human children to be the best learners in the known universe? Answering this question would not only tell us something crucial about human nature, it might give us new technologies that would allow even dumb adults to get better answers to our own questions.
Alison Gopnik is a professor of psychology at the University of California at Berkeley and coauthor of The Scientist In The Crib.
"Do we want the God machine?"
Persinger's machine is actually quite crude. It induces peculiar perceptual distortions but no classic mystical experiences. But what if, through further advances in neuroscience and other fields, scientists invent a God machine that actually works, that delivers satori, nirvana, to anyone on command, without any negative side effects? It doesn't have to be an electromagnetic brain-stimulating device. It could be a drug, a type of brain surgery, a genetic modification, or some combination thereof.
One psychedelic researcher recently suggested to me that enlightenment could be spread around the world by an infectious virus that boosts the brain's production of dimethyltryptamine, an endogenous psychedelic that the Nobel laureate Julius Axelrod of the National Institutes of Health detected in trace amounts in human brain tissue in 1972. But whatever form the God machine takes, it would be powerful enough to transform the world into what Robert Thurman, an authority on Tibetan Buddhism (and father of Uma), calls the "Buddhaverse," a mystical utopia in which everyone is enlightened.
The obvious follow-up question: Would the invention of a genuine God machine spell our salvation or doom?
John Horgan is a freelance writer and author of The Undiscovered Mind.
"What kind of system of 'coding' of semantic information does the brain use?"
What kind of system of "coding" of semantic information does the brain use? We have many tantalizing clues but no established model that comes close to exhibiting the molar behavior that is apparently being seen in the brain. In particular, we see plenty of evidence of a degree of semantic localization: neural assemblies over here are involved in cognition about faces, and neural assemblies over there are involved in cognition about tools or artifacts, etc. And yet we also have evidence (unless we are misinterpreting it) that shows the importance of "spreading activation," in which neighboring regions are somehow enlisted to assist with currently active cognitive projects. But how could a region that specializes in, say, faces contribute at all to a task involving, say, food, or transportation, or...? Do neurons have two (or more) modes of operation: a specialized, "home territory" mode, in which their topic plays a key role, and a generalized, "helping hand" mode, in which they work on other regions' topics?
Alternatively, is the semantic specialization we have observed an illusion? Are these regions only circumstantially implicated in these characteristic topics because of some as-yet-unanalyzed, generalized but idiosyncratic competence that happens to be invoked usually when those topics are at issue? (The mathematician's phone rings whenever the topic is budgets, but he knows nothing about money; he's just good at arithmetic.) Or, to consider another alternative, is "spreading activation" mainly just noisy leakage, playing no contributing role in the transformation of content? Or is it just "political" support, contributing no content but helping to keep competing projects suppressed for a while? And finally, the properly philosophical question: what's wrong with these questions, and what would better questions be?
Daniel C. Dennett is Distinguished Arts and Sciences Professor at Tufts University and author of Darwin's Dangerous Idea.
"'To be or not to be' remains the question."
But I can never retain that amazing feeling for long. What is required is a kind of radical pull-back of oneself from the most banal evidence of life and reality. Jean-Paul Sartre, after Shakespeare, was probably the thinker who framed the question best in his novels and philosophical treatises. The issue, however, is that this question is profoundly existential, not merely philosophical. It can be asked, and should be, by any living, thinking, sentient being, but cannot be answered.
Derrick de Kerckhove is Director of the McLuhan Program at the University of Toronto and author of Connected Intelligence.
"Would you choose universe Omega or Upsilon?"
Clifford A. Pickover is a researcher at IBM's T. J. Watson Research Center and author of The Paradox of God and the Science of Omniscience.
"How are behaviors encoded in DNA?"
John McCarthy is Professor of Computer Science at Stanford University.
"Why do we tell stories?"
At the very least, narratives are less dangerous when we are free to participate in their writing. I'll venture that it is qualitatively better for human beings to take an active role in the unfolding of our collective story than it is to adhere blindly to the testament of our ancestors or authorities.
But what of moving out of the narrative altogether? Is it even possible? Is our predisposition for narrative physiological, psychological, or cultural?
Is it an outmoded form of cognition that yields only bloody clashes when competing myths are eventually mistaken for irreconcilable realities? Or are stories the only way we have of interpreting our world, meaning that the forging of a collective set of mutually tolerant narratives is the only route to a global civilization?
Douglas Rushkoff is a Professor of Media Culture at New York University's Interactive Telecommunications Program and author of Coercion: Why We Listen to What "They" Say.
What makes coherence so important to us?"
William Calvin is a theoretical neurobiologist at the University of Washington and author of How Brains Think.
"Is morality relative or absolute?"
The 'ethics of care', first developed within feminist philosophy, moves beyond these positions. Instead of connecting morals either to religious rules and principles or reductive natural laws, it values shared human capacities, such as intimacy, sympathy, trust, fidelity, and compassion. Such an ethics might elide the distinction between relative and absolute by promoting species-wide common sense. Before we judge the prospect of my question vanishing as either optimistic or naïve, we must scrutinize the alternatives carefully.
Timothy Taylor is an archaeologist at the University of Bradford, UK, and author of The Prehistory of Sex: Four Million Years of Human Sexual Culture.
"How will the sciences of the mind constrain our theories and policies of education?"
One might imagine that if educators attempted to push this system, first teaching children that 40 is a better answer to 25 + 12 than is 60, it might well facilitate the acquisition of the more precise system later in development. Similar issues arise in attempting to teach children about physics and biology. At some level, then, there must be a way for those in the trenches to work together with those in the ivory tower to advance the process of learning, building on what we have discovered from the sciences of the mind.
Marc D. Hauser is an evolutionary psychologist, a professor at Harvard University and author of Wild Minds: What Animals Think.
While education is on every politician's agenda as an item of serious importance, it is astonishing that the notion of what it means to be educated never seems to come up. Our society, which is undergoing massive transformations almost on a daily basis, never seems to transform its notion of what it means to be educated. We all seem to agree that an educated mind certainly entails knowing literature and poetry, appreciating history and social issues, being able to deal with matters of economics, being versatile in more than one language, understanding scientific principles and the basics of mathematics.
What I was doing in my last sentence was detailing the high school curriculum set down in 1892 by a committee chaired by the President of Harvard that was mandated for anyone who might want to enter a university. The curriculum they decided upon has not changed at all since then. Our implicit notions of an educated mind are the same as they were in the nineteenth century. No need to teach anything new, no need to reconsider how a world where a university education was offered solely to the elite might be different from a world in which a university degree is commonplace.
For a few years, in the early 90's, I was on the Board of Editors of the Encyclopedia Britannica. Most everyone else on the board was an octogenarian; the foremost of these, since he seemed to have everyone's great respect, was Clifton Fadiman, a literary icon of the 40's. When I tried to explain to this board the technological changes that were about to come, changes that would threaten the very existence of the Encyclopedia, there was a general belief that technology would not really matter much. There would always be a need for the encyclopedia, and the job of the board would always be to determine what knowledge was the most important to have. Only Clifton Fadiman seemed to realize that my predictions about the internet might have some effect on the institution they guarded. He concluded sadly: "I guess we will just have to accept the fact that minds less well educated than our own will soon be in charge."
Note that he didn't say "differently educated," but "less well educated." For some years the literati have held sway over the commonly accepted definition of education. No matter how important science and technology seem to industry or government or indeed to the daily life of the people, as a society we believe that those educated in literature and history and other humanities are in some way better informed, more knowing, and somehow more worthy of the descriptor "well educated."
Now if this were an issue confined to those who run the elite universities and prep schools, or those whose bible is the New York Review of Books, it really wouldn't matter all that much to anybody. But this nineteenth-century conception of the educated mind weighs heavily on our notions of how we educate our young. We are not educating our young to work or to live in the nineteenth century, or at least we ought not be doing so. Yet when universities graduate thousands of English and history majors, it can only be because we imagine that such fields form the basis of the educated mind. When we choose to teach our high schoolers trigonometry instead of, say, basic medicine or business skills, it can only be because we think that trigonometry is somehow more important to an educated mind, or that education is really not about preparation for the real world. When we focus on intellectual and scholarly issues in high school as opposed to more human issues like communications, or basic psychology, or child raising, we are continuing to rely upon outdated notions of the educated mind that come from elitist notions of who is to be educated.
We argue that an educated mind can reason, yet curiously there are no courses in our schools that teach reasoning. When we say that an educated mind can see more than one side of an argument, we go against the school system, which holds that there are right answers to be learned and that tests can reveal who knows them and who doesn't.
Now obviously telecommunications is more important than basic chemistry and HTML is more significant than French in today's world. These are choices that have to be made, but they never will be made until our fundamental conception of erudition changes or until we realize that the schools of today must try to educate the students who actually attend them as opposed to the students who attended them in 1892.
The 21st century conception of an educated mind is based upon old notions of erudition and scholarship not germane to this century. The curriculum of the school system bears no relation to the finished products we seek. We need to rethink what it means to be educated and begin to focus on a new conception of the very idea of education.
Roger Schank is Distinguished Career Professor, School of Computer Science, Carnegie-Mellon University and author of Virtual Learning: A Revolutionary Approach to Building a Highly Skilled Workforce.
"Do the benefits accruing to humankind (leaving aside questions of afterlife) from the belief and practice of organized religions outweigh the costs?"
James J. O'Donnell is Professor of Classical Studies and Vice Provost at UPenn and author of Avatars of the Word: From Papyrus to Cyberspace.
"Is technology going to 'wake up' or 'come alive' anytime in the future?"
This is a difficult question to answer, mostly because we don't currently have a very good idea about how technology evolves, so it's hard to predict future developments. But I believe that we can get some way toward an answer by adopting an approach currently being developed by some of our best evolutionary thinkers, such as John Maynard Smith, Eors Szathmary, and others. This "major transition" theory is concerned with determining the conditions under which new kinds of agents emerge in some evolutionary lineage. Examples of such transitions occurred when prokaryotes became eukaryotes, or single-celled organisms became multicellular. In each case, previously independent biological agents evolved new methods of cooperation, with the result that a new level of organization and agency appeared in the world. This theory hasn't yet been applied to the evolution of technology, but could help to pinpoint important issues. In effect, what I want to investigate is whether the futures that disturb Bill Joy can be appropriately analyzed as major transitions in the evolution of technology. Given current trends in science and technology, can we say that a global brain is around the corner, or that nano-robots are going to conquer the Earth? That, at least, is my current project.
Robert Aunger is an evolutionary theorist and editor of Darwinizing Culture: The Status of Memetics as a Science.
Here I paraphrase Einstein's famous question: "Did God have any choice in the creation of the Universe?" I get rid of the God part, which Einstein only added to make it seem more whimsical, I am sure, because that just confuses the issue. The important question, perhaps the most important question facing physics today, is whether there is only one consistent set of physical laws that allow a working universe, or whether the constants of nature are arbitrary and could take any set of values. Namely, if we continue to probe into the structure of matter and the nature of elementary forces, will we find that mathematical consistency is possible only for one unique theory of the Universe, or not? In the former case, of course, there is hope for an exactly predictive "theory of everything". In the latter case, we might expect that it is natural that our Universe is merely one of an infinite set of Universes within some grand multiverse, in each of which the laws of physics differ, and in which anthropic arguments may govern why we live in the Universe we do.
The goal of physics throughout the ages has been to explain exactly why the universe is the way it is, but as we push closer and closer to the ultimate frontier, we may find out that in fact the ultimate laws of nature may generically produce a universe that is quite different from the one we live in. This would force a dramatic shift in our concept of natural law.
Some may suggest that this question is mere philosophical nonsense, akin to asking how many angels may sit on the head of a pin. However, I think that if we are lucky it may be empirically possible to address it. If, for example, we do come up with some fundamental theory that predicts the values of many fundamental quantities correctly, but that predicts that other mysterious quantities, like the energy of empty space, are generically different from the values we measure, or perhaps are determined probabilistically, this will add strong ammunition to the notion that our universe is not unique, but arose from an ensemble of causally disconnected parts, each with randomly varying values of the vacuum energy.
In any case, answerable or not, I think this is the ultimate question in science.
Lawrence Krauss is Professor of Physics at Case Western Reserve University and the author of Atom.
"We've got fundamental scientific theories (such as quantum theory and relativity) that test out superbly, even if we don't quite know how they all fit into a whole, but we're hung up trying to understand complicated phenomena, like living things. How much complexity can we handle?"
"Why am I me?"
Freeman Dyson is professor of physics at the Institute for Advanced Study and author of The Sun, the Genome, and the Internet.
One of the great achievements of recent history has been a dramatic reduction in absolute poverty in the world. In 1820 about 85% of the world's population lived on the equivalent of a dollar a day (converted to today's purchasing power). By 1980, that percentage had dropped to 30%, but it is now down to 20%.
But that still means 1 billion people live in absolute poverty. A further 2 billion are little better off, living on $2 a day. A quarter of the world's people never get a cup of clean water.
Part of what globalisation means is that we have a reasonable chance of assuring that a majority of the world's people will benefit from continuing economic growth, improvements in health and education, and the untapped potential of the extraordinary technologies about which most of the Edge contributors write so eloquently.
We currently lack the political will to make sure that a vast number of people are not fenced off from this optimistic future. So my question poses a simple choice. Are we content to have two increasingly estranged worlds? Or do we want to find the path to a unified, healthy world?
Lance Knobel is Adviser, Prime Minister's Forward Strategy Unit, London, and the former head of the program of the World Economic Forum's Annual Meeting in Davos.
I've spent most of my career as a neurobiologist working on an area of the brain called the hippocampus. It's a fairly useful region: it plays a critical role in learning and memory. It's the area that's damaged in Alzheimer's, in alcoholic dementia, during prolonged seizures or cardiac arrest. You want to have your hippocampus functioning properly. So I've spent all these years trying to figure out why hippocampal neurons die so easily and what you can do about it. That's fine, and might even prove useful some day. But of late, it's been striking me that I'm going to be moving in the direction of studying a part of the brain called the prefrontal cortex (PFC).
It's a fascinating part of the brain, the part that most defines us as humans. There are endless technical ways to describe what the PFC does, but as an informal definition that works pretty well, it's the closest thing we have to a superego. The PFC is what allows us to become potty trained early on. And it is responsible for squeezing our psychic sphincters closed as well. It keeps us from belching loudly at the quiet moment in the wedding ceremony, prevents us from telling our host just what we really think of the inedible meal they've served. It keeps us from having our murderous thoughts turn into murderous acts. And it plays a similar role in the cognitive realm: the PFC stops us from solving a problem with the answer that, while the easier, more reflexive one, is wrong. The PFC is what makes us do the right thing, even if it's harder.
Not surprisingly, it's one of the last parts of the brain to fully develop (in technical jargon, to fully myelinate). But what is surprising is just how long it is before the PFC comes fully on line: astonishingly, around age 30. And this is where my question comes in. It is best framed in the context of young kids, and this is probably what has prompted me to begin to think about the PFC, as I have two young children. Kids are wildly "frontally disinhibited," the term for having a PFC that hasn't quite matured yet into keeping its foot firmly on the brake. Play hide and seek with a three-year-old, loudly, plaintively call "Where are you?", and their lack of frontal function does them in: they can't stop themselves from calling out "Here I am, under the table," giving away their hiding spot. I suspect that there is a direct, near-linear correlation between the number of fully myelinated frontal neurons in a small child's brain and how many dominoes you can line up in front of him before he must, MUST, knock them over.
So my question comes to the forefront in a scenario that came up frequently for me a few years ago: my then-three-year-old, who, while a wonderful child, was distinctly three, would do something reasonably appalling to his younger sister: take some stuffed animal away, grab some contested food item, whatever. A meltdown then ensues. My wife or I intervene, strongly reprimanding our son for mistreating his sister. And then the other parent would say, "Well, is it really fair to come down on him like this? After all, he has no frontal function yet; he can't stop himself" (my wife is a neuropsychologist, so, pathetically, we actually speak this way to each other). And the other would retort, "Well, how else is he going to develop that frontal function?"
That's the basic question: how does the world of empathy, theory of mind, gratification postponement, Kohlberg stages of moral development, etc., combine with the world of neurotrophic growth factors stimulating neurons to grow fancier connections? How do they produce a PFC that makes you do the harder thing because it's right? How does this become a life-long pattern of PFC function?
Robert Sapolsky is a professor of biological sciences at Stanford University and author of A Primate's Memoir.
It feels to me like something very important is going on. Clearly our children aren't quite like us. They don't learn about the world as we did. They don't storehouse knowledge about the world as we have. They don't "sense" the world as we do. Could humanity possibly already be in the middle of a next stage of cognitive transition?
Merlin Donald has done a fine job of summarizing hundreds of inquiries into the evolution of culture and cognition in his Origins of the Modern Mind. Here, as in his other work, he posits a series of "layered" morphological, neurological and external technological stages in this evolutionary path. What he refers to as the "Third Transition" (from "Mythic" to "Theoretic" culture), appears to have begun 2500 (or so) years ago and has now largely completed its march to "mental" dominance worldwide.
While this last "transition" did not require biological adaptation (or speciation), it nonetheless changed us neurologically and psycho-culturally. The shift from the "primary orality" of "Mythic culture" to literacy and reliance on what Donald calls an "External Symbolic Storage" network has resulted in a new sort of mind. The "modern" mind.
Could we be "evolving" towards an even newer sort of mind as a result of our increasing dependence on newer sorts of symbolic networks and newer environments of technologies?
Literacy (while still taught and used) doesn't have anywhere near the clout it once had. Indeed, as fanatical "literalism" (aka "fundamentalism") thrashes its way to an early grave (along with the decline of the past 50 years' reciprocal fascination with "deconstructing" everything as "texts"), how much will humanity care about and rely upon the encyclopedic storage of knowledge in alphabetic warehouses?
Perhaps we are already "learning," "knowing" and "sensing" the world in ways that presage something very different from the "modern" mind. Should we ask the children?
Mark Stahlman, a venture capitalist who has been focused on next-generation computer/networking platforms, is co-founder of the Newmedia Laboratory, NYNMA.
We thought we had this one nailed. Believing (rightly) that the physical world is all there is, the sciences of the mind re-invented thought and reason (and feeling) as information-processing events in the human brain. But this vision turns out to be either incomplete or fatally flawed. The neat and tidy division between a level of information processing (software) and a level of physicality (implementation) is useful when we deal with humanly engineered systems. We build such systems, as far as possible, to keep the levels apart. But nature was not guided by any such neat and tidy design principles. The ways that evolved creatures solve problems of anticipation, response, reasoning and perceiving seem to involve endless leakage and interweaving between motion, action, visceral (gut) response, and somewhat more detached contemplation. When we solve a jigsaw puzzle, we look, think, and categorise: but we also view the scene and pieces from new angles, moving head and body. And we pick pieces up and try them out. Real on-the-hoof human reason is like that through and through. Even the use of pen and paper to construct arguments displays the same complex interweaving of embodied action, perceptual re-encountering, and neural activity. Mind and body (and world) emerge as messily and continuously coupled partners in the construction of rational action.
But this leads to a very real problem, an impasse that is currently the single greatest roadblock in the attempts to construct a mature science of the mind. We cannot, despite the deep and crucial roles of body and world, understand the mind in quite the same terms as, say, an internal combustion engine. Where minds are concerned, it is the flow of contents (and feelings) that seems to matter. Yet if we prescind from the body and world, pitching our stories and models at the level of the information flows, we again lose sight of the distinctively human mind. We need the information-and-content based story to see the mind as, precisely, a mind. Yet we cannot do justice to minds like ours without including body, world (cognitive tools and other people) and motion in roles which are both genuinely cognitive yet thoroughly physical.
What we lack is a framework, picture, or model in terms of which to understand this larger system as the cognitive engine. All current stories are forced to one side (information flows) or the other (physical dynamics). Cognitive Science thus stands in a position similar to that of Physics in the early decades of the 20th century. What we lack is a kind of 'quantum theory' of the mind: a new framework that displays mind as mind, yet as body in action too.
Andy Clark is Professor of Philosophy and Cognitive Science at the University of Sussex, UK and the author of Being There: Putting Brain, Body and World Together Again.
Scientific advances now make it possible for a woman past normal child-bearing years to bear a child. Some of my high-tech friends, ranging in age from 43 to almost 50, are either bearing children or planning to, using in-vitro techniques. These women have postponed childbearing because of their careers, but they want to experience the joys of family that their male counterparts were able to share while still pursuing their professional goals, an option far more difficult for the childbearer and primary care provider.
Many successful men start first, second, or third families later in their lives, so why should we criticize women who want to bear a first child, when, thanks to science, it is no longer "too late?"
Sylvia Paull is the founder of Gracenet (www.gracenet.net).
Last year, Steven Spielberg directed a film, based upon a Stanley Kubrick project, entitled "A.I. Artificial Intelligence". The film depicts a robotic child who develops human emotions. Is such a thing possible? Could a sufficiently complex and appropriately designed computer embody human emotions? Or is this simply a fanciful notion that the public and some scientists who specialize in artificial intelligence just wish could be true?
I don't think that computers will ever become conscious, and I view Spielberg's depiction of a conscious, feeling robot as a good example of what might be called the "Spielberg Principle," which states: "When a Steven Spielberg film depicts a world-changing scientific event, the likelihood of that event actually occurring approaches zero." In other words, our wishes and imagination often have little to do with what is scientifically likely or possible. For example, although we might wish for contact with other beings in the universe, as portrayed in the Spielberg movie "E.T.", the astronomical distances between our solar system and the rest of the universe make an E.T.-like visit extremely unlikely.
The film A.I. and the idea contained within it that robots could someday become conscious is another case in which our wishes exceed reality. Despite enormous advances in artificial intelligence, no computer is able to experience a pin prick like a simple frog, or get hungry like a rat, or become happy or sad like all of us carbon-based units. But why is this the case? It is my conjecture that this is because there are some features of being alive that make mind, consciousness, and feelings possible. That is, only living things are capable of the markers of mind such as intentionality, subjectivity, and self-awareness. But the important question of the link between life and the creation of consciousness remains a great scientific mystery, and the answer will go a long way toward our understanding of what a mind actually is.
Todd E. Feinberg, MD is Chief of the Yarmon Neurobehavior and Alzheimer's Disease Center, Beth Israel Medical Center.
"To be or not to be?"
Yet the fact is that in the human case (and maybe the human case alone) natural selection has devised a peculiarly effective trick for persuading individual survival machines to fulfill this seemingly bleak role. Every human being is endowed with the mental programs for developing a "conscious self" or "soul": a soul which not only values its own survival but sees itself as very much an end in its own right (in fact a soul which, in a fit of solipsism, may even consider itself the one and only source of all the ends there are!). Such a soul, besides doing all it can to ensure its own basic comfort and security, will typically strive for self-development: through learning, creativity, spiritual growth, symbolic expression, consciousness-raising, and so on. These activities redound to the advantage of mind and body. The result is that such "selfish souls" do indeed make wonderful agents for "selfish genes".
There has, however, always been a catch. Naturally-designed "survival machines" are not, as the name might imply, machines designed to go on and on surviving: instead they are machines designed to survive only up to a point, this being the point where the genes they carry have nothing more to gain (or even things to lose) from continued life. For it's a sobering fact that genes are generally better off taking passage and propagating themselves in younger machines than older ones (the older ones will have begun to accumulate defects, to have become set in their ways, to have acquired more than enough dependents, etc.). It suits genes therefore that their survival machines should have a limited life-time, after which they can be scrapped.
Thus, in a scenario that has all the makings of tragedy (if not a tragic farce), natural selection has, on the one hand, been shaping up individual human beings, at the level of their souls, to believe in themselves and their intrinsic worth, while on the other hand taking steps to ensure that these same individuals, at the level of their bodies, grow old and die, and, most likely, since by this stage of a life the genes no longer have any interest in preventing it, die miserably, painfully and in a state of dreadful disillusion.
However, here's the second catch. In order for this double-game that the genes are playing to be successful, it's essential that the soul they've designed does not see what's coming and realise the extent to which it has been duped, at least until it is too late. But this means preventing the soul, or at any rate cunningly diverting it, from following some of the very lines of inquiry on which it has been set up to place its hopes: looking to the future, searching for eternal truths, and so on. In Camus' words, "Beginning to think is beginning to be undermined."
The history of human psychology and culture has revolved around this contradiction built into human nature. Science has not had much to say about it. But it may yet.
Nicholas Humphrey is a theoretical psychologist at the London School of Economics, and author of Leaps of Faith.
The brain remains highly active during sleep, so the simple explanation that we sleep in order to rest cannot be the whole story. Activity in the sleeping brain is largely hidden from us because very little that occurs during sleep directly enters consciousness. However, electrical recordings and, more recently, brain imaging experiments during slow-wave sleep have revealed highly ordered patterns of activity that are much more spatially and temporally coherent than brain activity during states of alertness. Slow-wave sleep alternates during the night with rapid eye movement (REM) sleep, during which dreams occur and muscles are paralyzed. For the last 10 years my colleagues and I have been building computer models of interacting neurons that can account for rhythmic brain activity during sleep.
Computer models of the sleeping brain and recent experimental evidence point toward slow-wave sleep as a time during which brain cells undergo extensive structural reorganization. It takes many hours for the information acquired during the day to be integrated into long-term memory through biochemical reactions. Could it be that we go to sleep every night in order to remember better and think more clearly?
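The coherent rhythms such models aim to capture can be illustrated, very loosely, with a toy simulation. The sketch below is not the conductance-based thalamocortical modeling described above; it is a generic Kuramoto model of coupled phase oscillators, with all parameters hypothetical, showing how coupling alone can drive a population of "neurons" from incoherent activity into the kind of spatially and temporally coherent oscillation recorded during slow-wave sleep.

```python
import math
import random

# Toy Kuramoto model: N coupled phase oscillators (NOT the detailed
# thalamocortical models Sejnowski describes; a generic illustration).
random.seed(0)
N, K, dt, steps = 100, 2.0, 0.05, 2000
omega = [random.gauss(1.0, 0.1) for _ in range(N)]        # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # random phases

def coherence(phases):
    """Order parameter r in [0, 1]: 0 = incoherent, 1 = perfect synchrony."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

r0 = coherence(theta)
for _ in range(steps):
    # Mean-field form: each oscillator is pulled toward the mean phase psi
    # with a strength proportional to the current coherence r.
    re = sum(math.cos(p) for p in theta) / N
    im = sum(math.sin(p) for p in theta) / N
    r, psi = math.hypot(re, im), math.atan2(im, re)
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]
rT = coherence(theta)
print(f"coherence before: {r0:.2f}, after: {rT:.2f}")
```

With coupling well above the critical strength, the population locks into a single coherent rhythm; with the coupling set near zero, the phases stay scattered.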
Introspection is misleading in trying to understand the brain, in part because much of the processing that takes place to support seeing, hearing and decision-making is subconscious. In studying the brain during sleep, when we are aware of almost nothing, we may get a better understanding of the brain's secret life and uncover some of the elusive principles that make the mind so elusive.
Terrence Sejnowski, a computational neurobiologist and Professor at the Salk Institute for Biological Studies, is a coauthor of Thalamocortical Assemblies: How Ion Channels, Single Neurons and Large-Scale Networks Organize Sleep Oscillations.
"What makes a genius, and how can we have more of them?"
Howard Morgan is Vice-Chairman, Idealab.
"Why do people, even identical twins, differ from one another?"
But if George and Donald are like most identical twins, they aren't identical in personality. Identical twins are more alike than fraternal twins or ordinary siblings, but less alike than you would expect. One might be more meticulous than the other, or more outgoing, or more emotional. The weird thing is that the degree of similarity is the same, whether twins are reared together or apart. George and Donald, according to their grandfather, "not only have the same genes but also have the same environment and upbringing." And yet they are no more alike in personality than twins reared by two different sets of parents in two different homes.
We know that something other than genes is responsible for some of the variation in human personality, but we are amazingly ignorant about what it is and how it works. Well-designed research has repeatedly failed to confirm commonly held beliefs about which aspects of a child's environment are important. The evidence indicates that neither those aspects of the environment that siblings have in common (such as the presence or absence of a caring father) nor those that supposedly widen the differences between siblings (such as parental favoritism or competition between siblings) can be responsible for the non-genetic variation in personality. Nor can the vague idea of an "interaction" between genes and environment save the day. George and Donald have the same genes, so how can an interaction between genes and environment explain their differences?
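The accounting behind claims like these can be made concrete with the behavioral geneticists' standard toy decomposition (Falconer's formula), which splits trait variance into genetic, shared-environment, and unexplained components using twin correlations. The correlations below are hypothetical round numbers, chosen only to show the structure of the argument, not real findings.

```python
# Toy variance decomposition from twin correlations (Falconer's formula).
# r_mz and r_dz are HYPOTHETICAL personality correlations for identical
# (monozygotic) and fraternal (dizygotic) twins, for illustration only.
r_mz = 0.50
r_dz = 0.25

A = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
C = r_mz - A            # shared family environment
E = 1 - r_mz            # non-shared / unexplained variance

print(f"genetic: {A:.2f}, shared environment: {C:.2f}, unexplained: {E:.2f}")
# → genetic: 0.50, shared environment: 0.00, unexplained: 0.50
```

The point Harris makes falls out of the arithmetic: because even identical twins correlate well below 1.0, a large slice of the variance is neither genetic nor attributable to the family environment the twins share.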
Only two hypotheses are compatible with the existing data. One, which I proposed in my book The Nurture Assumption, is that the crucial experiences that shape personality are those that children have outside their home. Unfortunately, there is as yet insufficient evidence to support (or disconfirm) this hypothesis.
The remaining possibility is that the unexplained variation in personality is random. Even for reared-together twins, there are minor, random differences in their experiences. I find it …
If these random physical differences in the brain are responsible for some or all of the personality differences between identical twins, they must also be responsible for some or all of the non-genetic variation in personality among the rest of us. "All" is highly unlikely; "some" is almost certainly true. What remains in doubt is not whether, but how much.
The bottom line is that scientists will probably never be able to predict human behavior with anything close to certainty. Next question: Is this discouraging news or cause for celebration?
Judith Rich Harris is a developmental psychologist and author of The Nurture Assumption: Why Children Turn Out The Way They Do.
We do not know whether there are other universes. Perhaps we never shall. But I want to respond to Paul Davies' questions by arguing that "do other universes exist?" can be a genuine scientific question. Moreover, I shall outline why it is an interesting question; and why, indeed, I already suspect that the answer may be "yes".
First, a pre-emptive and trivial comment: if you define the universe as "everything there is", then by definition there cannot be others. I shall, however, follow the convention among physicists and astronomers, and define the "universe" as the domain of space-time that encompasses everything that astronomers can observe. Other "universes", if they existed, could differ from ours in size, content, dimensionality, or even in the physical laws governing them.
It would be neater, if other "universes" existed, to redefine the whole enlarged ensemble as "the universe", and then introduce some new term (for instance, "the metagalaxy") for the domain that cosmologists and astronomers have access to. But so long as these concepts remain so conjectural, it is best to leave the term "universe" undisturbed, with its traditional connotations, even though this then demands a new word, the "multiverse", for a (still hypothetical) ensemble of "universes."
Status Of Other Universes
There is actually a blurred transition between the readily observable and the absolutely unobservable, with a very broad grey area in between. To illustrate this, one can envisage a succession of horizons, each taking us further than the last from our direct experience:
This step-by-step argument (those who don't like it might dub it a slippery slope argument!) suggests that whether other universes exist or not is a scientific question. But it is of course speculative science. The next question is, can we put it on a firmer footing? What could it explain?
Scenarios For A Multiverse
At first sight, nothing seems more conceptually extravagant, more grossly in violation of Ockham's Razor, than invoking multiple universes. But this concept is a natural consequence of several different theories (albeit all speculative). Andrei Linde, Alex Vilenkin and others have performed computer simulations depicting an "eternal" inflationary phase in which many universes sprout from separate big bangs into disjoint regions of spacetime. Alan Guth and Lee Smolin have, from different viewpoints, suggested that a new universe could sprout inside a black hole, expanding into a new domain of space and time inaccessible to us. And Lisa Randall and Raman Sundrum suggest that other universes could exist, separated from us in an extra spatial dimension; these disjoint universes may interact gravitationally, or they may have no effect whatsoever on each other.
There could be another universe just a few millimetres away from us. But if those millimetres were measured in some extra spatial dimension then to us (imprisoned in our 3-dimensional space) the other universe would be inaccessible. In the hackneyed analogy where the surface of a balloon represents a two-dimensional universe embedded in our three-dimensional space, these other universes would be represented by the surfaces of other balloons: any bugs confined to one, and with no conception of a third dimension, would be unaware of their counterparts crawling around on another balloon. Variants of such ideas have been developed by Paul Steinhardt, Neil Turok and others. Guth and Edward Harrison have even conjectured that universes could be made in some far-future laboratory, by imploding a lump of material to make a small black hole. Could our entire universe perhaps then be the outcome of some experiment in another universe? If so, the theological arguments from design could be resuscitated in a novel guise. Smolin speculates that the daughter universe may be governed by laws that bear the imprint of those prevailing in its parent universe. If that new universe were like ours, then stars, galaxies and black holes would form in it; those black holes would in turn spawn another generation of universes; and so on, perhaps ad infinitum.
Parallel universes are also invoked as a solution to some of the paradoxes of quantum mechanics, in the "many worlds" theory, first advocated by Hugh Everett and John Wheeler in the 1950s. This concept was prefigured by Olaf Stapledon, in his 1937 novel, as one of the more sophisticated creations of his Star Maker: "Whenever a creature was faced with several possible courses of action, it took them all, thereby creating many ... distinct histories of the cosmos. Since in every evolutionary sequence of this cosmos there were many creatures and each was constantly faced with many possible courses, and the combinations of all their courses were innumerable, an infinity of distinct universes exfoliated from every moment of every temporal sequence".

None of these scenarios has been simply dreamed up out of the air: each has a serious, albeit speculative, theoretical motivation. However, one of them, at most, can be correct. Quite possibly none is: there are alternative theories that would lead to just one universe. Firming up any of these ideas will require a theory that consistently describes the extreme physics of ultra-high densities, how structures on extra dimensions are configured, etc. But consistency is not enough: there must be grounds for confidence that such a theory isn't a mere mathematical construct, but applies to external reality. We would develop such confidence if the theory accounted for things we can observe that are otherwise unexplained. At the moment, we have an excellent framework, called the standard model, that accounts for almost all subatomic phenomena that have been observed. But the formulae of the "standard model" involve numbers which can't be derived from the theory but have to be inserted from experiment.
Perhaps, in the 21st century, physicists will develop a theory that yields insight into (for instance) why there are three kinds of neutrinos, and the nature of the nuclear and electric forces. Such a theory would thereby acquire credibility. If the same theory, applied to the very beginning of our universe, were to predict many big bangs, then we would have as much reason to believe in separate universes as we now have for believing inferences from particle physics about quarks inside atoms, or from relativity theory about the unobservable interior of black holes.
Laws, Or Mere Bylaws?
As an analogy (one which I owe to Paul Davies) consider the form of snowflakes. Their ubiquitous six-fold symmetry is a direct consequence of the properties and shape of water molecules. But snowflakes display an immense variety of patterns because each is moulded by its micro-environment: how each flake grows is sensitive to the fortuitous temperature and humidity changes during its downward drift. If physicists achieved a fundamental theory, it would tell us which aspects of nature were direct consequences of the bedrock theory (just as the symmetrical template of snowflakes is due to the basic structure of a water molecule) and which are (like the distinctive pattern of a particular snowflake) the outcome of accidents. The accidental features could be imprinted during the cooling that follows the big bang, rather as a piece of red-hot iron becomes magnetised when it cools down, but with an alignment that may depend on chance factors. It may turn out (though this would be a disappointment to many physicists if it did) that the key numbers describing our universe, and perhaps some of the so-called constants of laboratory physics as well, are mere "environmental accidents", rather than being uniquely fixed throughout the multiverse by some final theory. This is relevant to some now-familiar arguments (explored further in my book Our Cosmic Habitat) about the surprisingly fine-tuned nature of our universe.
Fine Tuning: A Motivation For Suspecting That Our "Universe" Is One Of Many
The nature of our universe depended crucially on a recipe encoded in the big bang, and this recipe seems to have been rather special. A degree of fine tuning in the expansion speed, the material content of the universe, and the strengths of the basic forces seems to have been a prerequisite for the emergence of the hospitable cosmic habitat in which we live. Here are some prerequisites for a universe containing organic life of the kind we find on Earth:
First of all, it must be very large compared to individual particles, and very long-lived compared with basic atomic processes. Indeed this is surely a requirement for any hypothetical universe that a science fiction writer could plausibly find interesting. If atoms are the basic building blocks, then clearly nothing elaborate could be constructed unless there were huge numbers of them. Nothing much could happen in a universe that was too short-lived: an expanse of time, as well as space, is needed for evolutionary processes. Even a universe as large and long-lived as ours could be very boring: it could contain just black holes, or inert dark matter, and no atoms at all; it could even be completely uniform and featureless. Moreover, unless the physical constants lie in a rather narrow range, there would not be the variety of atoms required for complex chemistry.
If our existence depends on a seemingly special cosmic recipe, how should we react to the apparent fine tuning? There seem to be three lines to take: we can dismiss it as happenstance; we can acclaim it as the workings of providence; or (my preference) we can conjecture that our universe is a specially favoured domain in a still vaster multiverse. Some seemingly "fine tuned" features of our universe could then only be explained by "anthropic" arguments, which are analogous to what any observer or experimenter does when they allow for selection effects in their measurements: if there are many universes, most of which are not habitable, we should not be surprised to find ourselves in one of the habitable ones.
Specific Multiverse Theories Here And Now
We could apply this style of reasoning to the important numbers of physics (for instance, the cosmological constant lambda) to test whether our universe is typical of the subset that could harbour complex life. Lambda has to be below a threshold to allow protogalaxies to pull themselves together by gravitational forces before gravity is overwhelmed by cosmical repulsion (which happens earlier if lambda is large). An unduly fierce cosmic repulsion would prevent galaxies from forming.
Suppose, for instance, that (contrary to current indications) lambda was thousands of times smaller than it needed to be merely to ensure that galaxy formation wasn't prevented. This would raise suspicions that it was indeed zero for some fundamental reason. (Or that it had a discrete set of possible values, and all the others were well above the threshold.)
The methodology requires us to decide what values of a particular physical parameter are compatible with our emergence. It also requires a specific theory that gives the relative Bayesian priors for any particular value. For instance, in the case of lambda, are all values equally probable? Are low values favoured by the physics? Or is there a finite number of discrete possible values, depending on how the extra dimensions "roll up"? With this information, one can then ask if our actual universe is "typical" of the subset in which we could have emerged. If it is a grossly atypical member even of this subset (not merely of the entire multiverse) then we would need to abandon our hypothesis. By applying similar arguments to the other numbers, we could check whether our universe is typical of the subset that could harbour complex life. If so, the multiverse concept would be corroborated.
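The structure of this typicality test can be sketched with a toy Monte Carlo. Everything below is hypothetical for illustration: lambda is drawn from a flat prior on [0, 1] across the ensemble, observers arise only when lambda is below a threshold of 0.1, and "our" observed value is taken to be 0.07.

```python
import random

# Toy anthropic-selection test. All numbers are HYPOTHETICAL stand-ins,
# not real cosmological values.
random.seed(1)
THRESHOLD = 0.1    # galaxies (hence observers) form only below this lambda
OBSERVED = 0.07    # pretend this is the value we measure in our universe

# Flat Bayesian prior over the multiverse.
draws = [random.uniform(0.0, 1.0) for _ in range(100_000)]
habitable = [lam for lam in draws if lam < THRESHOLD]

# How typical is our value within the habitable subset? If almost no
# habitable universe has lambda as large as ours, our universe is a
# grossly atypical member even of that subset, and (on Rees's logic)
# the multiverse hypothesis would be in trouble.
tail = sum(lam >= OBSERVED for lam in habitable) / len(habitable)
print(f"habitable fraction of ensemble: {len(habitable) / len(draws):.3f}")
print(f"P(lambda >= observed | habitable): {tail:.3f}")
```

With these made-up numbers the observed value sits comfortably inside the habitable subset, so the toy "measurement" would corroborate, not refute, the selection-effect story; a different prior (say, one strongly favouring low values) would change the verdict, which is exactly why the methodology demands a specific theory of the priors.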
As another example of how "multiverse" theories can be tested, consider Smolin's conjecture that new universes are spawned within black holes, and that the physical laws in the daughter universe retain a memory of the laws in the parent universe: in other words, there is a kind of heredity. Smolin's concept is not yet bolstered by any detailed theory of how any physical information (or even an arrow of time) could be transmitted from one universe to another. It has, however, the virtue of making a prediction about our universe that can be checked. If Smolin were right, universes that produce many black holes would have a reproductive advantage, which would be passed on to the next generation. Our universe, if an outcome of this process, should therefore be near-optimum in its propensity to make black holes, in the sense that any slight tweaking of the laws and constants would render black hole formation less likely. (I personally think Smolin's prediction is unlikely to be borne out, but he deserves our thanks for presenting an example that illustrates how a multiverse theory can in principle be vulnerable to disproof.)

These examples show that some claims about other universes may be refutable, as any good hypothesis in science should be. We cannot confidently assert that there were many big bangs; we just don't know enough about the ultra-early phases of our own universe. Nor do we know whether the underlying laws are "permissive": settling this issue is a challenge to 21st century physicists. But if they are, then so-called anthropic explanations would become legitimate; indeed, they'd be the only type of explanation we'll ever have for some important features of our universe.
Some theorists have a strong prior preference for the simplest universe and are upset by these developments. It now looks as though a craving for such simplicity will be disappointed. Perhaps we can draw a parallel with debates that occurred 400 years ago. Kepler discovered that planets moved in ellipses, not circles. Galileo was upset by this. In his "Dialogues concerning the two chief systems of the world" he wrote "For the maintenance of perfect order among the parts of the Universe, it is necessary to say that movable bodies are movable only circularly".
To Galileo, circles seemed more beautiful; and they were simpler: they are specified by just one number, the radius, whereas an ellipse needs an extra number to define its shape (the "eccentricity"). Newton later showed, however, that all elliptical orbits could be understood by a single unified theory of gravity. Had Galileo still been alive when Principia was published, Newton's insight would surely have joyfully reconciled him to ellipses.
The parallel is obvious. A universe with at least three very different ingredients may seem ugly and complicated. But maybe this is our limited vision. Our Earth traces out just one ellipse out of an infinity of possibilities, its orbit being constrained only by the requirement that it allows an environment conducive for evolution (not getting too close to the Sun, nor too far away). Likewise, our universe may be just one of an ensemble of all possible universes, constrained only by the requirement that it allows our emergence. So I'm inclined to go easy with Ockham's razor: a bias in favour of "simple" cosmologies may be as short-sighted as was Galileo's infatuation with circles.
What we've traditionally called "the universe" may be the outcome of one big bang among many, just as our Solar System is merely one of many planetary systems in the Galaxy. Just as the pattern of ice crystals on a freezing pond is an accident of history, rather than being a fundamental property of water, so some of the seeming constants of nature may be arbitrary details rather than being uniquely defined by the underlying theory. The quest for exact formulas for what we normally call the constants of nature may consequently be as vain and misguided as was Kepler's quest for the exact numerology of planetary orbits. And other universes will become part of scientific discourse, just as "other worlds" have been for centuries. We may one day have a convincing theory that accounts for the very beginning of our universe, tells us whether a multiverse exists, and (if so) whether some so-called laws of nature are just parochial by-laws in our cosmic patch, within a physical reality that may be vastly larger than the domain we can now (or, indeed, can ever) observe. Most physicists hope to discover a fundamental theory that will offer unique formulae for all the constants of nature. But perhaps what we've traditionally called our universe is just an atom in an ensemble, a multiverse punctuated by repeated big bangs, where the underlying physical laws permit diversity among the individual universes.
Even though some physicists still foam at the mouth at the prospect of being "reduced" to these so-called anthropic explanations, such explanations may turn out to be the best we can ever discover for some features of our universe (just as they are the best explanations we can offer for the shape and size of Earth's orbit). Cosmology will have become more like the science of evolutionary biology. Nonetheless (and here physicists should gladly concede to the philosophers), any understanding of why anything exists, why there is a universe (or multiverse) rather than nothing, remains in the realm of metaphysics.
Sir Martin Rees, a cosmologist, is Royal Society Professor at King's College, Cambridge. He directs a research program at Cambridge's Institute of Astronomy. His most recent book is Our Cosmic Habitat.
"What will people think about the soul?"
You might think that this will soon change. After all, people once thought the earth was flat and that mental illness was caused by demonic possession. But the belief in the immaterial soul is different. It is rooted in our experience: our gut feeling, after all, is not that we are bodies; it is that we occupy them. Even young children are dualists: they appreciate and enjoy tales in which a person leaves his body and goes to faraway lands, or in which a frog turns into a prince. And when they come to think about death, they readily accept that the soul lives on, drifting into another body or ascending to another world.
When the public hears about research into the neural basis of thought, they learn about specific findings: this part of the brain is involved in risk taking, that part is active when someone thinks about music, and so on. But the bigger picture is not yet generally appreciated, and it is an interesting question how people will react when it is. (We are seeing the first signs now, much of it in the recent work of novelists such as Jonathan Franzen, David Lodge, and Ian McEwan.) It might be that non-specialists will learn to live with the fact that their gut intuitions are mistaken, just as non-physicists accept that apparently solid objects are composed of tiny moving particles. But this may be optimistic. The notion that our souls are flesh is profoundly troubling, in large part because of what it means for the idea of life after death. The same sorts of controversies that raged over the study and teaching of evolution in the 20th century might well spill over to the cognitive sciences in the years to follow.
Paul Bloom is Professor of Psychology at Yale and author of How Children Learn the Meanings of Words (Learning, Development, and Conceptual Change).
Of course this is one of the oldest philosophical questions in science but still one of the most mysterious. For most of Western history the canonical answer has been some version of Platonism, some variation on the essentially Pythagorean idea that the material universe has been formed according to a set of transcendent and a priori mathematical relations or laws. These relations/laws Pythagoras himself called the divine harmonia of the cosmos, and they have often been referred to since as the "cosmic harmonies" or the "music of the spheres". For Pythagoras numbers were actually gods, and the quest for mathematical relations in nature was a quest for the divine archetypes by which he believed that matter had literally been in-formed. Throughout the age of science, and even today, most physicists seem to be Platonists. Many are even Pythagoreans, implicitly (if not always with much conscious reflection) making an association between the mathematical laws of nature and a transcendent being. The common association today of a "theory of everything" with "the mind of God" is simply the latest efflorescence of a two-and-a-half-millennia-old tradition which has always viewed physics as a quasi-religious activity.
Can we get beyond Platonism in our understanding of nature's undeniable propensity to realize extraordinarily sophisticated mathematical relations? Although I began my own life in science as a Platonist, I have come to believe that this philosophical position is insupportable. It is not a rationally justifiable position at all, but simply a faith. Which is fine if one is prepared to admit as much, something few physicists seem willing to do. To believe in an a priori set of laws (perhaps even a single law) by which physical matter had to be informed seems to me just a disguised version of deism, an outgrowth of Judeo-Christianity wrapped up in scientific language. I believe we should do better than this, that we should articulate (and need to articulate) a post-Platonist understanding of the so-called "laws of nature." It is a far from easy task, but not an impossible one. Just as the mathematician Brian Rotman has put forward a post-Platonist account of mathematics, we need to achieve a similar move for physics and our mathematical description of the world itself.
Margaret Wertheim is a science writer and commentator and the author of The Pearly Gates of Cyberspace: A History of Space from Dante to the Internet.
Now, we don't even have underground testing, TV has gone cable, wireless is going spread-spectrum, technology has grown microscopic, our children encrypt text with PGP and swap audio via MP3, and Wolfman Jack no longer broadcasts across the New Mexico desert at 50,000 watts.
Fermi's question is still worth asking and may not be the paradox we once thought.
George Dyson is a historian among futurists and the author of Darwin Among the Machines.
That question strikes me as being as infinitely perplexing and personal as, What's the meaning of life? But that's the beauty of its ambiguity, and I enjoy the challenge of grasping at its slippery complexity.
Recent insights into the neural basis of memory have provided a couple of key pieces to the puzzle of learning. The neuropsychological research on "elaborative encoding," for example, has shown that the long-term retention of information involves a spontaneous, connection-making process that produces web-like associative linkages of evocative images, words, objects, events, ideas, sensory impressions and experiences.
Other insights have emerged from the exploratory work on learning that's being conducted in the fields of education and business, which involves constructing multi-dimensional symbolic models. The symbolic modeling process enables people to give form to their thoughts, ideas, knowledge, and viewpoints. By making tangible the unconscious creative process by which we use our tacit and explicit knowledge, the symbolic models help reveal what we think, how we think and what we remember. They represent our thought processes in a deep and comprehensive way, showing the different ways we use our many intelligences, styles of learning, and creative inquiry. In effect, the models demonstrate how people create things to remember, and remember things, by engaging in a form of physical thinking.
As Dr. Barry Gordon of Johns Hopkins School of Medicine states, "What we think of as memories are ultimately patterns of connection among nerve cells." The Harvard psychologist Daniel Schacter arrived at a similar conclusion when examining the "unconscious processes of implicit knowledge" and their relation to memory.
When our brains are engaged by information that, literally and figuratively speaking, "connects with us" (in more ways than one), we not only remember it better, but tend to creatively act on it as well. Symbolic modeling makes this fact self-evident.
Answers remain to be seen in our connection-making process. This private act of creation is becoming increasingly more public and apparent through functional MRI studies and other medical imaging techniques. Perhaps a more productive strategy for illuminating this connection-making process would be to combine these high-tech "windows" to the world of the mind with low-tech imaging tools, such as symbolic modeling. The combination of these tools would provide a more comprehensive picture of learning.
Todd Siler is the founder and director of Psi-Phi Communications and author of Think Like A Genius.
Religious figures have appealed to people to overrule their greed with a concern for some higher good. In our supposedly scientific age, these arguments have lost their force. Instead our public affairs are governed by the idea that people should simply be as free as possible to choose what they want.
Chris Anderson is the incoming Chairman and Host of the TED Conference (Technology, Education, Design) held each February in Monterey, California and formerly a magazine publisher (Future Publishing).
(As a poet, I don't think I need to explicate the question.)
Gerd Stern is a poet, media artist and cheese maven and the author of an oral history From Beat Scene Poet to Psychedelic Multimedia Artist 1948-1978.
"What is the nature of fads, fashions, crazes, and financial manias? Do they share a structure that can in turn be found at the core of more substantial changes in a culture? In other words, is there an engine of change to be found in the simple fad that can explain and possibly predict or accelerate broader changes that we regard as less trivial than "mere" fads? And more importantly, can we quantify the workings of this engine if we decide that it exists?"
I have shelves of books and papers by smart people who have brushed up against the edge of this question but who have seldom attacked it head on. I'm drawn to the question, and have been obsessed with it for years, because I think it's one of the big ones. It touches on everything humans do.
Fashions and fads are everywhere; in things as diverse as food, furnishings, clothes, flowers, children's names, haircuts, body image, even disease symptoms and surgical operations. Apparently, even the way we see Nature and frame questions about it is affected to some extent by fashion; at least according to those who would like to throw cold water on somebody else's theory. (In the current discussion, Paul Davies says, "Of late, it is fashionable among leading physicists and cosmologists to suppose that alongside the physical world we see lies a stupendous array of alternative realities.")
But the ubiquity of fads has not led to deep understanding, even though there are serious uses to which a working knowledge of fads could be put. A million children each year die of dehydration, often where rehydration remedies are available. What if rehydration became fashionable among those children's mothers? Public health officials have many times tried to make various behaviors fashionable. In promoting the use of condoms in the Philippines or encouraging girls in Africa to remain in school, they've reached for popular songs and comic books to deliver the message, hoping to achieve some kind of liftoff. Success has been real, but too often temporary or sporadic. Would a richer understanding of fads have helped them create better ones?
In trying to understand these phenomena, writers have been engaged in a conversation that has spanned more than a hundred years. In 1895 Gustave Le Bon's speculations on "The Crowd" contained some cockeyed notions, and some that are still in use today. Ludwik Fleck, writing on "The Evolution of a Scientific Fact" in the thirties, in part inspired Thomas Kuhn's writings on the structure of scientific revolutions in the sixties. Everett Rogers's books on the "Diffusion of Innovations" led to hundreds of other books on the subject and made terms like early adopters and agents of change part of the language. For several decades positive social change has been attempted through a practice called Social Marketing, derived in part from advertising techniques. Diffusion and social marketing models have been used extensively in philanthropy, often with success. But to my knowledge these techniques have not yet led to a description of the fad that's detailed and testable.
Malcolm Gladwell was stimulating in identifying elements of the fad in The Tipping Point, but we are still left with a recipe that calls for a pinch of this and a bit, but not too much, of that.
Richard Dawkins made a dazzling frontal assault on the question when he introduced the idea of memes in The Selfish Gene. The few pages he devoted to the idea have inspired a number of books and articles in which the meme is considered to be a basic building block of social change, including fads. But as far as I can tell, the meme is still a fascinating idea that urges us toward experiments that are yet to be done.
Whether memes or some other formulation turns out to be the engine of fads, the process seems to go like this: a signal of some kind produces a response that in turn acts as a signal to the next person, with the human propensity for imitation possibly playing a role. This process of signal-response-signal might then spread with growing momentum, looking something like biological contagion. But other factors may also apply, as in Steve Strogatz's examination of how things sync up with one another. Or Duncan Watts's exploration of how networks of all kinds follow certain rules of efficiency. Or the way crowds panic in a football stadium or a riot. Or possibly even the studies on the way traffic flows, including the backward-generated waves that cause mysterious jams. The patterns of propagation may turn out to be more interesting than anything else.
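For readers who like to see the mechanism laid bare, the signal-response-signal process described above can be sketched as a toy simulation. This is only an illustration, not anyone's published model: every name and parameter here (the population size, the number of contacts, the adoption probability) is an assumption chosen for the sketch.

```python
import random

def simulate_fad(n_people=200, contacts=4, p_adopt=0.3, seed=1):
    """Toy signal-response-signal model of a fad.

    Each step, every current adopter exposes a few random contacts,
    and each exposure converts a non-adopter with probability p_adopt.
    Returns the adopter count after each step.
    """
    random.seed(seed)
    adopted = {0}                 # one initial signaler
    history = [len(adopted)]
    while True:
        new = set()
        for _person in adopted:
            for _ in range(contacts):
                other = random.randrange(n_people)
                if other not in adopted and random.random() < p_adopt:
                    new.add(other)
        if not new:               # momentum exhausted: the fad dies out
            break
        adopted |= new
        history.append(len(adopted))
    return history

curve = simulate_fad()
print(curve)                      # adopter count per step
```

Even this crude sketch reproduces the contagion-like shape the paragraph describes: a small start, a steep middle, and saturation once most of the population has been reached.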
Fads and fashions have not been taken very seriously, I think, for at least three reasons. They seem short-lived, they're often silly and they seem like a break with normal, rational behavior. But as for being short-lived, the history of fads gives plenty of examples of fads that died out only to come back again and again, eventually becoming customary, including the use of coffee, tomatoes and hot chocolate. As for silliness, some fashions are not as silly as they seem. Fashions having to do with the length of one's hair seem trivial; yet political and religious movements have often relied on the prominence or absence of hair as a rallying symbol. And fads are far from aberrational. There are probably very few people alive who, at any one time, are not under the sway of a fad or fashion, if not dozens of them. And this is not necessarily a vacation from rational behavior on our part. On the contrary, it might be essential to the way we maximize the effectiveness of our choices. Two economists in California have developed a mathematical model suggesting that in following the lead of others we may be making use of other people's experience in a way that gives us a slightly higher chance of success in adopting a new product. The economists say this may explain a burst of popularity in a new product and possibly throw light on fads themselves.
But another reason fads may not have been examined in more detail, and this could be the killer, is that at least for the moment they just seem too complicated. Trying to figure out how to track and explain change is one of the oldest and toughest of questions. Explaining change among people in groups is perhaps complex beyond measure, and may turn out to be undoable. It may forever be an art and not a science. But still, the humble fad is too tantalizing to ignore.
We take it for granted and dismiss it, even while we're in the rapture of it. This commonplace thing that sits there like the purloined letter may or may not turn out to contain a valuable message for us, but it is staring us in the face.
Alan Alda, an actor, writer and director, is currently playing Richard Feynman in the stage play QED at Lincoln Center in New York.
"What comes after Science? When?"
The paradigm of Question/Answer doesn't really work in my world as I've never really found Life, The Universe, and Everything (LU&E) and most (but not all) of its constituent parts and systems to be fundamentally amenable to it. From my research, I've come to a general conclusion that LU&E and most of its parts are fundamentally not knowable, or even humanly understandable in any linguistic or mathematical sense, except when framed in a more narrow set of terms, like "metaphor" or "pretend" or "just so".
A dear friend of mine once noted: "Nobody knows and you can't find out" and I largely agree with him. However, I can also say that, like being in the presence of a bucket of bricks, this is all more an experiential thing, more like a synchronistic aesthetic moment and less like a diachronistic or ahistorically definitive mathematical proposition or linguistically intelligible conclusion. So, one can't "know" it, nor can one "find out", but one can come to a sensibility that is convincing at the time and creatively informs one's behaviour and choices.
Hence, the only justice in this life is poetic, and everything else is just some tweaky form of petty revenge or (more typically in this life of entertainment and cultural anaesthesiology) dodging bullets while one waits for the big storm to blow over.
It can be infuriating (to me and most everyone else, it seems) when my work or research comes to such conclusions, but since when has there been some big carved-in-stone guarantee that it's supposed to make sense in the first place? Isn't a rational conclusion a bit presumptuous and arrogant? From what I can gather, it seems that the complete object of study fundamentally doesn't and shouldn't make sense (as sense seems to be a tiny subset surrounded by a vast multitude of complex forms of "nonsense"), and I see that not as a shortcoming on the part of the Universe so much as an indication of the limitations of human reason and the short time we get to spend on this planet.
But all this is probably not what you wanted to hear, so here's a good question that's been bugging me for years and if anyone wants to submit an answer, let me know - I'm all ears...
Mister Warwick asks:
"What comes after Science? When?"
Henry Warwick is an artist, composer, and scientist.
This is, I believe, the key question on which the quantum theory of gravity and our understanding of cosmology depend. We have made tremendous progress in recent years towards each goal, and we have come to the point where we need a new answer to this question to proceed further. The basic reason for this problem is that most notions of time, change and dynamics which physics, and science more generally, have used are background dependent. This means that they define time and change in terms of fixed points of reference which are outside the system under study and do not themselves change or evolve. These external points of reference usually include the observer and the clocks used to measure time. They constitute a fixed background against which time and change are defined. Other aspects of nature usually assumed to be part of the background are the properties of space, such as its dimensionality and geometry.
General relativity taught us that time and space are parts of the dynamical system of the world, that do themselves change and evolve in time. Furthermore, in cosmology we are interested in the study of a system that by definition contains everything that exists, including all possible observers. However, in quantum theory, observers seem to play a special role, which only makes sense if they are outside the system. Thus, to discover the right quantum theory of gravity and cosmology we must find a new way to formulate quantum theory, as well as the notions of time and change, to apply to a system with no fixed background, which contains all its possible observers. Such a theory is called background independent.
The transition from background dependent theories to background independent ones is a basic theme of contemporary science. Related to it is the change from describing things in terms of absolute properties intrinsic to a given elementary particle, to describing things in terms of relational properties, which define and describe any part of the universe only through its relationships to the rest.
In loop quantum gravity we have succeeded in constructing a background independent quantum theory of space and time. But we have not yet understood completely how to put the observer inside the universe. String theory, while it solves some problems, has not helped here, as it is so far a purely background dependent theory. Indeed string theory is unable to describe closed universes with a positive cosmological constant, such as observations now favor.
Among the ideas now in play which address this issue are Julian Barbour's proposal that time does not exist; Fotini Markopoulou's proposal to replace the single quantum theory relevant for observing a system from the outside with a whole family of quantum theories, each a description of what an observer might see from a particular event in the history of the universe; and 't Hooft's and Susskind's holographic principle. This last idea says that physics cannot describe precisely what is happening inside a region of space; instead we can only talk about information passing through the boundary of the region. I believe these ideas are relevant, but that none goes far enough, and that we need a radical reformulation of our ideas of time and change.
As the philosopher Peirce said over a century ago, it is fundamentally irrational to believe in laws of nature that are absolute and unchanging, and have themselves no origin or explanation. This is an even more pressing issue now, because we have strong evidence that the universe, or at least the part in which we live, came into existence just a few billion years ago. Were the laws of nature waiting around eternally for a universe to be created to which they could apply? To resolve this problem we need an evolutionary notion of law itself, where the laws themselves evolve as the universe does. This was the motivation for the cosmological natural selection idea that Martin Rees is so kind to mention. That is, as Peirce understood, the notions of evolution and self-organization must apply not just to living things in the universe, but the structure of the universe and the laws themselves.
Lee Smolin, a theoretical physicist, is a founding member and research physicist at the Perimeter Institute in Waterloo, Canada, and author of Three Roads to Quantum Gravity.
Three decades ago I began my first career working on a British television series called "Survival". Unlike the current "Survivor" series (about the politics of rejection while camping out) these were natural history documentaries on a par with the best of National Geographic and Sir David Attenborough: early recordings of humpback whales, insights on elephant behavior, the diminishing habitats of mountain gorillas and orangutans, a sweeping essay on the wildebeest migration, and my favorite, an innovative look at the ancient baobab tree.
In 2001 the "Survival" series died. It was a year when conservation efforts lagged across the board, along with other failures to take the long view. Survival programs may have told people what they could no longer bear to hear (that the human species is soiling its own den) without demonstrating constructive solutions. For example, there are precious few incentives to develop alternate energy sources despite the profound vulnerabilities that our dependence on foreign energy revealed yet again. We have no "Vision Thing," despite the many clues. "It's global warming, dude," a 28-year-old auto mechanic told The New York Times as he fished in the Hudson River; "I don't care if the whole planet burns up in a hundred years. If I can get me a fish today, it's cool by me."
Happily this provides a continuum to the question I posed at this forum in 1998:
What I was reaching for with that third person perspective was a selfless overview. What I've since found is that healing dances of Native Americans and some African peoples follow the saga of a hero or heroine, much the way you or I listen to Bob Dylan or Bonnie Raitt and identify with their lyrics.
While Carl Jung delved into the healing ritual archetype among many cultures, a new science called Biomusicology suggests even more ancient origins, tracing the inspiration for human music to natural sounds (the rhythm of waves lapping at the shore, rain and waterfalls, bird song, breathing, and our mother's heartbeat when we were floating in the womb). Songs of birds certainly influenced classical music, and the call and response patterns of birds were imitated in congregations and cotton fields, with shouts, which led to the Delta blues.
The salubrious influence of music, including research by Oliver Sacks, is featured in a Discovery Channel program that I helped research. "The Power of Music" will be broadcast in 2002, as will Sir David Attenborough's new series on a similar theme, "Songs of the Earth." But will these programs inspire viewers to relinquish their SUVs for a hydrogen-powered car? How does one convince people to address global warming when most minds are focused on the economy or terrorism?
Part one of this answer must include "An Ounce of Prevention." Richard A. Clarke, former White House director of counterterrorism, explained our ill-preparedness for September 11 this way: "Democracies don't prepare well for things that have never happened before." Another senior analyst said, "Unfortunately, it takes a dramatic event to focus the government's and public's attention." Finally, efforts to prevent hijackings have been responsive, rarely proactive.
As we devise our New Year's Resolutions, how many of us will wait for a scare (a positive diagnosis) before we quit smoking, drinking or sitting on our duff? Year 2002 should be the time when conservationists not only demand action, but persuade people everywhere that the demise of wild places can and should be stopped, and that some of our forces of habit (unneeded air conditioning, for example) will eventually affect our quality of life in ways of greater devastation. We need people to identify with the song lyrics of others, who may live in distant lands, and feel the brunt of global warming long before we do. But first we must learn to understand their language.
Delta Willis has searched for fossils alongside Meave and Richard Leakey, profiled physicists and paleontologists who draw inspiration from nature, and serves as chief contributor to the Fodor's Guide to Kenya & Tanzania.
Why do we ask Edge questions?
Why do we ask Edge questions that challenge the "anesthesiology" of accepted wisdom and so the traditional answers we are given as to who and what we are? In most societies, accepted wisdom is to be respected, not questioned, and who and what we are have long been decided by custom, elders, social betters and the sacred word of God. Moreover, why is it that the asking of Edge questions has only thrived and been encouraged in Western societies (with the help of such individuals as Socrates and the contributors to this Edge project)?
Children, it should be noted, readily ask Edge-type questions. The problem is that they stop when they become adults, except in the one civilization (with a few ups and downs) that started in Classical Greece: Western civilization.
"Are all our beliefs in gods a myth, a lie foolishly cherished, while blind hazard rules the world?" That perhaps is the first Edge question (Euripides, Hecabe, lines 490-491), and importantly a question raised not safely in private but before a large audience. Indeed, Euripides raised it to gain public reward. Greek playwrights wrote plays for competitions that were judged by ten randomly selected members of the audience, and given that Euripides wanted to win, he must have believed that the average Greek would bear hearing this Edge question raised about the Gods.
The public exploring of Edge questions is rare outside Western societies. Instead, "what was finally persuasive was appeal to established authority", and that, "the authority of tradition came to have more convincing effect than even direct observation and personal experience" (Robert Oliver, Communication And Culture In Ancient India And China, 1971). And as the Japanese scholar Hajime Nakamura noted, the Chinese "insisted that the traditional sacred books are more authoritative than knowledge based upon sense and inference" (Ways Of Thinking Of Eastern Peoples, 1964). Job might seem to be asking the Edge question "Why do the just suffer and the wicked flourish?" But the story of Job is not about rewarding Edge questioning but faith in the wisdom of God: "Who is this that darkens my counsel with words without knowledge".
This Edge question might be criticized as Eurocentric. But it was Western intellectuals who first asked the Edge question of whether one's own culture might be falsely privileged over others, and who so invented the idea of ethnocentricity.
So my Edge question is this: why is it only amongst adults in the Western world that tradition has been so insistently and constantly challenged by the raising of Edge questions?
John R. Skoyles is a researcher in the evolution of human intelligence in the light of recent discoveries about the brain, who, while a first-year student at LSE, published a theory of the origins of Western Civilization in Nature.
Response to John McCarthy:
John McCarthy asks how animal behavior is encoded in DNA. May I sharpen the question? One of the most remarkable manifestations of inherited behavior is the way birds navigate accurately whilst migrating over vast distances. I understand that part of this skill lies with the bird's ability to use the positions of stars as beacons. Does this imply that some avian DNA contains a map of the sky? Could a scientist in principle sequence the DNA and reconstruct the constellations?
Response to Martin Rees's response to my question:
Sir Martin Rees has eloquently outlined the key issues concerning the status of multiverse theories. I should like to make a brief response followed by a suggestion for further research.
Sir Martin raises the question of whether what we consider to be fundamental laws of physics are in fact merely local bylaws applicable to the universe we perceive. Implicit in this is the assumption that there are laws of some sort at all. By definition, a law is a property of nature that is independent of time. We still need to explain why universes come with such time-independent lawlike features, even if a vast and random variety of laws is on offer. One might try to counter this by invoking an extreme version of the anthropic theory in which there are no laws, just chaos. The apparent day-by-day lawfulness of the universe would then itself be anthropically selected: if a crucial regularity of nature suddenly failed, observers would die and cease to observe. But this theory seems rather easily falsified.
As Sir Martin points out, if a particular remarkable aspect of the laws is anthropically selected from a truly random set, then we would expect on statistical grounds the aspect concerned to be just sufficient to permit biological observers. Consider, then, the law of conservation of electric charge. At the atomic level, this law is implied by the assumed constancy of the fine-structure constant. (I shall sidestep recent claims that this number might vary over cosmological time scales.) Suppose there were no such fundamental law, and the unit of electric charge varied randomly from moment to moment? Would that be life-threatening? Not if the variations were small enough. The fine-structure constant affects atomic fine-structure, not gross structure, so that most chemical properties on which life as we know it depends are not very sensitive to the actual value of this number.
In fact, the fine-structure constant is known to be constant to better than one part in a hundred million. A related quantity, the anomalous magnetic moment of the electron, is known to be constant to even greater accuracy. Variations several orders of magnitude larger than this would not render the universe hostile to carbon-based life. So the constancy of electric charge at the atomic level is an example of a regularity of nature far in excess of what is demanded by anthropic considerations. Even a multiverse theory that treated this regularity as a bylaw would need to explain why such a bylaw exists.
I now turn to my meta-question of whether the multiverse might be no better than theism dressed up in modern scientific language. It is possible that this claim can be tested using a branch of mathematics known as algorithmic information theory, developed by Kolmogorov and Chaitin. This formalism offers a means to quantify Occam's Razor by quantifying the complexity of explanations. (Occam's Razor suggests that, all else being equal, we should prefer the simplest explanation of the facts.)
Applied to the question of how to explain certain fine-tuned bio-friendly aspects of the universe, the crude response "God made it that way" is infinitely complex (and therefore very unsatisfying), because God might have made any one of an infinite number of alternative universes. Put differently, the selection set (the "shopping list" of universes available to an omnipotent Deity) contains an infinite amount of information, so the act of selection from this set involves discarding this infinite quantity of information. In the same way, the multiverse contains an infinite amount of information. In this case we observers are the selectors, but we still discard an infinite quantity of information by failing to observe the other universes. A proper mathematical parameterization of various multiverse theories and various theological models should enable this comparison to be made precise.
I argued in my book The Mind of God that most attempts at ultimate explanations run into this "tower of turtles" problem: one has to start somewhere in the chain of reasoning, with a certain unproved given, be it God, mathematics, a physical principle, revelation, or something else. That is because of an implied dualism common to scientific and theistic explanations alike. In science the dualism is between states of the world and abstract laws. In theism it is between creature (i.e. the physical universe) and Creator.
But is this too simplistic? Might the physical world and its explanation be ultimately indecomposable? Should we consider alternative modes of description than one based on linear reasoning from an unproved given, which after all amounts to invoking a magical levitating superturtle at the base of the tower? That is what I meant by the "Third Way" in my original question.
Could our lack of theoretical insight in some of the most basic questions in biology in general, and consciousness in particular, be related to us having missed a third aspect of reality, which upon discovery will be seen to always have been there, equally ordinary as space and time, but so far somehow overlooked in scientific descriptions?
Is the arena of physics, constructed out of space and time with matter/energy tightly interwoven with both, sufficient to fully describe all of our material world? The most fundamental debates in cognitive science take a firm "yes" for granted. The question of the nature of mind then leaves open only two options: either a form of reductionism, or a form of escapism. The latter option, a dualist belief in a separate immaterial mental realm, has fallen out of favor, largely because of the astounding successes of natural science. The former, reductionism, is all that is left, whether it is presented in a crude form (denial of consciousness as real or important) or in a fancier form (using terms like emergence, as if that had any additional explanatory power).
The question I ask myself is whether there could not be another equally fundamental aspect to reality, on a par with space and time, and just as much part of the material world?
Imagine that some tribe had no clear concept of time. Thinking only in terms of space, they would have a neat way to locate everything in space, and they would scoff at superstitious notions that somehow there would be "something else", wholly other than space and the material objects contained therein. Of course they would see things change, but both during and after each change everything has its location, and the change would be interpreted as a series of purely spatial configurations.
Yet such a geometric view of the world is not very practical. In physics and in daily life we use time in a way just as fundamental as space. Even though everything is already "filled up" with space, similarly everything participates in time. Trying to explain that to the people of the no-time tribe may be difficult. They will see the attempt at introducing time as trying to sneak in a second type of space, perhaps a spooky, ethereal space, more refined in some way, imbued with different powers and possibilities, but still a geometric something, since it is in these terms that they are trained to think. And they probably would see no need for such a parallel pseudo-space.
In contrast, we do not consider time to be in any way less "physical" than space. Neither time nor space can be measured as such, but only through what they make possible: distances, durations, motion. While space and time are in some sense abstractions, and not perceivable as such, they are enormously helpful concepts in ordering everything that is perceivable into a coherent picture. Perhaps our problems in coming up with a coherent picture of mental phenomena tell us that we need another abstraction, another condition of possibility for phenomena in this world, this very material world we have always lived in.
Could it be that we are like that tribe of geometers, and that we have so far overlooked a third aspect of reality, even though it may be staring us in the face? Greek mathematicians used time to make their mathematical drawings and construct their theories, yet they disregarded time as nonessential in favor of a Platonic view of unchanging eternal truths. It took two thousand years until Newton and Leibniz invented infinitesimal calculus, which opened the door for time to finally enter mathematics, thus making mathematical physics possible.
To reframe my question: could our lack of theoretical insight in some of the most basic questions in biology in general, and consciousness in particular, be related to us having missed a third aspect of reality, which upon discovery will be seen to always have been there, equally ordinary as space and time, but so far somehow overlooked in scientific descriptions?
Although I don't know the answer, I suspect we will stumble upon it through a trigger that will come from engineering. Newton did not work in a vacuum. He built upon what Galileo, Descartes, Huygens and others had discovered before him, and many of those earlier investigations were triggered by concrete applications, in particular the construction of powerful cannons, which called for better ways to compute ballistic orbits. Another example is the development of thermodynamics. It took almost two centuries for Newtonian mechanics to come to grips with time irreversibility. Of course, every physicist had seen how stirring sugar in a cup of tea is not reversible, but until thermodynamics and statistical mechanics came along, that aspect of reality had mostly been ignored. The engineering problems posed by the invention of steam engines were what forced a deeper thinking about time irreversibility.
Perhaps current engineering challenges, from quantum computers to robotics to attempts to simulate large-scale neural interactions, will trigger a fresh way of looking at the arena of space and time, perchance finding that we have been overlooking an aspect of material reality that has been quietly with us all along.
Piet Hut, professor of astrophysics at the Institute for Advanced Study, in Princeton, is involved in the project of building GRAPEs, the world's fastest special-purpose computers.
As I prepare to head for Cambridge (the Brits' one) for the conference to mark Stephen Hawking's 60th birthday, I know that the suggestion I am just about to make will strike the great and the good who are assembling for the event as my scientific suicide note. Suggesting time does not exist is not half as dangerous for one's reputation as questioning the expansion of the universe. That is currently believed as firmly as terrestrial immobility in the happy pre-Copernican days. Yet the idea that the universe in its totality is expanding is odd to say the least. Surely things like size are relative? With respect to what can one say the universe expands?
When I put this question to the truly great astrophysicists of our day like Martin Rees, the kind of answer I get is that what is actually happening is that the intergalactic separations are increasing compared with the atomic scales. That's relative, so everything is fine. Some theoreticians give a quite different answer and refer to the famous failed attempt of Hermann Weyl in 1917 to create a genuinely scale-invariant theory of gravity and unify it with electromagnetism at the same time. That theory, beautiful though it was, never made it out of its cot. Einstein destroyed it before it was even published with the simple remark that Weyl's theory would make the spectral lines emitted by atoms depend on their prior histories, in flagrant contradiction to observation. Polite in public, Einstein privately called Weyl's theory 'geistreicher Unfug' [inspired nonsense].
Ever since that time it seems to have been agreed that, for some inscrutable reason, the quantum mechanics of atoms and elementary particles puts an absolute scale into physics. Towards the end of his life, still smarting from Einstein's rap, Weyl wrote ruefully "the facts of atomism teach us that length is not relative but absolute" and went on to bury his own cherished ambition with the words "physics can never be reduced to geometry as Descartes had hoped".
I am not sure the Cartesian dream is dead even though the current observational evidence for expansion from a Big Bang is rather impressive. The argument from quantum mechanics, which leads to the identification of the famous Planck length as an absolute unit, seems to me inconclusive. It must be premature to attempt definitive statements in the present absence of a theory of quantum gravity or quantum cosmology. And the argument about the relativity of scale being reflected in the changing ratio of the atomic dimensions to the Hubble scale is vulnerable.
To argue this last point is the purpose of my contribution, which I shall do by a much simpler example, for which, however, the principle is just the same. Consider N point particles in Euclidean space. If N is greater than three, the standard Newtonian description of this system is based on 3N + 1 numbers. The 3N (=3xN) are used to locate the particles in space, and the extra 1 is the time. For an isolated dynamical system, such as we might reasonably conjecture the universe to be, three of the numbers are actually superfluous. This is because no meaning attaches to the three coordinates that specify the position of the centre of mass. This is a consequence of the relativity principle attributed to Galileo, although it was actually first cleanly formulated by Christiaan Huygens (and then, of course, brilliantly generalized by Einstein). The remaining 3N - 2 numbers constitute an oddly heterogeneous lot. One is the time, three describe orientation in space (but how can the complete universe have an orientation?), one describes the overall scale, and the remaining 3N - 7 describe the intrinsic shape of the system. The only numbers that are not suspect are the last: the shape variables.
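The bookkeeping in this paragraph can be checked mechanically. Here is a minimal sketch (my own illustrative code, not Barbour's) that tallies the decomposition for any N of at least three particles:

```python
# Illustrative tally of the initial-value numbers for N Newtonian
# point particles, following the decomposition in the text.
def dof_breakdown(n):
    """Return the counts of numbers needed for an N-body initial state."""
    assert n >= 3
    total = 3 * n + 1                  # 3N coordinates plus the time
    remaining = total - 3              # drop the centre of mass: 3N - 2
    time, orientation, scale = 1, 3, 1
    shape = remaining - time - orientation - scale   # 3N - 7
    return {"total": total, "remaining": remaining, "shape": shape}

# For the three-body problem the shape space is two-dimensional:
print(dof_breakdown(3)["shape"])  # 2 (the two angles of the triangle)
```

For N = 3 this recovers the two shape variables (the two angles of the instantaneous triangle) discussed in the next paragraph.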
Developing further ideas first put forward in 1902 by the great French mathematician Poincaré in his Science and Hypothesis, I have been advocating for a while a dynamics of pure shape. The idea is that the instantaneous intrinsic shape of the universe and the sense in which it is changing should be enough to specify a dynamical history of the universe. Let me spell this out for the celebrated three-body problem of Newtonian celestial mechanics. In each instant, the triangle formed by the three bodies has a shape that can be specified by two angles, i.e., just two numbers. These numbers are coordinates on the space of possible shapes of the system. By the 'sense' in which the shape is changing I mean the direction of change of the shape in this two-dimensional shape space. That needs only one number to specify it. So a dynamics of pure shape, one that satisfies what I call the Poincaré criterion, should need only three essential numbers to set up initial conditions. That's the only ideal that, in Poincaré's words, would give the mind satisfaction. It's the ideal that inspired Weyl (though he attacked the problem rather differently).
Now how does Newtonian dynamics fare in the light of the Poincaré criterion? Oddly enough, despite centuries of dynamical studies, this question hardly seems to have been addressed by anyone. However, during the last year, working with some N-body specialists, I have established that Newtonian mechanics falls short of the ideal of a dynamics of pure shape by no fewer than five numbers. Seen from the rational perspective of shape, Newtonian dynamics is very complicated. This is why the study of the Moon (which forms part of the archetypal Earth-Moon-Sun three-body problem) gave Newton headaches. Among the five troublemakers (which I won't list in full or discuss here), the most obstreperous is the one that determines the scale or size. The same five troublemakers are present for all systems of N point particles for N equal to or greater than 3. Incidentally, the reason why three-body dynamics is so utterly different from two-body dynamics is that shape only enters the picture when N = 3. Most theoretical physicists get their intuition for dynamics from the study of Newtonian two-body dynamics (the Kepler problem). It's a poor guide to the real world.
The point of adding up the number of the variables that count in the initial value problem is this. The Newtonian three-body problem can be expressed perfectly well in terms of ratios. One can consider how the ratios of the individual sides to the perimeter of the triangle change during the evolution. This is analogous to following the evolution of the ratio of the atomic radii to the Hubble radius in cosmology. To see if scale truly plays no role, one must go further. One must ask: do the observable ratios change in the simplest way possible as dictated by a dynamics of pure shape, or is the evolution more complicated? That is the acid test. If it is failed, absolute scale is playing its pernicious role. The Poincaré criterion is an infallible test of purity.
Both Newtonian dynamics and Einstein's general relativity fail it. The fault is not in quantum mechanics but in the most basic structure of both theories. Scale counts. In fact, seen from this dynamical perspective Einstein's theory is truly odd. As James York, one of John Wheeler's students in Princeton, showed 30 years ago (in a beautiful piece of work that I regard as the highest point achieved to date in dynamical studies), the most illuminating way to characterize Einstein's theory is that it describes the mutual interaction of infinitely many degrees of freedom representing the pure shape of the universe with one single solitary extra variable that describes the instantaneous size of the universe (i.e., its 3-dimensional volume in the case of a closed universe). From Poincaré's perspective, this extra variable, to put it frankly, stinks, but the whole of modern cosmology hangs on it: it is used to explain the Hubble red shift.
I have stuck my neck out in good Popperian fashion. Current observations suggest I will have my head chopped off and Einstein will be vindicated. Certainly all the part of his theory to do with pure shape is philosophically highly pleasing and is supported by wonderful data. But even if true dynamical expansion is the correct explanation of the Hubble red shift, why did nature do something so unaesthetic? As I hope to show very shortly on the Los Alamos bulletin board, a dynamics of pure shape can mimic a true Hubble expansion. The fact is that Einstein's theory allows red shifts of two kinds: one is due to stretching (expansion) of space, while the other is the famous gravitational red shift that makes clocks on the Earth run slower, by a now-observable amount, than clocks in satellites. It is possible to eliminate scale from Einstein's theory, as Niall O'Murchadha and I have shown. This kills the stretching red shift but leaves the other intact. It is just possible that this could explain the Hubble red shift.
My challenge to the theoreticians is this: are you absolutely sure Einstein got it exactly right? Prove me wrong in my hunch that the universe obeys a dynamics of pure shape subtly different from Einstein's theory. If size does count, why should nature do something so puzzling to the rational mind?
will we emerge from the quantum tunnel of obscurity?"
Antony Valentini is a theoretical physicist at Imperial College in London.
When you open your eyes in the morning, you usually see what you expect to see. Often it will be your bedroom, with things where you left them before you went to sleep. What if you opened your eyes and found yourself in a steaming tropical jungle? Or a dark, cold dungeon? What a shock that would be! Why do we have expectations about what is about to happen to us? Why do we get surprised when something unexpected happens to us? More generally, why are we Intentional Beings who are always projecting our expectations into the future? How does having such expectations help us to fantasize and plan events that have not yet occurred? How do they help us to pay attention to events that are really important to us, and spare us from being overwhelmed by the blooming buzzing confusion of daily life? Without this ability, all creative thought would be impossible, and we could not imagine different possible futures for ourselves, or our hopes and fears for them. What is the difference between having a fantasy and experiencing what is really there? What is the difference between illusion and reality? What goes wrong when we lose control over our fantasies and hallucinate objects and events that are not really there? Given that vivid hallucinations are possible, especially in mental disorders like schizophrenia, how can we ever be sure that an experience is really happening and is not just a particularly vivid hallucination? If there is a fundamental difference between reality, fantasy, and illusion, then what is it?
models of how the brain controls behavior have begun to clarify how the mechanisms that enable us to learn quickly about a changing world throughout life also embody properties of expectation, intention, attention, illusion, fantasy, hallucination, and even consciousness. I never thought that during my own life such models would develop to the point that the dynamics of identified nerve cells in known anatomies could be quantitatively simulated, along with the behaviors that they control. During the last five years, ever-more precise models of such brain processes have been discovered, including detailed answers to why the cerebral cortex, which is the seat of all our higher intelligence, is organized into layers of cells that interact with each other in characteristic ways.
Although an enormous amount of work still remains to be done before such insights are fully developed, tested, and accepted, the outlines already seem clear of an emerging theory of biological intelligence, and with it, the scaffold for a more humane form of artificial intelligence. Getting a better understanding of how our minds learn about a changing world, and of how to embody their best features in more intelligent technologies, should ultimately have a transforming effect on many aspects of human civilization.
Stephen Grossberg is a Professor of Cognitive and Neural Systems, Mathematics, Psychology, and Engineering at Boston University.
will computation and communication change our everyday lives, again?"
Rodney Brooks is Director of the MIT Artificial Intelligence Laboratory, and Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical Officer of IS Robotics.
an extra-terrestrial civilization develop the same mathematics as
ours? If not, how could theirs possibly be different?"
Karl Sabbagh is a writer and television producer and author of A Rum Affair: A True Story of Botanical Fraud.
A mountain of research shows that our fears modestly correlate with reality. With images of September 11th lingering in their mind's eye, many people dread flying to Florida for Spring break, but will instead drive there with confidence, even though, mile for mile, driving during the last half of the 1990s was 37 times more dangerous than flying.
Will yesterday's safety statistics predict the future? Even if not, terrorists could have taken down 50 more planes with 60 passengers each, and if we'd kept flying we would still have ended last year safer on commercial flights than on the road. Flying may be scary, but driving the same distance should be many times scarier.
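The arithmetic behind this claim can be sketched with round numbers. The mileage and fatality figures below are my own rough, illustrative assumptions (order-of-magnitude values for late-1990s US travel), not figures from the text:

```python
# Back-of-envelope check with rough illustrative figures (assumptions):
air_passenger_miles = 5e11     # ~order of US commercial passenger-miles/year
air_deaths_typical = 100       # order of magnitude for a typical year
road_deaths_per_mile = 1.5e-8  # ~1.5 deaths per 100 million vehicle-miles

extra_deaths = 50 * 60         # 50 more downed planes, 60 passengers each
air_deaths_per_mile = (air_deaths_typical + extra_deaths) / air_passenger_miles

# Even with the extra attacks, flying's per-mile death rate stays
# below driving's under these assumptions.
print(air_deaths_per_mile < road_deaths_per_mile)  # True
```

With these inputs the hypothetical attacks raise aviation's rate to roughly 6 deaths per billion passenger-miles, still well under the road figure, which is the point of the comparison.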
Our perilous intuitions about risks lead us to spend in ways that value some lives hundreds of times more than other lives. We'll now spend tens of billions to calm our fears about flying, while subsidizing tobacco, which claims more than 400,000 lives a year.
It's perfectly normal to fear purposeful violence from those who hate us. But with our emotions now calming a bit, perhaps it's time to check our fears against facts. To be prudent is to be mindful of the realities of how humans suffer and die.
the laws of nature a form of computer code that needs and uses error
In mid-November 1999, New Yorker writer Rebecca Mead published a commentary on the candidacy of Al Gore, and in it she gave us a new word. In the old days, candidates were advised in a pseudo-Freudian frame. Clinton, in pre-Monica times, was told to emphasize his role as "strong, assertive, and a good father." Now, however, this psychobabble has been eclipsed by what she called biobabble and Mead recommended that Gore's advice might best be based on evolutionary psychology instead of Freud. In other words, it wasn't your parents who screwed you up, it was the ancient environment. Mead cites Sarah Hrdy, a primatologist, as suggesting that the ideal presidential leader would be a grandma whose grandchildren were taken away and scattered across the country in secret locations. Then the president could be expected to act on the behalf of the general good, to maximize her reproductive fitness. No wonder Gore wasn't appointed.
This is déjà vu all over again, and after the last century of biopolicy in action, can we still afford to be here? Somehow we can't get away from a fixation on the link between biology and behavior. A causal relationship was long championed by the Mendelian Darwinians of the Western World, as breeding and sterilization programs to get rid of the genes for mental deficiencies became programs to get rid of the genes for all sorts of undesirable social behaviors, and then programs to get rid of the undesirable races with the imagined objectionable social behaviors. Science finally stepped back from the abyss of human tragedy that inevitably ensued, and one result was to break this link by questioning whether human races are valid biological entities. By now, generations of biological anthropologists have denied the biology of race. Arguing that human races are socially constructed categories and not biologically defined ones, biological anthropologists have been teaching that if we must make categories for people, "ethnic group" should replace "race" in describing them.
The public has been listening. This is how the U.S. census came to combine categories that Americans base on skin color ("African-American," delineated by "one drop of blood") with categories based on language ("Latino"). However, ethnic groups revitalize the behavioral issue because ethnicity and behavior are indeed related, although not by biology but by culture. This relationship is implicitly accepted as the grounds for the profiling we have heard so much about of late, but here is the rub. Profiling has accomplished more than just making it easier to predict behaviors; it has actually revitalized the issue of biology and behavior by bringing back "race" as a substitute for "ethnic group." This might well have been an unintended consequence of using "race" and "ethnic group" interchangeably, because this usage forged a replacement link between human biology and human culture. Yet however it happened, we are back where we started, toying with the notion that human groups defined by their biology differ in their behavior.
And so, how do we get out of this? Can we? Or does the programming that comes shrink-wrapped with our state-of-the-art hardware continue to return our thinking to this point because of some past adaptive advantage it brought? It doesn't seem very advantageous right now.
Milford H. Wolpoff is Professor of Anthropology at the University of Michigan and author (with Rachel Caspari) of Race and Human Evolution: A Fatal Attraction.
different could life have been?"
are moral assertions connected with the world of facts?"
There are an increasing number of books coming out propounding the notion that beauty is real and crosses all sorts of cultural and historic lines. In their view, that which unites us as a species in the perception of beauty is way larger than what divides us.
My big question is whether, in a disjointed world in which the search for meaning is becoming ever more important, the existence of widely agreed-upon ideas of beauty will increasingly become a quick and useful horseback way of determining whether or not *any* complex system, human or technological, is coherent.
This idea draws in part from pre-industrial-age definitions of beauty that held that "Beauty is truth, truth beauty; that is all ye know on earth, and all ye need to know" (Keats, 1820), and, most important, "The most general definition of beauty ... multeity in unity" (Coleridge, 1814).
Interestingly enough, the idea that I view as increasingly dumb, "Beauty is in the eye of the beholder," dates in Bartlett's only to 1878, which is about when the trouble started, in my view.
Joel Garreau is the cultural revolution correspondent of The Washington Post and author of Edge City.
If they do exist, they could lead to interstellar travel--indeed, to instantaneous access to points at the far range of the universe. They would also confirm both general relativity and the discovery of exotic matter. But curiously little thought seems given to detecting wormholes, or theorizing about how small, stable ones might have evolved since the early universe. Several co-authors and I proposed using the Massive Compact Halo Object (MACHO) searches to reveal a special class--"negative mass" wormholes--since they would appear as sharp, two-peaked optical features, due to gravitational lensing (Physical Review D 51, pp. 3117-20, 1995). So far all the two-peaked cases found have been attributed to binary stars or companion planets, though the data fits are not very close.
Surely there could be other ways to see such exotic objects. Some thought and calculations about wormhole evolution might produce a checkable prediction, as a sidelight to an existing search. Further thought is needed about the implications that extra dimensions from string theory will have on wormholes. It seems theoretically plausible that the inflationary phase of the early universe might have made negative mass string loops framing stable Visser-type wormholes.
Perhaps wormholes do not exist. A plausible search that yielded nothing would still be a result, because we could learn something about the possibility of exotic matter. A positive result, especially detection of a wormhole we could reach with spacecraft, could change human history.
Gregory Benford is a professor of physics and astronomy at the University of California, Irvine. His most recent nonfiction is Deep Time.
Surely the right question is not what was wrong before Sept. 11th. The question to be unravelled is why on earth the western productive system has become all-dominant in the general pool of genes, or memes.
The unsolved question is what makes that system so efficient, so all-embracing, that no other system or ideology can compete in this planet's race for improving economic well-being per capita. It must be infuriating for believers in so-called alternative ways of dealing with poverty and collective happiness in this life; I mean, I am not talking about afterwards or beyond.
We know a bit about the actual mechanisms of the system, or rather, of what economists call aggregate demand. We also infer some of the things which may influence the end product. But no attention is paid to the type of intelligence which is at the roots of the system's survival.
The answer might be that it is a self-organizing system based on swarm intelligence. The nearest thing to that are the construction setups and organization schemes of social insects like ants, bees and termites: a few, very simple rules, instead of preprogramming and centralized control; the right mixture of robustness and flexibility, just like DNA; hardly any supervising body at all.
Termites of the genus Macrotermes have the added advantage of responding, with due lags, to indirect stimulation from the environment, and not only from other workers. Termites of this kind, just by looking at the death figures, would quickly halve the number of road accidents by diverting traffic towards the railways, the opposite of hominid practice.
All this has to do with genetic knowledge. As to non-genetic factors, two are of paramount importance: the separation of State from Religion, which was tantamount to a free entry ticket for everybody into the decision-making process, and the neat distinction between Theology and Philosophy (what we now call science), which opened the door to the technological revolution.
Eduardo Punset is Director and Producer of "Networks," a weekly programme of Spanish public television on Science and author of A Field Guide to Survive in the XXI st Century.
John McCarthy and I are from different generations (in the semester before McCarthy invented Lisp, he taught my dad FORTRAN, using punch cards on an old IBM) but our questions are nearly the same. McCarthy asks "how are behaviors encoded in DNA"?
Until recently, we were not in a position to answer this question. Few people would have even had the nerve to ask it. Many thought that most of the brain's basic organization arose in response to the environment. But we know that the mind of a newborn is far from a blank slate. As soon as they are born, babies can imitate facial gestures, connect what they hear with what they see, tell the difference between Dutch and Japanese, and distinguish between a picture of a scrambled face and a picture of a normal face. Nativists like Steven Pinker and Stanislas Dehaene suggest that infants are born with a language instinct and a "number sense". Since the function of our minds comes from the structure of our brains, these findings suggest that the microcircuitry of the brain is innate, largely wired up before birth. The plan for that wiring must come in part from the genes.
The DNA does not, however, provide a literal blueprint of a newborn's mind. We have only around 35,000 genes, but tens of billions of neurons. How does a relatively small set of genes combine to build a complex brain? As Richard Dawkins has put it, the DNA is more like a recipe than a blueprint. The genome doesn't provide a picture of a finished product, instead it provides a set of instructions for assembling an embryo. Those instructions govern basic developmental processes such as cell division and cell migration; it has long been known that such processes are essential to building bodies, and it now is becoming increasingly clear that the same processes shape our brains and minds as well.
There is, however, no master chef. In place of a central executive, the body relies on communication between cells, and communication between genes. Although the power of any one gene working on its own is small, the power of sets of genes working together is enormous. To take one example, the Swiss biologist Walter Gehring has shown that the gene pax-6 controls eye development in a wide range of animals, from fruit flies to mice. Pax-6 is like any other gene in that it gives instructions for building one protein, but unlike the genes for structural proteins such as keratin and collagen, the protein that pax-6 builds serves as a signal to other genes, which in turn build proteins that serve as signals to still other genes. Pax-6 is thus a "master control gene" that launches an enormous cascade, a cascade of 2,500 genes working together to build an eye. Humans that lack it lack irises; flies that lack it lack eyes altogether. The cascade launched by pax-6 is so potent that when Gehring triggered it artificially on a fruit fly's antenna, the fly grew an extra eye, right there on its antenna. As scientists begin to work out the cascades of genes that build the brain, we will finally come to understand the role of the genes in shaping the mind.
Response to Paul Davies' reply to John McCarthy
It is hard indeed to imagine that nature would endow an organism with anything as detailed as The Cambridge Star Atlas. A typical bird probably has fewer than 50,000 genes, but, as Carl Sagan famously noted, there are billions and billions of stars.
Of course, you don't need to know all the stars to navigate. Every well-trained sailor knows that Polaris marks North. A northern-hemisphere-dwelling bird known as the Indigo Bunting knows something even more subtle: it doesn't just look for the brightest star (which could be a lousy strategy on a cloudy night); instead it looks at how the stars rotate.
Cornell ecologist Stephen Emlen proved this experimentally, by raising buntings in a planetarium. One set of birds never got to see any stars, a second set saw the normal pattern of stars, and a third group saw a sneaky set of stars, in which everything rotated not around Polaris, but around Betelgeuse. The poor birds who didn't see any stars oriented themselves randomly (making it clear that they really did depend on the stars rather than a built-in compass). The birds who saw normal skies oriented themselves normally, and the ones who saw skies that rotated around Betelgeuse oriented themselves precisely as if they thought that Betelgeuse marked North. The birds weren't relying on specific sets of stars, they were relying on the stars' center of rotation.
You won't find the constellations in an indigo bunting's DNA, but you would find in their DNA the instructions for building a biological computer, one that can interpret the stars, taking the skies as its input and producing an estimated direction as its output. Just how the DNA can wire up such biological computers is my vote for the most important scientific question of the 21st century.
Gary F. Marcus is a cognitive scientist at New York University and author of The Algebraic Mind.
do we continue to act as if the universe were constructed from nouns linked by verbs, when we know it is really constructed from verbs linked by nouns?"
the universe a quantum computer?"
Seth Lloyd is an Associate Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics.
wealth be distributed?"
John Markoff covers the computer industry and technology for The New York Times and is co-author of Takedown: The Pursuit and Capture of America's Most Wanted Computer Outlaw (with Tsutomu Shimomura).
God nothing more than a sufficiently advanced extra-terrestrial
I work on the question of evolution, not as it exists in Nature, but as a formal system which enables open-ended learning. Can we understand the process in enough detail to simulate the progress of biological complexity in pure software or electronics? A phenomenon has appeared in many of my laboratory's experiments in learning, across many different domains like game playing and robotics. We have dubbed it a "Mediocre Stable State." It is an unexpected systematic equilibrium, where a collection of sub-optimal agents act together to prevent further progress. In dynamical systems, the MSS hides within cycles of forgetting that which has already been learned.
When a MSS arises, instead of achieving creativity driven by merit-based competition, progress is subverted through unspoken collusion. This occurs even in systems where agents cannot "think" but are selected by the invisible hand of a market. We know what collusion is: the two gas stations on opposite street corners fix their prices to divide the market. Hawks on both sides of a conflict work together to undermine progress towards peace. The union intimidates the pace-setter, lest he raise the work standards for everyone else. The telephone company undercapitalizes its own lucrative deployment of broadband, which might replace toll collection. Etc.
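The stability of such collusion, even among agents that cannot "think," can be illustrated with standard repeated-game arithmetic. This is a textbook grim-trigger calculation with made-up payoffs, not Pollack's model: colluding at high prices is self-sustaining whenever the players value the future enough that the one-period gain from undercutting is outweighed by the ensuing price war.

```python
def collusion_sustainable(coop, defect, punish, delta):
    """Grim-trigger test for tacit collusion in a repeated pricing game.
    coop   = per-period profit when both stations keep prices high
    defect = one-period profit from undercutting the other station
    punish = per-period profit in the permanent price war that follows
    delta  = discount factor (how much players value future profit)
    Collusion is stable iff the discounted value of cooperating forever
    beats a one-shot defection followed by punishment forever."""
    value_coop = coop / (1 - delta)
    value_defect = defect + delta * punish / (1 - delta)
    return value_coop >= value_defect

# Toy numbers: coop = 10, undercut = 15, price war = 5.
# Solving 10/(1-d) = 15 + 5d/(1-d) gives a threshold of d = 0.5:
# patient players collude, impatient ones compete.
patient = collusion_sustainable(10, 15, 5, 0.6)
impatient = collusion_sustainable(10, 15, 5, 0.4)
```

No communication or intention is needed; the equilibrium emerges from the payoff structure alone, which is the sense in which a market's invisible hand can select for collusion.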
As a scientist with many interests in High Technology, of course I know there is progress. I am witness to new discoveries, new technologies, and the march of Moore's law. Clearly, the airplane, long distance communication, and the computer are revolutionarily progressive in amplifying human commerce, communication and even conflict. But these scientific and technological advances stand in stark contrast to the utterly depressing lack of progress in human affairs.
Despite the generation of material wealth, health breakthroughs, and birth control methods which could end want and war, human social affairs are organized almost exactly the way they were 500 years ago. Human colonies seem, like ant colonies and dog packs, fixed by our genetic heritage, despite individual cognitive abilities. In fact, it is difficult to distinguish anymore between Dictatorships, Authoritarian Regimes, Monarchies, Theocracies, and Kleptocracies, or even one-party (or two-party oscillatory) democracies. When labels are removed, it looks as if authority and power are still distributed in hierarchical oligarchies, arranged regionally. Stability of the oligarchic network is maintained by complex feedback loops involving wealth, loyalty, patronage, and control of the news.
Of course, I'm not against stability itself! But when patronage and loyalty (the collusion of the political system) are rewarded more than competitive merit and excellence, progress is subverted.
The 90's really felt like progress to me, especially with visible movement towards peace in certain regions of the world and an unparalleled creative burst in our industry. But now it's like we've just been memory-bombed back to the 1950's. The government is printing money and giving it to favored industries. We are fighting an invisible dehumanized enemy. War is reported as good for the economy. Loyalty to the fatherland must be demonstrated. One Phone Company to rule us all. An expensive arms race in space. And law-breaking secret agents are the coolest characters on TV.
Haven't we been here before? Haven't we learned anything?
Jordan B. Pollack is a computer science and complex systems professor at Brandeis University who works on AI, Artificial Life, Neural Networks, Evolution, Dynamical Systems, Games, Robotics, Machine Learning, and Educational Technology.
"Can there be a science of human potential and the good life?"
My hunch is that there's not yet a science of human potential and the good life because such concerns are only just now moving from the realm of humanistic thinking to one informed by science. Much of my research lies at the interface between the humanities and brain science, as my collaborators and I address basic issues regarding how enduring questions about the quality of human life can be informed by brain science.
In my primary research, I ask: what is the neural basis of human intelligence, and how can our understanding of brain development and plasticity be used to construct more effective learning environments? With Gabrielle Starr, an English professor at NYU, and Anne Hamker here at Caltech, we are asking: what is the brain basis of aesthetic experience, and how can such an understanding be used to deepen our emotional life? With Michael Dobry, co-director of the graduate industrial design program at the Art Center College of Design, we are asking: what is the relation between design and the brain, and how can the design of daily life be brought more in line with the brain's capacities?
Ultimately, a science of human potential and of the good life must help explain how these human capacities can be actualized in contexts that confer significance and dignity to individual life.
Steven R. Quartz is Director of the Developmental Cognitive Neuroscience Laboratory at the California Institute of Technology.
"Why is religion so important to most Americans and so trivial to most American intellectuals?"
David Gelernter is a professor of computer science at Yale, chief scientist at Mirror Worlds Technologies and author of Drawing Life: Surviving the Unabomber.
Psychiatrists know that some people have pathological forms of worry. There are names for this such as obsessive-compulsive disorder and generalized anxiety disorder; and treatments, such as psychotherapy and Prozac. But what about the rest of us? What is the optimal balance between worry and contentment? Should we all be offered some kind of training to help us achieve this optimal balance? And how should we apply our growing understanding of the brain mechanisms that control these feelings?
Samuel Barondes is a professor and director of the Center for Neurobiology and Psychiatry at the University of California, San Francisco, and author of Mood Genes: Hunting for Origins of Mania and Depression.
We know that genes play an important role in the shaping of our personalities and intellects. Identical twins separated at birth (who share all their genes but not their environments) and tested as adults are strikingly similar, though far from identical, in their intellects and personalities. Identical twins reared together (who share all their genes and most of their environments) are much more similar than fraternal twins reared together (who share half their genes and most of their environments). Biological siblings (who share half their genes and most of their environments) are much more similar than adopted siblings (who share none of their genes and most of their environments).
Many people are so locked into the theory that the mind is a Blank Slate that when they hear these findings they say, "So you're saying it's all in the genes!" (as if any effect of genes must be a total effect). But the data show that genes account for only about half of the variance in personality and intelligence (25% to 75%, depending on how things are measured). That leaves around half the variance to be explained by something that is not genetic.
The next reaction is, "That means the other half of the variation must come from how we were brought up by our parents." Wrong again. Consider these findings. Identical twins separated at birth are not only similar; they are no less similar than identical twins reared together. The same is true of non-twin siblings: they are no more similar when reared together than when reared apart. Identical twins reared together, who share all their genes and most of their family environments, are only about 50% similar, not 100%. And adopted siblings are no more similar than two people plucked off the street at random. All this means that growing up in the same home with the same parents, books, TVs, guns, and so on does not make children similar.
So the variation in personality and intelligence breaks down roughly as follows: genes 50%, families 0%, something else 50%. As with Bob Dylan's Mister Jones, something is happening here but we don't know what it is.
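The bookkeeping behind "genes 50%, families 0%, something else 50%" follows from the classic Falconer decomposition of twin correlations. This is a standard textbook approximation, and the numbers below are round illustrative values of the size Pinker describes, not data from any particular study:

```python
def falconer(r_mz, r_dz):
    """Falconer decomposition of trait variance from twin correlations:
      h2 = 2 * (r_mz - r_dz)   # genes (heritability)
      c2 = 2 * r_dz - r_mz     # shared (family) environment
      e2 = 1 - r_mz            # nonshared environment, chance, error
    r_mz = correlation between identical twins reared together,
    r_dz = correlation between fraternal twins reared together."""
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Identical twins ~0.50 alike, fraternal twins ~0.25 alike:
# the decomposition yields genes ~50%, family ~0%, something else ~50%.
h2, c2, e2 = falconer(0.50, 0.25)
```

The striking empirical finding is precisely that measured r_dz tends to be about half of r_mz, which forces the shared-family term toward zero.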
Perhaps it is chance. While in the womb, the growth cone of an axon zigged rather than zagged, and the brain gels into a slightly different configuration. If so, it would have many implications that have not figured into our scientific or everyday way of thinking. One can imagine a developmental process in which millions of small chance events cancel one another out, leaving no difference in the end product. One can imagine a different process in which a chance event could derail development entirely, making a freak or monster. Neither of these happens. The development of organisms must use complex feedback loops rather than blueprints. Random events can divert the trajectory of growth, but the trajectories are confined within an envelope of functioning designs for the species defined by natural selection.
Also, what we are accustomed to thinking of as "the environment" (namely, the proportion of variance that is not genetic) may have nothing to do with the environment. If the nongenetic variance is a product of chance events in brain assembly, yet another chunk of our personalities and intellects would be "biologically determined" (though not genetic) and beyond the scope of the best-laid plans of parents and society.
Steven Pinker, research psychologist, is professor in the Department of Brain and Cognitive Sciences at MIT and author of Words and Rules.
"When will our souls be upgraded?"
The Shakespearean soul will not be able to cope with the innovations and insights of the near future. Star Wars, Star Trek, even Gibson might prove unrealistic not because of their description of hardware, but because of their description of the soul.
Frank Schirrmacher is publisher of the Frankfurter Allgemeine Zeitung and author of Die Darwin AG.
The world is caught up in a paroxysm of change. Key words: globalism, multinational corporations, ethical influences in business, explosive growth of science-based technology, fundamentalism, religion and science, junk science, alternative medicines, the rich-vs.-poor gap, who supports research, where is it done, how is it used, advances in cognition science, global warming, the disconnect between high school and college. These and other influences are undergoing drastic changes, and all will have some impact on science, mathematics and technology, and therefore on how our schools must change to produce graduates who can function in the 21st century, and indeed assume positions of leadership. Is it conceivable that the standard curriculum in science and math, crafted in 1893, will still be maintained in the 26,000 high schools of this great nation?
This is a question that obsesses me in my daily activities. I have been agonizing over it along with a few colleagues around Fermilab and the University of California, and the students, staff and trustees of the Illinois Math Science Academy (IMSA), a three-year public residential high school for gifted students that I was involved in founding some 16 years ago.
Is not our nation even more at risk now than ever? Are not our 2 million teachers even more poorly trained now, even less respected, hardly better compensated than when we were A Nation at Risk? Some 13 years ago, the collected Governors of the United States under the leadership of the President made six promises, all starting with: "By the year 2000 all students will....".
The rhetoric varies from high comedy to dark tragedy. Today, the Glenn National Commission summarizes its dismal study of science and math education in a succinct title: Before It's Too Late. Alan Greenspan mesmerizes a congressional panel on Education and the Work Force with the warning that if we do not radically improve our educational system, there is a danger to the future of the nation. Words carefully chosen. Rhetoric. We have no national strategy to address this question. In a war on ignorance and on looming changes of unknowable dimensions, shouldn't we have a strategy?
Leon M. Lederman, the director emeritus of Fermi National Accelerator Laboratory, has received the Wolf Prize in Physics (1982) and the Nobel Prize in Physics (1988). He is the author (with Dick Teresi) of The God Particle: If the Universe Is the Answer, What Is the Question?
"In view of globalization, which is
here to stay, and the events of September 11and its aftermath, which
were a shock to most of us, do we need to make fundamental changes
in our educational goals and methods?"
Because of globalization, the capacities to think across disciplines, to synthesize wide ranges of information efficiently and accurately, to deal with individuals and institutions with which one has no personal familiarity, and to adjust to the continuing biological and technological revolutions are at a far greater premium. And because of the events of September 11, we need to think much more deeply about the nature of democratic institutions and the threats to them, the role and limits of tolerance and civil liberties, the fate of scarce resources, and profound gaps across religions and cultures, just to name a few.
The time has come when we need to rethink what we teach, how we teach, what young people learn on their own, how they interact, how they relate to mass culture, etc. The question we must then ask is: Do we have to continue to be reactive, or can we proactively plan the education that is needed for our progeny in this new world?
In the world we live in, mathematicians and investors have become ever better at calculating risks, assessing outcomes, laying out possible scenarios. But real economic progress comes from taking challenges, not risks, and building something fantastic *despite* the odds, because you know you're smarter and more dedicated and more persistent, and you can gather and lead a better team, than any rational calculation would indicate. That's how new businesses get built, new markets get opened, new value gets created.
And real political, social and ethical progress, likewise, comes not just from negotiating a carefully calibrated "win-win" balance-of-power compromise, matching move for move, but from taking the lead, challenging the other guy to follow, showing the way forward. We make progress by stretching the imagination and doing things we won't regret. When you cannot predict consequences, then you need to consider your conscience and do what's right.
We need not calculation, but courage!
Esther Dyson is president of EDventure Holdings and editor of the computer-industry newsletter, Release 1.0, and author of the book, Release 2.1: A Design for Living in the Digital Age.
There are, it seems to me, just two fundamental scientific questions that, for very different reasons, we may have no possibility of answering with any certainty.
One question is so fundamental that it is arguably not a scientific question at all: it's the big how and why question of existence itself. Although there are many technical questions still to be answered, as a mathematician I find myself broadly content with science's explanation of how the physical universe, including time itself, sprang into being: the symmetry-breaking, primordial fireball we call the Big Bang, followed by the subsequent evolution into the universe we see today. But that is simply an explanation of the mechanics of the universe of our experience and perception. It leaves us with a lingering question of how, and perhaps why, the framework arose in which the Big Bang took place in the first place, whether that framework is one in which our universe is the only one there is and has ever been, or one that cycles in "universe time" (whatever that is), or maybe some kind of multiple-universe scenario.
I accept that this is not really a scientific question. Science only addresses the how of our own universe, starting just after the Big Bang. But my curiosity, both as a scientist and more generally just as a thinking person, cannot help but dwell from time to time on the biggest question of all: the question that, for those having a deep religious faith, seems to find an answer in the phrase "God made it that way." (An answer that I find even more incomprehensible in a world where millions of human beings believe that that same God authorizes his chosen emissaries to fly jet airliners full of humans into buildings full of other humans.)
My second fundamental question is clearly a genuine scientific matter. In fact, it is a technical question about evolution by natural selection. Exactly how and why did a species (namely, us) develop that has the capacity to think abstractly, that possesses language, and that can reflect on its own existence? Like the big existence issue, this is a question that has enormous significance for us, as humans. And that makes it the more frustrating that we may find ourselves unable ever to answer it with any certainty.
In my recent book The Math Gene, I summarized arguments to show that the possession of language (i.e., a symbolic communication system with a recursive grammatical structure allowing for the production and comprehension of meaningful utterances of unlimited length) and the ability for "offline" thinking (reasoning about the world in the absence of direct input from the environment and without the automatic generation of a physical response) are two sides of the same coin. Implicit in that argument is that this ability also brings with it the capacity for self-reflective, conscious thought. (I also argued that such a mental capacity also yields the potential for mathematical thought.) Thus, we are talking here about the capacity that makes us human, and in so doing makes us very different from any other species on Earth.
What makes this question particularly hard is that, at least in terms of functionality (as opposed to brain structure), the acquisition of syntactic structure (i.e., the structure that enables us to create complex sentences or to reason abstractly about the world) is an all-or-nothing event. As linguists have pointed out, you cannot have "half a grammar". True, in theory you can have grammars without, say, passive constructions, but there is no chain of gradually more complex grammars that starts with protolanguage (simple subject-predication utterances) and leads continuously to the grammatical structure that is common to all human languages. The chain has to start with a sudden jump. Although the acquisition of language was a major functional change in brain capacity, there is no reason why that jump was not the result of a tiny structural change in the brain. But what propelled the brain to reach a stage where such a change could occur? And what exactly was that small structural change? This would surely be a minor technical question about one detail, among thousands, of evolutionary history, were it not for the fact that it was this single change that made us human, that made it possible for us to ask these how and why questions, and to care about the answers.
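The "no half a grammar" point turns on recursion: once a grammar contains even one rule that embeds a category inside itself, the set of sentences it generates has no longest member. A toy context-free grammar (my own illustration, not taken from The Math Gene) makes this concrete:

```python
import random

# Toy grammar: the rule NP -> "the N that VP" re-enters VP, which can
# re-enter NP, so noun phrases can nest to any depth.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["bird"], ["star"], ["sky"]],
    "V":  [["sees"], ["follows"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol into a list of words by random rule choices."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    words = []
    for part in rng.choice(GRAMMAR[symbol]):
        words.extend(generate(part, rng))
    return words

def nested(depth):
    """Deterministic witness of unboundedness: 'the bird that sees the
    bird that sees ... the bird follows', with `depth` embeddings.
    For every depth there is a longer grammatical sentence."""
    words = ["the", "bird"]
    for _ in range(depth):
        words += ["that", "sees", "the", "bird"]
    return words + ["follows"]
```

The discontinuity is visible here: delete the single embedding rule and only a finite stock of sentence shapes remains; restore it and the language becomes infinite. There is no intermediate grammar that is "slightly recursive."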
One oft-repeated suggestion for the natural selection advantage that language provides is that it enabled the communication of more complex thoughts and ideas than was previously possible. But that suggestion falls down immediately when you realize that such communication can only arise when the brain that is doing the communicating is able to form those complex thoughts and ideas in the first place, and that capacity itself requires a brain having grammatical structure.
It seems likely that the two sides of this particular coin, thinking complex thoughts and communicating them, arose at the same time, and indeed it could have taken both aspects together to spur the development that led to their acquisition. But we are still left with the tantalizing fact that the obvious natural selection advantages this capacity provides only came into play after the capacity was in place. Just what led to and prompted that jump remains a mystery.
There has been, as you might imagine, no shortage of attempts to provide an explanation, but so far I haven't seen one that I find convincing, or even close to convincing. (I mention some in The Math Gene, and give pointers to further reading on the matter.) And even if someone produces a compelling explanation, it seems we will never know for sure. When our early ancestors died, their brains rapidly rotted away, leaving nothing but the skulls that contained them. And even if, by some fluke, we found an intact brain from some early ancestor, buried deep in the ice of a glacier somewhere, how could that help us? Dissecting an object as complex as the human brain tells us virtually nothing about what that brain did: how it thought and what it thought about.
Our higher brain functions could just have been an accident. Of course, all evolutionary changes are accidents. What I mean here is that it may be purely accidental that the structural change in the brain that gave us language and abstract, symbolic thought did in fact have that effect. It might just be, as some have suggested, that the brain grew in complexity as a device for cooling the blood, and that language and symbolic thought are mere accidental by-products of the body's need to maintain a certain temperature range. (Certainly, the brain is an extremely efficient cooling device, as illustrated by the fact that putting on a hat is an extremely efficient way of staying warm when we go skiing.) Personally, I don't buy the cooling mechanism explanation. But unless and until someone comes up with something more convincing, I see no way we can rule it out.
For all our huge success in telling the story of how life began and evolved to its present myriad of forms, it seems likely that we may never know for certain exactly what it was that gave us the one thing we value above all else, and the thing that makes us human: our minds. If there is one question I would like to answer above all others, it is this one.
Keith Devlin, mathematician, is a Senior Researcher at Stanford University, and author of The Math Gene.
"How different could minds be?"
Plato believed that human knowledge was inborn. Kant and Peirce agreed that much of knowledge had to exist prior to birth or it would be impossible to understand or learn anything. Until quite recently, psychologists were almost uniformly opposed to this notion, insisting that only process, not content, could be part of our native equipment. Piaget was typical (and highly influential) in asserting that only learning skills and inferential procedures, such as deductive rules and schemes for induction and causal analysis, were native. He also maintained that these were identical for all people with undamaged minds, and that development of such processes ended with adolescence. Content could be almost infinitely variable because these processes operate on different inputs for different people in different situations and cultures.
But recent work by psychologists provides evidence that some content is universal and native. Theories of mechanics are present by the age of three months, and highly elaborated theories of mind make their appearance before the age of four, are universal, and may also be native. Some anthropologists maintain that schemes for understanding the biological world, and even some for understanding the social world, are universal and native, as are some knowledge structures for representing the spirit world.
Psychologists and philosophers in this case as well may turn out to be wrong in assuming that all mental processes are universal, native and unalterable. Though early in the 20th century there were claims by Soviet psychologists Vygotsky and Luria that cognitive processes were historically rooted, differentiated by culture, and alterable by education, they were largely ignored. But findings have cropped up from time to time that fit these assertions. Deductive rules may be a trick learned in the process of Western-style education; rational choice procedures may be applied primarily by economists and only in very limited domains by lay people; statistical rules (Piaget's "probability schema") may be used only to a very slight extent by non-Western peoples.
Authors of this year's questions have asked how radical the differences among universes, mathematical systems, and kinds of life might be. How radical could the differences among humans be in basic knowledge structures and inferential procedures? What has to be shared or even inborn? What can be allowed to vary?
Richard Nisbett is Professor of Psychology and Co-Director of the Culture and Cognition Program at the University of Michigan and the author of numerous books.
"Can democracy survive complexity?"
As any parent of adolescents has probably experienced, life has become sufficiently complex that emotional maturity by the end of teen years is a thing of the distant past. If adolescence would only be over by 25!
More seriously, for democracy to function, representatives need to make critical value trade-offs for citizens. But how can citizens send messages about how they would like their values to drive policies when the issues are so complex that very few citizens, and not too many politicians either, really understand enough of what might happen, and at what probabilities, to know how to make decisions that do optimize the value signals from citizens?
The ultimate in irrationality is to make a decision that doesn't even advance your values because the situation is so complex that the decision makers or the public can't see clear connections between specific policies and their potential outcomes (as one who works on the global warming problem I see this conundrum all the time).
The capacity to be literate about scientific and political establishments and their disparate methods of approaching problems is a good start, but such literacy is not widespread, and the complexity of most issues leaves the public and decision-makers alike disconnected from core questions. Educational establishments often call for more content in the curriculum to redress this, but I think more understanding of the context of scientific debate, and of political and media epistemologies, will go further toward building the needed literacy.
Stephen H. Schneider is Professor in the Biological Sciences Department at Stanford University and author of Laboratory Earth.
The question of what is "real," defined here as the physical universe, acquires special subtlety from the perspective of brain and cognitive science. The question goes beyond semantic quibbling about the difference between physical stimuli and our perception of them. (Consider the old question, "If a tree falls in the forest, was a sound made if no one is present to hear it?" The answer is "no," because a sound is a sensation that must be perceived by an observer, and no observer was present to hear it.) The startling truth is that we live in a neurologically generated, virtual cosmos that we are programmed to accept as the real thing. The challenge of science is to overcome the constraints of our kludgy, neurological wetware, and understand a physical world that we know only second-hand. In fact, we must make an intuitive leap to accept the fact that there is a problem at all. Common sense and the brain that produces it evolved in the service of our hunter-gatherer ancestors, not scientists.
Sensory science provides the most obvious discrepancies between the physical world and our neurological model of it. Consider these physical-to-perceptual transformations: photons stimulate the sensations of light and color; chemicals produce tastes and odors; and pressure changes become sounds. Yet there is no "light" or "color" in the wave or photon structure of electromagnetic radiation, no "sweet" in the molecular structure of sugar, no "sound" in pressure changes, etc. The brain produces these sensory attributes. Sensation is the arbitrary experience that is correlated with a physical stimulus, but is not the physical stimulus itself. Our brain manages these psychophysical transformations in such a convincing manner that we seldom consider that we are sensing a neurological simulation, not physical reality. When do we question the physical meaning of "blue," "pain," or "B-flat"? Consider also the apparent seamlessness of the reality illusion. Using a visual metaphor, our sensory environment is like that of a person trapped in a tiny house, from which the universe must be viewed through peep-holes, one per sensory channel, such as vision, taste, hearing, etc. From this limited peep-hole vista, we synthesize a seamless, noisy, bright, flavorful, smelly, three-dimensional panorama that is an hypothesis of reality. The peep-hole predicament is invisible to us. (Some animals have peep-holes we lack, such as those associated with electric or magnetic field perception.)
Sensory examples are instructive because the nature of the psychophysical linkage is relatively clear. It's easy to imagine sensory limits of bandwidth (the size of our peep-hole), absolute sensitivity, or even modes of sensitivity (our peep-holes). Neurological limits on thinking may be as common as those on sensing, but they are more elusive: it's hard to think about what you can't think about. A good example from physics is our difficulty in understanding the space-time continuum; our intellect fails us when we move beyond the dimensions of height, width, and depth. Other evidence of our neurological reality-generator is revealed by its malfunction in illusions, hallucinations, and dreams, or in brain damage, where the illusion of reality does not simply degrade, but often splinters and fragments.
Why am I interested in this question? As a neuroscientist, I want to understand how the brain evolved, developed, and functions. As a biologist, I believe that all organisms are a theory of their environment, and it's necessary to understand that environment. As an amateur astronomer and cosmologist, I want to know the universe in which I live. To me, physics, biology, neuroscience and psychology are different approaches to a similar set of perceptual problems. It's no coincidence that Hermann von Helmholtz, a great physicist of the past century, appreciated that you can never separate the observer from the observed, and became a founder of experimental psychology. The distinction between psychology and physics is one of emphasis. The time has come for experimental psychologists to return the favor and remind physicists that they should be wary of confusing the physical world with their neurologically generated model of it. The frontiers of physics may be an exciting playground for the adventurous cognitive scientist. Ultimately, physics is a study of the behavior of physicists, scientists trying as best they can to understand the physical world. The intellectual prostheses of mathematics, computers, and instrumentation loosen but do not free our species of the constraints of its neurological heritage. We do not build random devices to detect stimuli that we cannot conceive, but build outward from a base of knowledge. A neglected triumph of science is how far we have come with so flawed an instrument as the human brain and its sensoria.
"Is there, or should we expect, a fracture in the logical basis on which people now look for a description of the nexus between particle physics and cosmology?"
Why: The chief interest of Gödel's theorem is that it is a negative answer to one of the questions in David Hilbert's celebrated list of tasks for the twentieth century, put forward at the International Mathematics Congress in Paris in 1900. Mathematicians in the succeeding century seem not to have been unduly incommoded by Gödel. But if there were a comparable theorem in fundamental physics, we should have more serious difficulties. Perhaps the circumstance that string theory is getting nowhere (not fast, but slowly) should be taken as a premonition that something is amiss. The search for a Theory of Everything (latterly gone off the boil) may be logically the wild-goose chase it most often seems. If science had to abandon the principle that to every event there is a cause (or causes), the cat would really be among the pigeons.
Moral: Gödel's theorem needs seriously to be revisited, so that the rest of us can properly appreciate what it means.
Sir John Maddox, who recently retired after serving 23 years as the editor of Nature, is a trained physicist and author of What Remains to Be Discovered: The Agenda for Science in the Next Century.
"Are space, time, and all other physical quantities only relational?"
What do we actually know about the physical world after the scientific revolution of the last century? Before the 20th century, the picture of the physical world was simple: matter formed by particles (and fields) moving in time over the stage of space, pushed and pulled by forces, according to deterministic equations, which we could write down. That's it. But the 20th century has changed all that in depth. Matter has quantum properties: particles can be delocalized, as if they were clouds, although they manifest themselves always as a single point when interacting with us. Space and time are not just curved: they are dynamical entities, very much like the electric and magnetic fields. Is there a new consistent picture of the physical world that takes all this new knowledge into account?
The most remarkable aspect of quantum theory is its relational character: elementary quantum events (such as a certain quantum particle being "here") only happen in interactions and, in a precise sense, are only "real" with respect to, or in relation to, another system. Indeed, I can see the particle "here," but at the same time the particle and I can be in a quantum superposition in which the particle has no precise localization. Thus, a quantum particle is not just "here," but only "here for me."
On the other hand, the most remarkable property of general relativity is that localization in space and time is not absolutely defined. Things are localized only with respect to other things. In fact, the spacetime coordinates have no meaning in general relativity; only quantities that are independent of these coordinates (such as relative localizations) have physical meaning.
Now the question is: are quantum relationalism (quantum systems have definite properties only in interacting with other systems) and general relativistic relationalism (position is only relative) connected to each other? Are they indeed two aspects of the same relationalism?
There is clearly some deep connection. In order to interact quantum mechanically, two systems must be close in space and time, and, vice versa, spacetime contiguity can only be checked via a quantum interaction. So, is spacetime perhaps just the geography of the net of quantum interactions? Is the world just made of relations?
We are far from understanding all this, and the current highly speculative physical theories haven't even started addressing these kinds of questions. But until we address these questions, which for me are the interesting ones in physics, the great revolution of the 20th century is not over. We have lost the old picture of the physical world, but we do not yet have a credible new one.
Carlo Rovelli is a theoretical physicist at the Centre de Physique Théorique in Marseille, France.
"Why bother? Or: Why do we go further and explore new stuff?"
Human skills enable an individual to do something with less physiological effort. If you are good at skiing (and I am not), it takes less energy to climb that mountain. One can even argue forcefully that a mental "understanding" of a phenomenon allows one to perceive it with a smaller increase in brain metabolism.
One could argue that we explore new phenomena to produce skilful insights that will allow us to revisit the same phenomena in the future with less effort. But is that really enough? Can such a functional explanation of creativity, as an initial effort that enables a future reduction of effort, really capture the reasons people devote themselves to lifelong efforts to understand the world of ants or the intricacy of ski slopes?
It seems that president John F. Kennedy captured an essential element in creative efforts when he, in his famous speech at Rice University in 1962, argued for the decision to create the Apollo program: "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills..."
Indeed, the most important outcome of Apollo (offering earthlings an outside view of their planet, visualizing the vulnerability of the Earth and its biosphere) was an unintended result of making a major effort. It did pay off to do something hard. Somehow we know that doing something hard, rather than something easy, is fruitful. But we also know that doing it the hardest way possible (as when I ski) is not a very efficient way of getting anywhere.
We want to be efficient, but also to do difficult things. Why? In a sense this is a rephrasing of Brian Eno's question in Edge 11: "Why Culture?" Many different approaches can be taken, involving disciplines such as economics, anthropology, psychology, evolutionary biology, and so on.
An idea currently explored in both economics and evolutionary biology could be relevant: costly signals. They answer the question: How does one advertise one's own hidden qualities (in the genes or in the bank) in a trustworthy way? By giving a signal that is very costly to produce. One has to have a strong bank account, a very good physiology (and hence good genes), or a strong national R&D programme to do costly things. The more difficult, the better the advertising.
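The logic of costly signaling can be made concrete with a toy payoff model (all numbers invented for illustration; this is a minimal sketch of the idea, not a model from the essay): a signal stays honest when only high-quality senders come out ahead after paying for it.

```python
# Toy model of costly signaling: a fixed display cost weighs more
# heavily on low-quality senders, so only the strong can afford it.

def net_benefit(quality: float, signal_cost: float, reward: float = 10.0) -> float:
    """Payoff of signaling: a fixed reward (a mate, prestige, a market)
    minus a cost that is effectively heavier for low-quality senders."""
    return reward - signal_cost / quality

strong, weak = 2.0, 0.5   # hidden qualities (hypothetical values)
cost = 8.0                # an expensive display: an Apollo-sized effort

assert net_benefit(strong, cost) > 0   # the strong profit from signaling
assert net_benefit(weak, cost) < 0     # the weak cannot afford the display
```

Because the weak sender's payoff is negative, faking the signal does not pay, which is exactly why the display is trustworthy advertising.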
Perhaps we bother because we want to show that we are strong and worthy of mating? Culture is all about doing something that is so difficult that only a healthy individual or society could do it.
If so, it's not at all about reducing the effort; it's all about expanding the effort.
Tor Nørretranders is a science writer, consultant, lecturer and organizer based in Copenhagen, Denmark and author of The User Illusion: Cutting Consciousness Down to Size.
No offense against another human being inflicts greater costs than killing. Simply put, it's bad to be dead. Nonetheless, hundreds of thousands are murdered every year; tens of millions over the past century. From baby killing to genocide, from Susan Smith to Osama bin Laden, people in every culture experience the urge to kill. Some act on it. They do so despite legal injunctions, religious prohibitions, cultural interdictions, the risk of retaliation, and the threat of spending life in a cage. Dead bodies, a trail of grief, and a thirst for vengeance lie in their wake.
Many believe that they already know the answer to the question of cause. But existing theories woefully fail to explain why people murder. Theories that invoke violent media messages, for example, cannot explain the high rates of homicide among tribal cultures that lack media access. Theories that invoke uniquely modern causes cannot explain the paleontological record: ancient skulls and skeletons that contain arrow tips, stone projectiles, and brutally inflicted fractures. The stones and bones of the past leave no doubt that murder has been a persistent problem of social living throughout human history. We need to understand why.
David M. Buss is Professor of Psychology at the University of Texas, Austin, and author of Evolutionary Psychology: The New Science of the Mind.
"What is the difference between the sigmundoscope and the sigmoidoscope? Less cryptically, how is everyday narrative logic different from extensional mathematical logic?"
In everyday "story logic," how "we," the story-tellers, characterize an event or person is crucial. If a man touches his hand to his eyebrow, for example, we may see this as an indication he has a headache. We may also see the gesture as a signal from a baseball coach to the batter. Then again, we may infer that the man is trying to hide his anxiety by appearing nonchalant, that it is simply a habit of his, that he is worried about getting dust in his eye, or indefinitely many other things depending on indefinitely many perspectives we might have and on the indefinitely many human contexts in which we might find ourselves. A similar open-endedness characterizes the use of probability and statistics in surveys and studies.
Furthermore, unlike mathematical logic, story logic does not allow for substitutions. In mathematical contexts, for example, the number 3 can always be substituted for the square root of 9, or for the largest whole number smaller than the constant π, without affecting the truth of the statement in which it appears. By contrast, although Lois Lane knows that Superman can fly, and even though Superman equals Clark Kent, the substitution of one for the other can't be made. Oedipus is attracted to the woman Jocasta, not to the extensionally equivalent person who is his mother. In the impersonal realm of mathematics, one's ignorance of or one's attitude toward some entity does not affect the validity of a proof involving it or the allowability of substituting equals for equals.
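The contrast above can be sketched in a few lines of code (the belief table is an invented illustration, not anything from the essay): arithmetic contexts are extensional, so substituting equals for equals preserves truth, while a "story" context attaches truth to the description used, so the same substitution fails.

```python
import math

# Extensional context: equals may be substituted for equals.
x = math.isqrt(9)                     # the square root of 9, which equals 3
assert (3 + 1 == 4) == (x + 1 == 4)   # substituting x for 3 preserves truth

# Intensional ("story logic") context: what Lois believes is keyed to the
# description, not to the person it denotes, so substitution fails even
# though "Superman" and "Clark Kent" name the same individual.
lois_believes_can_fly = {"Superman": True, "Clark Kent": False}
assert lois_believes_can_fly["Superman"] != lois_believes_can_fly["Clark Kent"]
```

The dict is deliberately crude: it models a mind that stores sentences about names rather than facts about referents, which is precisely why story logic blocks the substitutions mathematics allows.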
John Allen Paulos is Professor of Mathematics at Temple University, adjunct professor of journalism at Columbia University, and author of Once Upon a Number.
"How much can we expect the social sciences to help build a just and free society?"
Most people understand the social relationships and institutions in which they participate well enough to get the most (which often is not much) out of their participation. The social sciences are, for the most part, a systematized, de-parochialized, professionalized version of this competence that we all have, to a greater or smaller extent, as social actors. As such, the social sciences help us improve our understanding of the social world; in particular, they help us better understand the points of view of other actors in the same society and of people in other societies. But this enhanced understanding is still shallow, and strikingly weak in predictive power. As far as informing political action goes, it is little more than serious journalism without the time pressure. The events of last September provide a telling illustration: What did social scientists have to contribute to our understanding of the events? Did interpretive anthropologists provide a much deeper understanding of the fundamentalist terrorists? Did sociologists give well-argued and unexpected predictions as to how the target societies would react? No, the contribution of social scientists was, to say the least, modest. Still, the role of the social sciences as enhancers of common-sense social understanding may be modest, but it is crucial in helping people overcome prejudices and biases, and become better citizens not just of their own country but of the world. Immodest social scientists who presume to say what is to be done should not be easily believed.
But might, in the future, a more scientific social science emerge (probably alongside, rather than in place of, the more common-sense social sciences that we know)? Its role would not be to ground political action (it is not the role of science to say what is good and what is bad) but to inform it well enough that more daring, long-sighted political action could be undertaken: action that might help build a more just and freer society without being all too likely to have its unforeseen consequences compromise its initial goals, as happened with communism. This is my question. I don't know the answer.
Dan Sperber is a social and cognitive scientist at the French Centre National de la Recherche Scientifique (CNRS) in Paris and author, with Deirdre Wilson, of Relevance: Communication and Cognition.
W. Daniel Hillis is Chairman and Chief Technology Officer of Applied Minds, Inc., a research and development company and author of The Pattern on the Stone.
Brian Eno, an artist, makes and produces records. He has produced U2 (including this year's award-winning All That You Can't Leave Behind), Talking Heads, and Devo, and has collaborated with David Bowie, John Cale, and Laurie Anderson.
"Will unification ever come to a stop?"
Anton Zeilinger is a Professor of Physics at the University of Vienna whose work in quantum teleportation has received worldwide attention.
"How do women's minds work?"
In reality it is, of course, the other way around. Nature has played a cruel trick on men rather than on women. Men's minds, for the most part, work along a single longitudinal path: A triggers B, B triggers C and so forth. They consider themselves to be smart, because they are barely able to grasp causal chains. Men's intelligence is expressed by the extent to which they can estimate or predict a sequence of steps in a chain reaction. Like chess players, some men can think one or two steps ahead, some seven or eight. Alternatives to their one-dimensional, allegedly "logical" path of thinking are beyond their imagination.
Women's minds, on the other hand, are much more complex. Women embrace several different natures in their personality. In addition to men's straightforward "logical" way of thinking, they (according to C. G. Jung) incorporate a personification of the unconscious counter-sexual image, in other words the inner man in a woman. This archetype encompasses a number of instincts that are quite useful in supplementing a woman's emotions. In addition, women's minds embrace a third governing force, the so-called "shadow," a counter-image of their true character. The working-type woman, for instance, can identify with the feelings of a spoiled tootsie. A woman who has run expeditions in Ethiopia, Somalia, and Afghanistan all her life can suddenly become flustered at a run in a nylon stocking. What makes women so unfathomable to men is that they can leap in a split second from one level of their personality to another. As a consequence, the charming lady you are flirting with suddenly turns into a sharp-tongued businesswoman, only to react like a helpless college girl the next moment. It would be asking too much of a man's mind, being merely a simplified, incomplete version of a woman's mind, to comprehend this kind of complexity in the opposite gender.
Of course, one might argue that men also incorporate an anima and a shadow in their personality. So what? The effect of all three personalities is still the same: a unilateral drive towards ambition, competition, and ultimately triumph. Let's face it: we men are pathetically simple-minded. How simple-minded? Swiss author Milena Moser knows the answer. She lists the only three things men need to be happy: admiration, oral sex, and freshly pressed orange juice.
I am convinced that there is a predominant driving force behind cultural progress, and that this driving force is the speed of communications. The ancestors of modern humans lived in caves and hunted large mammals on essentially the same cultural level for over two million years. The entire history of civilization spans only the past 10,000 years.
In my opinion it began when, at the end of the Ice Age, sea level rose, thereby drowning estuaries and creating innumerable natural harbours. A high sea level invited people to climb aboard boats and cross the sea, thus accelerating the exchange of information between different peoples. Knowledge about new discoveries and achievements spread more rapidly and the advance of culture received its first major boost.
Since then, the acceleration of information exchange has driven cultural progress. The wheel, sailing ships, trains, planes, telephones, and fax machines followed suit. Finally, the invention of the World Wide Web caused one of the biggest hysterias in the world economy. Today, we can transfer five thousand copies of the entire Encyclopaedia Britannica from (almost) any place on earth to (almost) any other place on earth in only one second, at the maximum possible speed: the speed of light.
After ten thousand years of cultural progress mankind is now reaching the point at which any amount of information can be transferred to any place at the speed of light. The increasing speed of communication, the driving force behind cultural progress since the introduction of husbandry, suddenly becomes irrelevant.
What will happen to progress as this threshold is crossed?
Eberhard Zangger is a geoarchaeologist who uncovered what may be the most plausible explanation in 2,500 years for the legend of lost Atlantis, and the author of The Future of the Past.
Much ado has been made lately over the problems of the PC "desktop metaphor," the system of folders and icons included in Macintosh and Windows PCs. Critics of the desktop rightly point out that today's PC users encounter much more information than in the 1980s, when the desktop was first introduced. While I understand these criticisms, I question whether the desktop is really dead; in other words, whether the solution really lies in building a better desktop. Instead, I think that the real issue is the increased information, not the interface between it and the user.
Some technologists are ready to discard the old desktop. Last month MIT's Technology Review ran a piece on new software attempting to bypass the desktop metaphor. None of the tools are very convincing. Scopeware, a software package from Mirror Worlds Technologies (founded by David Gelernter, an Edge contributor), essentially removes all file hierarchy by showing files sorted by creation date. While the tool has some nice search features, it's unclear how removing all file hierarchy is an improvement over today's desktop. Other technologies in the article include a two-dimensional graphical "map" of the file system and a 3-D navigable space. These programs try to solve the problem of a cluttered desktop by presenting a new metaphor that could become just as cluttered.
To be sure, there are advances to be made in the tools. Using Microsoft Windows, even briefly, reveals so many interface flaws that it makes me cringe. But fixing these myriad flaws will not address the central issue, which is the tsunami of information flooding into users' PCs. It is the user, not the tool, that should be the focus.
The Wall Street Journal recently interviewed several Americans to inquire about their personal strategies for dealing with their e-mail. Receiving 50 to 150 incoming messages per day, these PC users described the methods they use to stay on top of their information and remain effective in their jobs.
What's interesting about this article is that the Journal recognized e-mail use as a personal activity. Many other business activities, like using approved software or submitting timesheets, may be closely regulated by the IT department; e-mail is not. Each user in the article has become conscious of his or her information flow and has created a system to manage it, using the software (albeit flawed) at his or her disposal. The story is about personal needs first, tools second. The industry's response to this problem should be the same. If we could just teach more users to use their tools better, we'd be in far better shape than if we simply churned out yet more complex software.
I would be happy to be proven wrong. Gelernter's Scopeware, for example, could turn out to be a revolutionary advance in curing information anxiety. My guess, however, is that even the best tools will fall short of a cure. We may need a combined strategy of better tools and greater education of users about the nature of a world awash in information. To be effective in coming years, users must assume greater responsibility for their own information management.
Of course, there are problems with that proposition. For one, new desktop metaphors, like the 3-D software, are sexy and make for interesting press clips. Educating users is decidedly dull. What's worse, there is no easy business plan for educating users en masse in more efficient ways to organize their information. Making a tool that promises to help is so much more profitable. But tools alone won't save us. If all we can do for users is give them a newer, flashier, more distracting interface, then the desktop may indeed be dead forever.
Mark Hurst is the founder of Creative Good, Inc., a leading user experience consulting firm.
"Does life on Earth have a future?"
"Is it possible to know what is good and what is evil?"
That entailed, for example, the conclusion that metaphysical knowledge (knowledge of Absolute Reality, or God, as It, He or She exists independently of our perceptual and conceptual apparatus) is unattainable. (Nietzsche called this the "death of God.") But that was not an insuperable problem, because metaphysics was immediately replaced by physics, which had far greater cognitive power to predict, explain and control the phenomena being cognized anyway.
What has been an insuperable problem, up to now, has been the unavailability of any cognitively adequate replacement for ethics. Moral knowledge is unattainable because there is, in principle and by definition, no conceivable moral hypothesis that could possibly be proved or disproved by means of any conceivable type of empirical data, test or experiment. That is true, among other reasons, because moral statements do not take the form of empirically testable hypotheses, or hypothetical imperatives ("If you want X, then you can get it by doing Y," with no guidance as to whether you should want X in the first place). Moral statements take the form of value judgments and categorical imperatives (i.e., commandments or orders as to what you should do or want). Commandments can never be true or false, so they cannot communicate knowledge. And value judgments are incapable of communicating knowledge about the external world; the only things they can express are subjective wishes, tastes and preferences, which are, from a logical and epistemological point of view, completely non-rational and arbitrary matters of whim, about which we can only say De gustibus non disputandum est.
Of course, it has always been known that beauty exists in the eye of the beholder. What had not been seen so clearly, until the scientific revolution, was that the same was true of good and evil. The first modern personality, Hamlet, expressed this clearly in 1601 when he said "There is nothing either good or bad but thinking makes it so." I.e., good and evil are words for subjective preferences, sentiments of approval or disapproval, that exist only in the mind of the beholder. They do not exist as objective realities whose validity can be known or tested, proved or disproved. And Hamlet's fate shows how confused, paralyzed, violent and self-destructive people can become when they have recognized that it is impossible to know what one "should" do, but have not yet discovered how to replace that question with one that is answerable.
Thus, it is not only God (and the Devil) that are dead; more importantly, so are Good and Evil, the abstract philosophical concepts of which the former are the concrete mythological and theological incarnations. As Ivan Karamazov put it (speaking for those for whom God is the only credible and legitimate source of moral authority), "without God anything is possible, everything is permitted." But even those who, following Kant or Rawls, would like to place their faith in pure (a priori) reason, and would trust it to take the place of God as the source of moral knowledge, are doomed to disappointment and ignorance; for even Kant made it clear that moral knowledge was unattainable. As he put it, "I must destroy knowledge in order to make room for faith (Glaube, also translatable as "belief")." That is, even the most dedicated champion of pure (a priori) practical reason as the source of moral knowledge had to admit that moral knowledge is unattainable; all he could put in its place was faith. And by the time he wrote those words, the Age of Faith had long since been dead and buried. Indeed, the whole history of modern science was one long demonstration that knowledge was attainable when, and only when, one replaced faith with its opposite, the attitude of universal doubt, and refused to believe any proposition that had not been tested against empirical evidence.
One inescapable consequence that followed from all this was the loss of credibility of the traditional sources of moral authority (God and pure reason). Why did that create such a crisis that most of human history since the 17th century has been a series of attempts to come to terms with it, both in theory and in practice? Because human nature abhors a cognitive vacuum, especially in the sphere of practical reason. For without some way of answering the questions that practical reason asks, concerning how to live and what to do, humans are totally disoriented and without direction, a condition that is intolerable and panic-inducing. Once they have discovered the cognitive inadequacies of the moral way of formulating those questions and answers, as they have to an increasing extent since the scientific revolution of the 17th century, and have not yet discovered how to progress to a more cognitively adequate form of practical reason, many people will regress to a more intellectually primitive and politically reactionary set of questions and answers. In the 20th century these took the form of political totalitarianism, which led to genocide; more recently, they have taken the form of religious fundamentalism, which has increasingly led to apocalyptic terrorism. Given the existence of weapons of mass destruction, it hardly needs to be stressed how much both of these ideologies potentially threaten the survival of our species.
These political/ideological movements have been widely, and correctly, interpreted as rebellions or reactions against modernity (whether modernity is conceived of as Western civilization, Jewish science, modern technology, religious unbelief, freedom to express any opinion, or whatever), though usually without specifying what it is about modernity that threatens our very existence and survival. The deepest threat, I would maintain, is cognitive chaos in the realm of practical reason, and thus nihilism in the realm of morality, anomie in the realm of law, and anarchy in the realm of politics. The paradox is that the political movements that have been most widely interpreted as nihilistic and "evil" (Nazi, Stalinist and theocratic totalitarianism and their sequelae, genocide and terrorism) in fact originated as desperate (and misguided) attempts to ward off nihilism and what their adherents consider "evil." To them, the greatest evil is modernity, or, in other words, the modern scientific mentality, which replaces certainty with doubt, dogmatism with skepticism, authority with evidence, faith with agnosticism, coercion with persuasion, violence with words and ideas, and hierarchy with democracy and equality of opportunity, all of which fills them with overwhelming dread and terror, amounting to a kind of existential or moral panic.
In fact, to the totalitarian/fundamentalist mind, modernity not only represents absolute evil; it represents something even worse than that, namely, the total absence and delegitimation of any standards of good and evil whatsoever: the total death of good and evil, a state of complete anomie and nihilism. For without knowing what is good and evil, how can one know what to do? And without knowing what to do, how can one live (not only biologically, but even mentally)? How can one maintain any mental, emotional, social, cultural or political coherence and order? As Kenneth Tynan remarked, "Hell is not the place of evil; rather, Hell is the absence of any standards at all." That condition is so intolerable to humans that many will regress to even the most irrational and destructive ideology if they cannot find some more epistemologically powerful cognitive structure with which to replace the old moral way of thinking, once its cognitive inadequacy has been so deeply perceived that its credibility has been irreversibly destroyed.
Cognitive growth occurs by finding better and better answers to existing questions. Cognitive development occurs only when one begins to ask a new and different set of questions. We do this only when we notice that our current questions are meaningless because they are unanswerable, so that they need to be replaced with a different set of questions that can be answered. By this point, in the 21st century, we now realize that it is impossible to answer the moral (and legal and political) questions, "How should we live and what ought we to do?" The only questions that are meaningful, in that they can lead to answers that possess cognitive content or knowledge, are the questions "How can we live?" and, in particular, "What biological, psychological and social forces, processes and behavior patterns promote, protect and preserve life, and which ones cause death?" For that question can be answered, by means of empirical investigation, as to the causes and prevention of the extinction of species (including our own, as by nuclear holocaust or unrestrained devastation of our natural environment), the extermination of social groups (through epidemics of collective violence, such as war, genocide, poverty, famine, etc.), and the deaths of individuals (by means of homicide, suicide, obesity, alcoholism, etc.). In other words, the only possible replacement for ethics or morality that is progressive rather than regressive is the human sciences: human biology, psychology and psychiatry, and the social sciences.
Unfortunately, the modern human sciences, unlike the natural sciences, had not yet been invented when the scientific revolution of the 17th century first showed that moral knowledge was unattainable. And even today, the ability of the human sciences to predict, explain and control the objects of their scrutiny (human behavior) is extremely limited, whether compared with that which the natural sciences possess with respect to their objects of study, or with the degree of cognitive power that the human sciences will need to attain if we are to gain the ability to avert the headlong rush to species-wide self-destruction that we currently seem to be embarked upon. In other words, to paraphrase Winston Churchill's remark about democracy, the human sciences are the worst (the least cognitively adequate) of all possible forms of practical reason except for all the others (such as moralism, fundamentalism and totalitarianism)! What that implies is that nothing is more important for the continued survival of the human species than a stupendously increased effort to make progress in the further development of the human sciences, so as to increase our understanding of the causes of the whole range of our own behaviors, from life-threatening (violent) to life-enhancing.
James Gilligan has been on the faculty of the Department of Psychiatry at the Harvard Medical School since 1966. He is the author of Violence: Reflections on a National Epidemic.
"Are space and time fundamental concepts, or are they approximations to other, more subtle ideas that still await our discovery?"
Brian Greene is a professor of physics and of mathematics at Columbia University and author of The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for an Ultimate Theory.
"Are we ever going to be humble enough to assume that we are mere animals, like crabs, penguins, and chimpanzees, and not the chosen protégés of this or that God?"
But there's a problem. The specific set of particles that comprises my body and brain is completely different from the atoms and molecules that comprised me only a short while (on the order of weeks) ago. We know that most of our cells turn over in a matter of weeks. Even those that persist longer (e.g., neurons) nonetheless change their component molecules in a matter of weeks.
So I am a completely different set of stuff than I was a month ago. All that persists is the pattern of organization of that stuff. The pattern changes also, but slowly and in a continuum from my past self. From this perspective I am rather like the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules (of water) change every millisecond, but the pattern persists for hours or even years.
So, perhaps we should say I am a pattern of matter and energy that persists in time.
But there is a problem here as well. We will ultimately be able to scan and copy this pattern in at least sufficient detail to replicate my body and brain to a sufficiently high degree of accuracy that the copy is indistinguishable from the original (i.e., the copy could pass a Ray Kurzweil Turing test). I won't repeat all the arguments for this here, but I describe this scenario in a number of documents, including the essay "The Law of Accelerating Returns."
The copy, therefore, will share my pattern. One might counter that we may not get every detail correct. But if that is true, then such an attempt would not constitute a proper copy. As time goes on, our ability to create a neural and body copy will increase in resolution and accuracy at the same exponential pace that pertains to all information-based technologies. We ultimately will be able to capture and recreate my pattern of salient neural and physical details to any desired degree of accuracy.
Although the copy shares my pattern, it would be hard to say that the copy is me, because I would (or could) still be here. You could even scan and copy me while I was sleeping. If you come to me in the morning and say, "Good news, Ray, we've successfully reinstantiated you into a more durable substrate, so we won't be needing your old body and brain anymore," I may beg to differ.
If you do the thought experiment, it's clear that the copy may look and act just like me, but it's nonetheless not me, because I may not even know that he was created. Although he would have all my memories and recall having been me, from the point in time of his creation Ray 2 would have his own unique experiences, and his reality would begin to diverge from mine.
Now let's pursue this train of thought a bit further, and you will see where the dilemma comes in. If we copy me and then destroy the original, then that's the end of me, because, as we concluded above, the copy is not me. Since the copy will do a convincing job of impersonating me, no one may know the difference, but it's nonetheless the end of me. However, this scenario is entirely equivalent to one in which I am replaced gradually. In the case of gradual replacement, there is no simultaneous old me and new me, but at the end of the gradual replacement process you have the equivalent of the new me, and no old me. So gradual replacement also means the end of me.
However, as I pointed out at the beginning of this question, it is the case that I am in fact being continually replaced. And, by the way, it's not so gradual but a rather rapid process. As we concluded, all that persists is my pattern. But the thought experiment above shows that gradual replacement means the end of me even if my pattern is preserved. So am I constantly being replaced by someone else who just seems a lot like me a few moments earlier?
So, again, who am I? It's the ultimate ontological question. We often refer to this question as the issue of consciousness. I have consciously (no pun intended) phrased the issue entirely in the first person because that is the nature of the issue. It is not a third person question. So my question is not "Who is John Brockman?" although John may ask this question himself.
When people speak of consciousness, they often slip into issues of behavioral and neurological correlates of consciousness (e.g., whether or not an entity can be self-reflective), but these are third person (i.e., objective) issues and do not represent what David Chalmers calls the "hard problem" of consciousness.
The question of whether or not an entity is conscious is apparent only to that entity itself. The difference between neurological correlates of consciousness (e.g., intelligent behavior) and the ontological reality of consciousness is the difference between objective (i.e., third person) and subjective (i.e., first person) reality. For this reason, we are unable to propose an objective consciousness detector that does not have philosophical assumptions built into it.
I do say that we (humans) will come to accept that nonbiological entities are conscious, because ultimately they will have all the subtle cues that humans currently possess and that we associate with emotional and other subjective experiences. But that's a political and psychological prediction, not an observation that we will be able to scientifically verify. We do assume that other humans are conscious, but this is an assumption, not something we can objectively demonstrate.
I will acknowledge that John Brockman did seem conscious to me when he interviewed me, but I should not be too quick to accept this impression. Perhaps I am really living in a simulation, and John was part of the simulation. Or perhaps it's only my memories that exist, and the actual experience never took place. Or maybe I am only now experiencing the sensation of recalling apparent memories of having met John, but neither the experience nor the memories really exist. Well, you see the problem.
Ray Kurzweil was the principal developer of the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first CCD flat-bed scanner, among other major inventions, and author of The Age of Spiritual Machines.
"Why is life so full of suffering?"
"How do we scale up the number of quality human relationships one person can sustain by many orders of magnitude? In an increasingly connected world, how does one person interact with a hundred thousand, a million or even a billion people?"
Our one fixed resource is time: human attention. As we become increasingly networked in the technological sense, we also become more networked in the social sense.
As our social networks scale up, we move more and more of our interactions to the technological sphere. We can have many more telephone interactions than we can have hand-written letter interactions. When we move from telephone to e-mail, the number of interactions between people goes up even more dramatically.
Then we pair our e-mail interactions with a personal Web site, and we start moving our personalities into the technology net, as a way of automating and scaling up the number of relationships even further.
We end up with personal CRM systems to handle our increased interaction load, and then add interfaces from our technology net to our human forms. These interfaces will develop from current-day Palm Pilots and BlackBerrys to heads-up-display-style interfaces in glasses, and eventually to retinal and neuronal interfaces.
"Hi Jerry, Ahh.., we met back in 1989, May 14th at 7pm, and since then we've exchanged 187 e-mails and 39 phone calls. I hope your cousin's daughter Gina had a wonderful graduation yesterday."
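The kind of automated recall in that greeting can be sketched as a toy contact record. This is a minimal illustration, not any real CRM product's API; the figures come from the example above, and the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """One record in a hypothetical personal-CRM system."""
    name: str
    first_met: str   # when the relationship began
    emails: int = 0  # messages exchanged so far
    calls: int = 0

    def greeting(self) -> str:
        # Assemble the automated, relationship-history-aware greeting
        # described in the essay.
        return (f"Hi {self.name}, we met back on {self.first_met}, and since "
                f"then we've exchanged {self.emails} e-mails and "
                f"{self.calls} phone calls.")

jerry = Contact("Jerry", "May 14th, 1989", emails=187, calls=39)
print(jerry.greeting())
```

A real system would of course populate these counters automatically from mail and call logs; the point is only that once the history is structured data, the "intimacy" becomes a lookup.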
The whole range of interactions becomes organized. Introductions from one person to another, and rating systems become automated.
Currently many people run into barriers as their personal networks approach the range of thousands of people. Soon they will move to the tens of thousands, to the millions and beyond.
With these trends, the friction costs of personal introductions go down, and consequently the value of quality measurement and gatekeeping goes up dramatically. As the depth of knowledge in a relationship increases, the threshold point at which you "really know someone" increases also. It's an arms race of intimacy.
Adrian Scott is founder of Ryze, a business networking community. He is a founding investor in Napster, got his Ph.D. in nonlinear optimization at age 20, and has sung with Placido Domingo and performed with the NYC Ballet.
postfeminism, what's next?"
Tracy Quan is a member of the International Network of Sex Work Projects. She is the author of the novel, Diary of a Manhattan Call Girl.
A language dies when there is nobody left to speak it.
By the best estimates, around 6,000 languages are alive in the world today. Half of them, perhaps more, will die in the next century, which is 1,200 months from now. This means that somewhere in the world, a language dies about every two weeks.
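A back-of-the-envelope calculation confirms the rate quoted above (assuming a rough 30-day month):

```python
# Check the cited extinction rate: 6,000 living languages,
# half of them gone within a century.
languages_alive = 6000
dying = languages_alive // 2   # "half of them, perhaps more"
months_in_century = 100 * 12   # 1,200 months

months_per_death = months_in_century / dying  # 0.4 months per language
days_per_death = months_per_death * 30        # ~12 days

print(f"roughly one language death every {days_per_death:.0f} days")
```

Twelve days is indeed "about every two weeks," and that is the optimistic half-loss scenario; if more than half die, the interval shrinks further.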
Why do languages die? There are many reasons: natural disasters (for instance, an entire village of speakers killed in a flood or wiped out in a disease epidemic) and social assimilation (speakers cease using their native language and adopt a more popular one in response to economic, cultural, or political pressures). Genocide, colonization, and forced language extinction are also causes.
The belief that language diversity is healthy and necessary is often compared to the idea of biodiversity: that a wide array of living species is essential to the planet's well-being.
Michael E. Krauss, of the University of Alaska's Alaska Native
Language Center, extends this analogy to define three stages of
language health in The World's Languages in Crisis:
endangered: "languages which, though now still being learned by children, will if the present conditions continue cease to be learned by children during the coming century," and
safe: languages with "official state support and very large numbers of speakers."
If we measure the value of a language simply by the number of people it allows us to communicate with, bigger would always be better, and the death of an endangered language would be of no consequence to the rest of the world. If 128 million people speak French, and roughly 100 people speak Pomo, a nearly extinct indigenous language in California, then French is more than a million times more valuable than Pomo.
But language is not math. Language is the embodiment of cultural identity. Language is nuance, context, place, history, ancestry. Language is an animate being; it evolves, it adapts, it grows. Language is the unique neural fingerprint of a people. Language is a living code that provides structure for human experience. Language is intellectual DNA.
Does diversity of thought and culture matter?
Does human diversity matter?
Then, language matters.
Xeni Jardin is a freelance journalist and conference manager.
Here is a paradox for cognitive neuroscientists: We're trying to understand the brain with the very mental resources that are afforded by our brains. We hope that the brain is simple enough that we can understand it; but it needs to be complex enough to do the understanding.
This is not completely unrelated to Gödel's theorem, which states, roughly, that in any sufficiently complex formal system there exist truths that are inaccessible to formal demonstration. Strictly speaking, Gödel's theorem does not apply to the brain, because the brain is not a formal system of rules and symbols. Still, it is a fact that the tightly constrained structure of our nervous system constrains the thoughts that we are able to conceive. Our mathematics, for instance, is founded on a small set of basic objects: a number sense, an intuition of space, a simple symbol-manipulation system. Will this small set of representations, crafted by evolution for a very different purpose, suffice to understand ourselves?
I see at least two reasons for hope. First, we seem to have a remarkable capacity for constructing new mental representations through culture. Through metaphor, we are able to connect old representations together in new ways, thus building new mathematical objects that extend our brain's representational power (e.g., Cartesian coordinates, a blend of number and space concepts). Second, and conversely, Nature's bag of tricks doesn't seem so huge. Indeed, this is perhaps the biggest unanswered question: how is it that, with a few simple mathematical objects, we are able to understand the outside physical world in such detail? The mystery of this "unreasonable effectiveness of mathematics," as Wigner put it, suggests a remarkable adaptation of our brain to the structure of the physical world. Will this adaptation suffice for the brain to understand itself?
Stanislas Dehaene is a cognitive scientist at the Institut National de la Santé and author of The Number Sense: How Mathematical Knowledge Is Embedded In Our Brains.
Humans are, to our knowledge, the only species who can inquire into the nature of nature. So it is not just narcissism that drives our efforts to understand what makes humans different from other animals. Often we are drawn to the great achievements of Homo sapiens in the arts, science, mathematics, and technology, because we view these achievements and the minds that created them as the paragon of what makes us special. The assumption is that these minds got an extra dose of the best of what makes humans human. But several lines of evidence are now coming together to suggest something a bit different and, for many people, more than a bit disturbing.
It is now well known that great achievers are disproportionately likely to suffer from mental illnesses. Severe mental illnesses, particularly bipolar disorder, are much more common among the greatest novelists, poets, painters, and musicians than among your everyday H. sapiens, especially in recent centuries, as the great accomplishments have become more abstract, that is, less normal. A Freudian might explain this association by a suppressive social environment that generated both the creativity and the illness. To geneticists, consideration of familial associations suggests genetic causes. What flows from these perspectives is the dogma that has dominated most of the past century: mental illness and mental creativity result primarily from an interaction between stressful environments and unusual human alleles.
A careful consideration of the evidence and application of natural selection, however, implicate another cause: infectious agents. People with schizophrenia and bipolar disorder, for example, are more likely to be born in late winter or spring when they are born at temperate latitudes. This pattern is a smoking gun for prenatal or perinatal infectious causation, which can also explain the known familial associations as well as, or better than, human genetics can. And human genetics does not offer sensible explanations of other aspects of these diseases, such as the season-of-birth associations, the urban/rural associations, or the high fitness costs associated with the diseases. People with severe mental illnesses commit suicide at a rate that is far too high to allow the maintenance of causal alleles simply by the generation of those alleles through mutation.
Noninfectious environmental influences may help explain some of these associations, but so far as primary causation of severe mental illnesses is concerned, none of the noninfectious environmental or allelic candidates have stood up to the evidence to date as well as infectious candidates. The arguments will eventually move toward resolution through the discovery of the causal agents whether they be alleles, pathogens or some noninfectious environmental influence. Alleles have been claimed as major causes of these diseases but retractions have followed claims as soon as adequate follow-up studies have been conducted. In contrast, evidence for associations between infectious agents and severe mental illnesses has mounted over the past decade in spite of much less funding support.
The associations between mental illness and creativity make sense from an evolutionary perspective. If our minds evolved to solve the challenges associated with hunting/gathering societies, we can expect the normal mind to be poorly equipped for some of the accomplishments valued by modern society, whether a new style of painting or a complex mathematical proof. If neuronal networks could fire differently, then new mental processes could be generated. Most of the re-networking that accompanies severe mental illnesses makes a person less functional for the tasks valued by society. But every now and then the reorganized brain generates something different, something that we consider extremely valuable. To distinguish this abnormality that we esteem from the abnormality that we pity, we use the term genius. If the geniuses of today were mentally ill at a rate no greater than that of the general population, then we could reasonably assume that genius was simply one tail of the naturally selected distribution of intellectual capacities.
The high rates of mental illness among the highest achievers, particularly in the arts, however, demand a different explanation. If the illnesses associated with such creativity are caused by infection, and the infection cannot be explained as a consequence of the creative lifestyle, as indicated by the season-of-birth associations, then the range of feasible explanations is narrowed. The least tortuous conclusion is that prenatal infections damage the development of the brain, generating a brain that functions differently from the naturally selected brain. Most of the time these pathogens just muck up the mind, causing mental illness without generating anything in return. But in a few lucky throws of the dice, the result is a different mind, one that is brilliantly creative.
At this level of accomplishment, it is looking more and more as though the "we" does not just belong to Homo sapiens but also to a variety of parasitic species. It may include human herpes simplex virus, Borna disease virus, Toxoplasma gondii, and many more yet-to-be-discovered species that alter the functioning of our brains, usually for the worse, but occasionally generating minds of unusual insight. Richard Dawkins's concepts of the extended phenotype and the meme return with extended license. In addition to viewing characteristics of an organism as an extension of a manipulator species for the benefit of manipulator genes, some characteristics that humans prize as the best of what makes humans human may be side effects that do not actually benefit the manipulator. They are, in effect, cultural mutations generated as side effects of biological parasitism. Like biological mutations, the cultural mutations are often detrimental, but sometimes they may create something that humans value: A Starry Night, The Raven, Nash equilibria, or perhaps even calculus. The devastation associated with these characteristics, which often involves extreme fitness loss (suicide), with damage rather than benefit to kin, cannot be explained by natural selection acting solely upon humans. The principles of natural selection emphasize that we have to consider other species that live intimately within us as part of us, affecting our neurons, shaping our minds.
Paul W. Ewald is a professor of biology at Amherst College and author of Plague Time.
cognitive science change the way we think as much as other sciences
Cognitive science is newer and not yet well-known, even among prominent scientists, and the corner of cognitive science I work in, cognitive linguistics, is even less well-known. Yet its results are just as startling, and it has just as much capacity for changing how we think.
As I read through the questions posed by my distinguished colleagues from other disciplines, I realized that the very questions they posed look very different to me as a cognitive linguist than they would to most very well educated Edge readers. It occurred to me that simply commenting on their questions from the perspective of a cognitive linguist would provide some idea of how the world might look different to someone who is acutely aware of the findings of cognitive science, especially cognitive linguistics.
With the greatest of respect for my colleagues who raised the following questions, here is one cognitive scientist's perspective on those questions, given the findings in my discipline.
Feinberg asks: "What is the relationship between being alive and
having a mind?"
Here are some examples of the ways that conceptual thought depends on the peculiarities of the body and brain: Spatial relations concepts arise from structures in the visual system like topographic maps and orientation-sensitive cells. The way we structure events appears to arise from neural schemas for motor control and perception in the prefrontal cortex. Abstract reasoning makes use of embodied reasoning via metaphoric projections from the sensory motor system to higher cortex. Our vast system of primary conceptual metaphors appears to develop spontaneously during childhood because just about all children have certain recurrent experiences in the world.
In short, without a body with a brain functioning in the world, there are no concepts and there is no mind. Computers don't think. They don't understand. They just compute.
Myers: "Why do we fear the wrong things?"
Taylor asks: "Is morality relative or absolute?"
Deutsch asks: "How are moral assertions connected with the world
Douglas Rushkoff: "Are stories the only way we have of interpreting our world, meaning that the forging of a collective set of mutually tolerant narratives is the only route to a global civilization?"
In a word, yes. Interestingly enough, the kinds of stories that defined civilizations seem rather restricted in character, as do the kinds of stories that define what a possible "history" is. It is, of course, an open question as to whether a "global civilization" is possible, or even desirable. Diversity is a crucial value.
Jordan Pollack asks: "Is there Progress?"
What constitutes "progress" depends on your conceptual system, especially your moral system. I happen to agree with Pollack that the Bush administration is morally regressive and that things have gotten much worse in the past year, especially since September 11. But that is because I (and probably Pollack) accept nurturant morality. If you see the world in terms of strict father morality, as George W. Bush does, then from that perspective there has been "progress."
There is of course a difference between scientific progress and human progress. As Bill Joy has observed, there is some scientific "progress" that represents a huge backward step.
Knobel asks: "Do we want to live in one world, or two?"
Cognitive science is important here, because of certain myths arising from moral conceptual systems, conceptual framing, and economic metaphors. Here's what those myths are and how they work:
The Market Myth: The market is seen metaphorically as a force of nature that works optimally and that it is "unnatural" and dangerous to "tinker with." It follows that, if the market determines the value of your labor, that is "natural," "fair," and a consequence of an optimal system.
This metaphor is disastrously at odds with how markets actually work. Markets are constructed; for example, it took more than 900 pages of regulations to build and constrain the global market of the WTO. The stock market is constructed and maintained by the SEC and other institutions.
Moral Self-Interest: Strict father morality has a moral version of Adam Smith's invisible hand metaphor: If everyone pursues his own well-being, then the well-being of all will be maximized. This has the corollary: Being a "do-gooder" (not pursuing your own self-interest) screws up the system.
The Bootstrap Myth: In America, everyone can pull himself up by his bootstraps, that is, succeed if he works hard enough.
These myths work in concert in a disastrous way. In the U.S., about one-quarter of the population (roughly, those without health care) performs difficult and absolutely essential work for which, because of the structure of the economy, they cannot be paid fairly: caring for children and the elderly, house cleaning, picking fruits and vegetables, working in fast-food joints, doing day labor, and so on. Without them, our economy could not function. These workers make possible the lifestyles of the upper three-quarters of the population. Yet, for the most part, the economy is such that their employers cannot afford to pay them a wage commensurate with their contribution to the economy.
The result is what I call the "two-tier economy."
Though any one person might be able to pull himself up by his bootstraps, one-quarter of the population cannot. For this society to run, some quarter of the population has to do work that cannot be paid commensurate with its value to the economy.
This is actually a failure of the way our economy is set up. Since lower-tier workers effectively work to keep the economy going, they should be paid by the economy as a whole, via the way markets are commonly constructed and tweaked: through the tax code. Provide for a negative income tax. The money is there in the economy.
Why doesn't this happen? Partly because of the greed and power of the wealthy. But also because of the three myths given above. They hide the nature of the problem and its solution.
Globally, of course, the situation is much worse. Our current regulations constructing the global marketplace are unethical. An ethical globalization, one based on an ethics of care, is needed.
Sabbagh asks: "Would an extra-terrestrial civilization develop
the same mathematics as ours? If not, how could theirs possibly
Mathematics especially shows a dependence on the details of human bodies and brains, as Núñez and I show in Where Mathematics Comes From. Number arises from very special neural circuitry. Advanced mathematical ideas arise from a long series of interlocking conceptual metaphors. The most important of these is what we refer to as the Basic Metaphor of Infinity, which allows one to use finite experience to metaphorically characterize the idea of actual infinity which stands outside the experience of finite beings. A vast portion of modern mathematics depends on this metaphor.
Wertheim: "How can we understand the fact that such complex and
precise mathematical relations inhere in nature?"
Mathematics makes use of the same conceptual apparatus used by the human mind generally, which allows for mathematical ideas: ideas grounded in our bodies and mostly making use of metaphor. Mathematical ideas, like other ideas, don't go floating around in the air. Those ideas arise from human brains that evolved to run human bodies, and they don't exist outside those brains.
The neural capacity to link ideas to symbols is central to mathematics. Computation is made possible by neural mappings that link mathematical ideas to their symbolizations, in such a way that conceptual inferences can be mirrored by symbolic computations.
Scientists are astute observers of nature. They use their conceptual systems to understand nature and to classify natural phenomena and to reason about them. Science uses ideas like change, size, proportion, inversion, and so on. Mathematics uses the same ideas, mapped precisely onto symbolizations. Thus, there are physical phenomena that change in inverse proportion to their size and there is a mathematics that expresses the same ideas with accompanying computations. The correlation between the mathematics and the world occurs in the mind of the scientist, because scientists understand the world in terms of ideas, and those very ideas either occur in the conceptual system of existing mathematics or scientists make up a new mathematics to mathematicize those ideas.
Paul Bloom: "How will people think about the soul?"
David Gelernter: "Why is religion so important to most Americans and so trivial to most intellectuals?"
John Horgan: "Do we want the God machine?"
Religion has many aspects, at least the following, which
cognitive science has something to say about:
First, the personification metaphors, centering on God as Parent, typically a father. Eve Sweetser has observed that if you take the properties of the father (progenitor, authority figure, powerful person, protector, he loves you, etc.), you get the other commonplace metaphors for God (creator, lawgiver, king or lord, shepherd, lover, and so on).
Second, the same basic metaphor of infinity that underlies actual infinity in mathematics characterizes God as infinite: all-knowing, all-powerful, first cause, the highest good.
Third, the immanence metaphor: God is the world. (Do not say God is not in the stone; God is in the stone!) Most traditions have immanent versions (e.g., Kabalistic Judaism), and immanence seems central to Buddhism.
The Explanatory Aspect: Religions claim to answer fundamental questions: Where did we come from? What is the future? Is there life after death? Are we mortal or immortal? Do we have a soul? Religions commonly have prophets, who offer such explanations. Explanations come in the form of rich metaphorical narratives. The highest calling is to know God, or seek to do so, according to whatever metaphor for God one is using.
The Moral Aspect: Religions are fundamentally moral. They tell you how to live, what is good or bad. They often use the metaphor of Moral Accounting in one way or another, with good and bad deeds added up and balanced. This is often tied up with issues of either Karma (moral accounting with the universe) or reward and punishment in an afterlife. In addition, there are saints (figures who set examples for us to follow), devils (evil-doers who set examples for us not to follow), and martyrs (who have suffered for the religion and thus gain extra credit). Following a religion is not easy, and it involves considerable responsibility and discipline.
The Experiential Aspect: Forms of spiritual experience, which we now know are physical in character: brain states. Religious experience is also communal, and communities are vital to religion.
From cognitive science, we know that thought, perception, and even personality are embodied in the brain: you can't think, see, or be who you are without appropriate neural activity in the right parts of the brain. Thus, if you had a disembodied soul that could live on after death, it couldn't see (without a visual cortex), couldn't hear (without an auditory cortex), couldn't feel (with none of the brain's emotional centers), couldn't have empathy (with no mirror neurons), wouldn't have a memory, and wouldn't have your personality (without the right prefrontal cortex). In short, it wouldn't be much of anything, certainly not much of you.
It is easy to debunk aspects of religion, like religious explanations and notions of the soul, and in cases like creationism it is important to do so. But there are very good cognitive reasons that people find meaning in religion and believe religions. Religions fit common metaphors. Religions provide moral guidance for life that makes sense because religions use common metaphors for morality: morality as accounting (summing up good and bad), purity, uprightness (heaven is up, hell is down), and so on. Religions provide spiritual practice, which is seen as a way to gain knowledge (of God), to connect with the infinite (God), and which, if followed, can lead to spiritual experience (a real physical experience), involving a sense of the elimination of boundaries and of connectedness with others and with the universe. Religions also provide a spiritual community, in which one can connect with others dedicated to the same ideals.
"Do we want the God machine?" No. The point of religion is the practice, the path, the moral life, and the connection with others and the world in one's everyday life. The end point makes no sense and has no point without traveling the path. The God machine will be ignored by those for whom religion in all its aspects is important.
It has often been observed that science has many of the properties of religion. Science seems to take the form of a religion based on the immanence metaphor, with God as the universe and the highest calling being to understand the universe (to know God). Many central questions of science come from religion: Where did we come from? (The Big Bang.) What is the future? Will the universe keep expanding? Is the universe finite or infinite? From this perspective, the drive for a single unified Theory of Everything is metaphorically the drive to know a single God.
Issues of immortality are central to science, with Reputation metaphorically playing the role of the Soul in some respects. Seeking knowledge is moral behavior, and making important discoveries is doing Good. The reward can be immortality: your reputation can live forever. If you win a Nobel Prize, it is there forever, whether you are or not; it makes you one of the immortals. There are saints (Einstein, Darwin, Newton, etc.) and saints' lives. There are even relics (Einstein's brain) and reliquaries (Who got Einstein's office?).
The explanations science offers are metaphorical. Conceptual metaphors preserve inferences, which makes them useful for science. But scientific metaphors differ from the metaphors of religion because they presuppose empirical observation, and science uses very special metaphors that not only preserve inference but are mathematicized, that is, have a symbolic calculus attached, which allows for calculations and predictions. Einstein's great metaphor in general relativity was that time is a spatial dimension and that gravity is curvature in space-time. The metaphor yields a beautiful, predictive mathematics, but it is no consolation when you fall and hurt your knee and are told by a literal Einsteinian that no force acted on it; rather, it moved along a geodesic in space-time.
A common metaphor in physics is What Exists Is What Can Be Observed, which lies behind Lee Smolin's contribution. Then there are the proposed new metaphors that come out in the questions:
"Are the laws of nature a form of computer code that needs and uses error correction?"
"Is information the basic building-block of the universe?"
These are, of course, serious proposals to use new metaphors and the mathematics that goes with them to yield laws of nature that make better predictions.
A reasonable answer to David Gelernter's question is that scientists do have a religion: science itself. As an immanence religion, in which God is the Universe, the Universe becomes sacred, understanding the universe becomes a form of knowing God, scientific practice is religious practice, scientific discipline is devotion, the "work trance" of the scientist is a form of meditation, scientific discovery is moral action, a good reputation is a reward for moral action, and the immortality of Reputation is the Immortality of the Soul.
Science also has its cult aspects. They peek through occasionally in Edge discussions. Sometimes, when I read Edge, I feel like I'm standing at the supermarket checkout counter reading the National Enquirer's stories about sightings of extraterrestrials.
"Is God nothing more than a sufficiently advanced extraterrestrial intelligence?"
could be a National Enquirer headline.
One would like to think that the belief in extraterrestrial scientists and mathematicians stems simply from a lack of education in cognitive science. That's not a field that most physical scientists or mathematicians are trained in or even read. They don't learn all the amazing details that go into the embodiment of concepts, concept by concept. They don't learn about the staggering number of biological accidents that had to happen for cells to develop, and then neurons, and then neural "computation," and neural networks, and then all the myriad further accidents required to get specialized neural structures to run bodies, and after that to develop concepts and reasoning biologically. Most scientists don't learn the details and so don't know that the probability of anything like this happening twice is virtually zero, despite the billions of stars out there. But I'm not so sure that mere education would help.
There are reasons why ordinary folks believe in the soul, and there are similar reasons why so many scientists believe in extra-terrestrials with mathematics just like human mathematics. Believing in the soul does not just allow for comforting beliefs that you will someday be reunited with loved ones who have died and that you will get your reward for being good in heaven. It also has a basis in experience, oddly enough.
Consider the phenomenon of hearing yourself think. When you hear another person, there is an external sensory input coming from the other person. But, though your thought is not something you can perceive, there are neural connections linking ideas with the brain centers for sound production and perception. The acoustic cortex can be activated not just by external stimuli but also by brain-internal neural connections. When you "hear yourself think," the neural activation is coming from inside the brain, but the experience, in part, is similar to hearing a stimulus originating outside the body. It is as though "you" are hearing another person express thoughts, another "person" inside you, separate from your body. It is that experience that makes it sensible to think in terms of a "soul" inside you, capable of thought but separate from your body.
The popular belief in extraterrestrials also has natural cognitive origins. It seems to have arisen from the idea of the exotic foreigner. Ming the Merciless in the Flash Gordon movies was made up to look Asian. The reasoning seems to be: if there are people from other countries who look vaguely like Westerners but have somewhat different features, language, and culture, there could be such folks from other planets. The variations on extraterrestrials in science fiction films run from Spock to Klingons to featureless creatures that commandeer our bodies. There is an overlap as well with other otherworldly creatures: angels and devils. A common folk theory is that it is human emotions that make us human; so Spock, for example, has no emotions, nor do machine- and insect-like extraterrestrials.
But the most common theme is that extraterrestrials are foreigners (explorers, exiles, or conquerors) basically like us, with language, reason, mathematical and scientific abilities, good and evil motives, as well as bilateral symmetry, heads, eyes, ears, mouths, arms, legs, and so on.
What interests me, as a cognitive scientist, is the physical scientists' version, and the arguments usually given.
The Hubris Argument: The progress of science, starting with Copernicus, has been a move away from human beings being at the center of the universe. This is one more step: a scientific, and hence anti-religious, progression away from human beings being special, being made uniquely in the image of God, and being the unique inheritors of the material world. The idea of extraterrestrials makes us more modest (modesty is a moral trait), and to suggest that human beings might be the only intelligent species in the universe is to show enormous and inappropriate hubris.
The Probability + Evolution Argument: First, the probability argument: there are billions and billions of stars in the universe; some small percentage of them have planets, and some small percentage of the planets have the right chemical composition, atmosphere, and climate for life. Even if that percentage is small, the number of stars is so large that the probability is high that the chemical and climatic conditions for life exist elsewhere in the universe.
Then the evolution argument: Once the chemicals, climate, and atmosphere are right, evolution takes over. Evolution is a natural universal process in which more complex molecules are formed from less complex molecules and a certain percentage are stable. The process produces more and more complex molecules, until DNA-like molecules capable of reproduction (and hence life) are produced and start reproducing. Biological evolution then takes over. More complex life forms are randomly produced and a certain percentage survive and reproduce further. It is assumed that organisms with higher complexity will be able to survive better than those with lower complexity, so that evolution will naturally progress toward more complex organisms. Eventually organisms with some intelligence will be randomly produced, and since they will have an evolutionary advantage, they will survive. The process will repeat, with more and more complex intelligent organisms being produced, until they become intelligent like us and develop mathematics and science, which allow them to adapt optimally.
The Math in the World Argument: It is assumed that the physical universe works according to fixed laws stateable in mathematical terms, and that the mathematics inheres in the material world (logarithmic spirals in snails and nebulae, Fibonacci series in flowers, quadratic equations in home runs). It is further assumed that these laws are the same everywhere in the universe. Thus intelligent beings who survived via evolution to function in the world must have acquired the same mathematics.
The Probability + Evolution Argument leaves out the probabilities for the evolution of biological structures of the precise form capable of "computing" just the right kinds of ideas to reason with in general, as well as just the right ideas for the relevant mathematics and for characterizing the symbolization of those ideas. Those biological mechanisms and neural structures are so peculiar and complex that the probability is effectively zero that the precise biological structures for the right mathematical ideas will evolve ever again anywhere. Those astronomically low probabilities are always left out of the argument.
Well, the counterargument goes, it happened once; how do you know it couldn't happen again? Obviously we don't. But that's not the point. A scientific argument must be positive: it must be an argument from knowledge, not from lack of knowledge. No serious scientific argument has ever been given that takes the relevant cognitive science into account.
In short, the physical scientists who believe in extraterrestrial intelligence are arguing in a way they would never get away with arguing in their serious scientific fields. Why?
The answer is that their belief in extra-terrestrial mathematicians who have our mathematics fits an important myth, an identity-defining myth that Núñez and I, in Where Mathematics Comes From, called The Romance of Mathematics. The Romance goes like this:
Mathematical entities and relations really exist. They structure this universe and any possible universe. The physical universe works according to mathematical laws that inhere in the universe itself, independent of any beings. Correct reason is a form of mathematical logic, which is a form of mathematics. Since the universe is structured rationally, mathematical logic inheres in it too. Human mathematics is part of the abstract, transcendent mathematics. A mathematical proof is a discovery of a universal truth. It thus takes one beyond the merely human and puts one in touch with transcendent truth. To learn mathematics is thus to learn the language of nature, a mode of thought that would have to be shared by any highly intelligent beings anywhere in the universe. Because mathematics is disembodied and rational thought is a form of mathematical logic, intelligent thought can exist outside of living beings. Thus, machines can in principle think.
Every part of this Romance is false. It contradicts what we know from cognitive science and neuroscience. But it serves an important role in the "religion" of many mathematicians and physical scientists. Indeed, it is one of the defining narratives of that religion.
If God is taken in the immanent sense as being the universe, the Romance says that those, and only those, who know mathematics can understand the universe and thus, metaphorically, "know God." They are "seers" who can see what ordinary folks cannot. Mathematics, according to the Romance, takes you beyond yourself, to the realm of the transcendent. Science and mathematics are therefore sacred activities, and scientists and mathematicians become high priests of their religion. They deserve not just respect but awe. Great mathematicians and physical scientists are therefore special beings, like saints. As such, they can communicate with the angels: the extraterrestrial scientists and mathematicians of superior intelligence.
It is unlikely that most people will give up the soul on the basis of what is known about cognitive science and neuroscience. It is too much a part of who they are. It is part of a concept of self-identity that is physically in their brains and not likely to change.
It is just as unlikely that most mathematicians and physical scientists will give up on the Romance and their own religious identity just because cognitive scientists and neuroscientists have found that the Romance is scientifically untenable. The Romance is also part of their understanding of who they are, and as such, is physically instantiated in the brains of many mathematicians and scientists. That is why they believe in extra-terrestrials and why that belief is not likely to change.
cognitive science change the way we think?
George Lakoff is Professor of Linguistics at the University of California at Berkeley and author of Where Mathematics Comes From (with Rafael Núñez).
'folk concepts' of the mind have anything to do with what really happens in the brain?"
For hundreds of years the pattern in science has been to overturn folk concepts, and it seems to me the brain may be the next field for such a conceptual revolution. It may be that in a hundred years people will speak of free will, or the unconscious, or emotion, in the way that we now speak of "sunrise" or "forever": words that serve for day-to-day talk but don't map reality. We know the sun doesn't rise, because it is the earth that moves, and we know that humanity and its planet and the universe itself won't last forever. I see signs that concepts of the mind are due for the same sort of revision. And so that's the question I keep returning to.
David Berreby writes about science and culture. His work has appeared in The New York Times Magazine, The New Republic, Slate, The Sciences, and many other publications.
non-sustainable developments (i.e., atmospheric change, deforestation, fresh water use, etc.) become halted in pleasant ways of our choice, or in unpleasant ways not of our choice?"
Jared M. Diamond, Professor of Physiology at the UCLA School of Medicine, is the Pulitzer Prize-winning author of the widely acclaimed Guns, Germs, and Steel: The Fates of Human Societies.