I believe that the outlines of a new narrative are becoming visible—a story in which cooperative arrangements, interdependencies, and collective action play a more prominent role and the essential (but not all-powerful) story of competition and survival of the fittest shrinks just a bit.
(i) The human brain is the most complex entity in the known universe;
(ii) With this marvelous product of evolution we will be successful in eventually discovering all that there is to discover about the physical world, provided of course, that some catastrophic event doesn't terminate our species; and
(iii) Science provides the best means to attain this ultimate goal.
When the scientific endeavor is considered in relation to the obvious limitations of the human brain, the knowledge we have gained in all fields to date is astonishing. Consider the well-documented variability in the functional properties of neurons. When recordings are made from a single cell—for instance, in the visual cortex in response to a flashing spot of light—one can't help but be amazed by the trial-to-trial variations in the resulting responses.
On one trial this simple stimulus might elicit a high-frequency burst of discharges, while on the next trial there could be just a hint of a response. The same thing is apparent when EEG recordings are made from the human brain. Brain waves change in frequency and amplitude in seemingly random fashion even when the subject is lying in a prone position without any variations in behavior or the environment.
So how does the brain do it? How can it function as effectively as it does given the "noise" inherent in the system? I don't have a good answer, and neither does anyone else, in spite of the papers that have been published on this problem. But in line with the second of the three beliefs I have listed above, I am certain that someday this question will be answered in a definitive manner.
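The trial-to-trial variability described above can be illustrated with a toy simulation. This is a sketch, not a model of any real cortical neuron: the 100 time bins, the 0.2 per-bin firing probability, and the 50 trials are all invented for illustration. Each trial presents a statistically identical "stimulus," yet the spike counts scatter from trial to trial:

```python
import random

def run_trials(n_trials=50, n_bins=100, p_spike=0.2, seed=42):
    """Simulate repeated presentations of an identical stimulus.

    Each trial consists of n_bins independent time bins in which the
    model neuron fires with probability p_spike; the returned list
    holds one spike count per trial. All numbers are illustrative.
    """
    rng = random.Random(seed)
    return [sum(1 for _ in range(n_bins) if rng.random() < p_spike)
            for _ in range(n_trials)]

counts = run_trials()
print("spike counts over first 10 trials:", counts[:10])
print("min =", min(counts), "max =", max(counts))
```

Even though every trial is generated by the same process, the counts vary widely around the mean of roughly 20 spikes. That scatter, present with no change in "stimulus" at all, is the kind of noise the essay is pointing at.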
I think that the notions of space and time will turn out to be useful only within some approximation. They are similar to a notion like "the surface of the water," which loses meaning when we describe the dynamics of the individual atoms forming water and air: if we look at a very small scale, there isn't really any actual surface down there. I am convinced space and time are like the surface of the water: convenient macroscopic approximations, flimsy but illusory and insufficient screens that our mind uses to organize reality.
In particular, I am convinced that time is an artifact of the approximation in which we disregard the large majority of the degrees of freedom of reality. Thus "time" is just the reflection of our ignorance.
I am also convinced, but cannot prove, that there are no objects, but only relations. By this I mean that I am convinced that there is a consistent way of thinking about nature that refers only to interactions between systems and not to states or changes of individual systems. I am convinced that this way of thinking about nature will turn out to be the useful and natural one in physics.
Beliefs that one cannot prove are often wrong, as shown by the fact that this Edge list contains contradictory beliefs. But they are essential in science, and often healthy. Here is a good example from twenty-five centuries ago: Socrates, in Plato's Phaedo, says:
Finally, I am also convinced, but cannot prove, that we humans have an instinct to collaborate, and that we have rational reasons for collaborating. I am convinced that ultimately this rationality and this instinct of collaboration will prevail over the shortsighted egoistic and aggressive instinct that produces exploitation and war. Rationality and instinct of collaboration have already given us large regions and long periods of peace and prosperity. Ultimately, they will lead us to a planet without countries, without wars, without patriotism, without religions, without poverty, where we will be able to share the world. Actually, maybe I am not sure I truly believe that I believe this; but I do want to believe that I believe this.
I think, as did Gödel, that the continuum hypothesis is false. No one will ever prove it false from the presently accepted axioms of set theory. Chris Freiling's proposed new axioms (1986) prove it false, but they are not regarded as intuitive.
I think human-level artificial intelligence will be achieved.
What do I believe is true even though I cannot prove it? This question has a double edge and needs two answers.
First, and most simply: "everything". On a strict Popperian reading, all the things I "know" are only propositions that I have not yet falsified. They are best estimates, hypotheses that, so far, make sense of all the data that I possess. I cannot prove that my parents were married on a certain day in a certain year, but I claim to "know" that date quite confidently. Sure, there are documents, but in fact in their case there are different documents that present two different dates, and I recall the story my mother told to explain that and I believe it, but I cannot "prove" that I am right. I also know Newton's Laws and indeed believe them, but I also now know their limitations and imprecisions and suspect that more surprises may lurk in the future.
But that's a generic answer and not much in the forward-looking and optimistic spirit that characterizes Edge. So let me propose this challenge to practitioners of my own historical craft. I believe that there are in principle better descriptions and explanations for the development and sequence of human affairs than human historians are capable of providing. We draw our data mainly from witnesses who share our scale of being, our mortality, and for that matter our viewpoint. And so we explain history in terms of human choices and the behavior of organized social units. The rise of Christianity or the Norman Conquest seems to us an event we can explain, and we explain it in human-scale terms. But it cannot be excluded or disproved that events can be better explained on a much larger time scale or a much smaller scale of behavior. An outright materialist could argue that all my acts, from the day of my birth, have been a determined result of genetics and environment. It was fashionable a generation ago to argue a Freudian grounding for Luther's revolt, but in principle it could as easily be true and, if we could know it, more persuasive to demonstrate that his acts were determined at the molecular and submolecular level.
The problem with such a notion is, of course, that we are very far from being able to outline such a theory, much less make it persuasive, much less make it something that another human being could comprehend. Understanding even one other person's life at such microscopic detail would take much more than one lifetime.
So what is to be done? Of course historians will constantly struggle to improve their techniques and tools. The advance of dendrochronology (dating wood by its tree rings, and consequently dating buildings and other artifacts far more accurately than ever before) can stand as one example of the way in which technological advance can tell us things we never knew before. But we will also continue to write and to read stories in the old style, because stories are the way human beings most naturally make sense of their world. An awareness of the powerful possibility of whole other orders of possible description and explanation, however, should at least teach us some humility and give us some thoughtful pause when we are tempted to insist too strongly on one version of history—the one we happen to be persuaded is true. Even a Popperian can see that this kind of intuition can have a beneficial effect.
Although I can't prove it, I believe that thanks to new kinds of social modeling, that take into account individual motives as well as group goals, we will soon grasp in a deep way how collective human behavior works, whether it's action by small groups or by nations. Any predictive power this understanding has will be useful, especially with regard to unexpected outcomes and even unintended consequences. But it will not be infallible, because the complexity of such behavior makes exact prediction impossible.
I believe that intelligent life may presently be unique to our Earth, but that, even so, it has the potential to spread through the galaxy and beyond—indeed, the emergence of complexity could still be near its beginning. If SETI searches fail, that would not render life a cosmic sideshow. Indeed, it would be a boost to our cosmic self-esteem: terrestrial life, and its fate, would become a matter of cosmic significance. Even if intelligence is now unique to Earth, there's enough time lying ahead for it to spread through the entire Galaxy, evolving into a teeming complexity far beyond what we can even conceive.
There's an unthinking tendency to imagine that humans will be around in 6 billion years, watching the Sun flare up and die. But the forms of life and intelligence that will have emerged by then would surely be as different from us as we are from a bacterium. That conclusion would follow even if future evolution proceeded at the rate at which new species have emerged over the 3 or 4 billion years of the geological past. But post-human evolution (whether of organic species or of artefacts) will proceed far faster than the changes that led to our emergence, because it will be intelligently directed rather than being—like pre-human evolution—the gradual outcome of Darwinian natural selection. Changes will drastically accelerate in the present century—through intentional genetic modifications, targeted drugs, perhaps even silicon implants into the brain. Humanity may not persist as a single species for more than a few centuries—especially if communities have by then become established away from the Earth.
But a few centuries is still just a millionth of the Sun's future lifetime—and the entire universe probably has a longer future still. The remote future is squarely in the realm of science fiction. Advanced intelligences billions of years hence might even create new universes. Perhaps they'll be able to choose what physical laws prevail in their creations. Perhaps these beings could achieve the computational capability to simulate a universe as complex as the one we perceive ourselves to be in.
My belief may remain unprovable for billions of years. It could be falsified sooner—for instance, we (or our immediate post-human descendants) may develop theories that reveal inherent limits to complexity. But it's a substitute for religious belief, and I hope it's true.
This is a treacherous question to ask, and a trivial one to answer. Treacherous because the shoals between the written lines can be navigated by some to the conclusion that truth and religious belief develop by the same means and are therefore equivalent. To those unfamiliar with the process by which scientific hunches and hypotheses are advanced to the level of verifiable fact, and the exacting standards applied in that process, the impression may be left that the work of the scientist is no different than that of the prophet or the priest.
Of course, nothing could be further from reality.
The whole scientific method relies on the deliberate, high-magnification scrutiny and criticism by other scientists of any mechanisms proposed by any individual to explain the natural world. No matter how fervently a scientist may "believe" something to be true, and unlike religious dogma, his or her belief is not accepted as a true description or even approximation of reality until it passes every test conceivable, executable and reproducible. Nature is the final arbiter, and great minds are great only in so far as they can intuit the way nature works and are shown by subsequent examination and proof to be right.
With that preamble out of the way, I can say that for me personally, this is a trivial question to answer. Though no one has yet shown that life of any kind, other than Earthly life, exists in the cosmos, I firmly believe that it does. My justification for this belief is a commonly used one, with no strenuous exertion of the intellect or suspension of disbelief required.
Our reconstruction of early solar system history, and the chronology of events that led to the origin of the Earth and moon and the subsequent development of life on our planet, informs us that self-replicating organisms originated from inanimate materials in a very narrow window of time. The tail end of the accretion of the planets—a period known as "the heavy bombardment"—ended about 3.8 billion years ago, approximately 800 million years after the Earth formed. This is the time of formation and solidification of the big flooded impact basins we readily see on the surface of the Moon, and the time when the last large catastrophe-producing impacts also occurred on the Earth. In other words, the terrestrial surface environment didn't settle down and become conducive to the development of fragile living organisms until nearly a billion years had gone by.
However, the first appearance of life forms on the Earth, the oldest fossils we have discovered so far, occurred shortly after that: around 3.5 billion years ago or even earlier. The interval in between—only 300 million years, less than the time represented by the rock layers in the walls of the Grand Canyon—is the proverbial blink of the cosmic eye. Despite the enormous complexity of even the simplest biological forms and processes, and the undoubtedly lengthy and complicated chain of chemical events that must have occurred to evolve animated molecular structures from inanimate atoms, it seems an inevitable conclusion that Earthly life developed very quickly, as soon as the coast was clear long enough to do so.
Evidence is gathering that the events that created the solar system and the Earth, driven predominantly by gravity, are common and pervasive in our galaxy and, by inductive reasoning, in galaxies throughout the cosmos. The cosmos is very, very big. Consider the overwhelming numbers of galaxies in the visible cosmos alone and all the Sun-like stars in those galaxies and the number of habitable planets likely to be orbiting those stars and the ease with which life developed on our own habitable planet, and it becomes increasingly unavoidable that life is itself a fundamental feature of our universe ... along with dark matter, supernovae, and black holes.
I believe we are not alone. But it doesn't matter what I think because I can't prove it. It is so beguiling a question, though, that humankind is presently and actively seeking the answer. The search for life and so-called "habitable zones" is becoming increasingly the focus of our planetary explorations, and it may in fact transpire one day that we discover life forms under the ice on some moon orbiting Jupiter or Saturn, or decode the intelligible signals of an advanced, unreachably distant, alien organism. That will be a singular day indeed. I only hope I'm still around when it happens.
I believe that we are writing software the wrong way. There are sound evolutionary reasons for why we are doing what we are doing—what we can call the "programming the problem in a computer language" paradigm—but the incredible success of Moore's law blinded us to being stuck in what is probably an evolutionary backwater. There are many warning signs. Computers are demonstrably ten thousand times better than not so long ago. Yet we are not seeing their services improving at the same rate (with some exceptions—for example, games and Internet searches). On an absolute scale, a business or administration problem that would take maybe one hundred pages to describe precisely will take millions of dollars to program for a computer, and often the program will not work. Recently a smaller airline came to a standstill due to a problem in crew scheduling software—raising the ire of Congress, not to mention its customers. My laptop could store 200 pages of text (1/2 megabyte) for each and every crew member at this airline just in its fast memory, and a hundred times more (a veritable encyclopedia of 20,000 pages) for each person on its hard disk. Of course, for a schedule we would need maybe one or two—or at most ten—pages per person. Even with all the rules—the laws, the union contracts, the local, state, and federal taxes, the duty-time limitations, the FAA regulations on crew certification—is there anyone who believes that the problem is not simple in terms of computing? We need to store and process at most 10 pages per person where we have capacity for two thousand times more in one cheap laptop! Of course the problem is complex in terms of the problem domain—but not shockingly so. I would estimate that all the rules possibly relevant to aircraft crew scheduling are expressible in less than a thousand pages—or 1/2 of one percent of the fast memory.
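The storage arithmetic in this passage can be checked directly. The sketch below takes the essay's figure of 200 pages per half megabyte at face value; the 512 MB of fast memory is my assumption for a 2005-era laptop, not a number the essay states:

```python
# Essay's figure: 200 pages of text occupy 1/2 megabyte.
BYTES_PER_PAGE = (0.5 * 1024 * 1024) / 200      # ~2.6 KB per page
FAST_MEMORY = 512 * 1024 * 1024                 # assumed 512 MB of RAM

rulebook_pages = 1000                           # "less than a thousand pages"
rulebook_bytes = rulebook_pages * BYTES_PER_PAGE

fraction = rulebook_bytes / FAST_MEMORY
print(f"rulebook: {rulebook_bytes / 1e6:.1f} MB "
      f"= {fraction:.2%} of fast memory")
```

Under these assumptions the entire hypothetical rulebook comes to about 2.6 MB, roughly half of one percent of fast memory, which is consistent with the essay's estimate.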
I believe in the creative power of boredom.
Here is an observation from mathematical practice. By now the well-defined concept of an algorithm is widely hailed as the way to solve problems—more precisely, sequences of problems labeled by a numerical parameter. The implementation of a specific algorithm may be boring, a task best left to a machine, while the construction of the algorithm, together with a rigorous proof that it works, is a creative and often laborious enterprise.
For illustration, consider group theory. A group is defined as a structure consisting of a non-empty set and a binary operation obeying certain laws. The theory of groups consists of all sentences true of all groups; its restriction to the formal "first order" language L determined by the group structure is called the elementary theory TG of groups. Here we have a formal proof procedure, proven complete by Gödel in his PhD thesis the year before his incompleteness proof was published. The elementary theory of groups is axiomatizable: it consists of exactly those sentences that are derivable from the axioms by means of the rules of first order logic. Thus TG is an effectively (recursively) enumerable subset of L; a machine, unlimited in power and time, could eventually come up with a proof of every elementary theorem of group theory. However, a human group theorist would still be needed to select the interesting theorems out of the bulk of the merely true. The development of TG is no mean task, although its language is severely restricted.
The axiomatizability of a theory always raises the question of how to recognize the non-theorems. The set FF of those L-sentences that fail in some finite group is recursively enumerable by an enumeration of all finite groups—a simple matter, in principle. But, as all the excitement over the construction of finite simple monsters has amply demonstrated, that again is in reality no simple task.
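The remark that enumerating finite groups is simple "in principle" can be made concrete: a machine can test the group axioms mechanically on any finite operation table. The sketch below is my own illustration, not part of the essay; it checks closure, associativity, identity and inverses for addition mod 3 (the cyclic group Z_3) and for a non-example:

```python
def is_group(elements, op):
    """Check the group axioms for a finite set with binary operation op."""
    # Closure: op must stay inside the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: exactly one two-sided identity element.
    ids = [e for e in elements
           if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if len(ids) != 1:
        return False
    e = ids[0]
    # Inverses: every element has a two-sided inverse.
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

z3 = [0, 1, 2]
print(is_group(z3, lambda a, b: (a + b) % 3))   # addition mod 3: a group
print(is_group(z3, lambda a, b: (a - b) % 3))   # subtraction mod 3: not one
```

Running the same check over every operation table on sets of each size is exactly the brute-force enumeration of finite groups the essay alludes to, and it is exactly the part that is simple in principle but explosive in practice.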
Neither the theory of finite groups nor the theory of all groups is decidable. The most satisfying proof of this fact shows how to construct, for every pair (A, B) of disjoint recursively enumerable sets of L-sentences, where A contains all of TG and B contains FF, a sentence S that belongs neither to A nor to B. This is the deep and sophisticated theorem of effective non-separability, proved in the early sixties independently by Mal'cev in the USSR and by Tarski's pupil Cobham.
It follows that constructing infinite counterexamples in group theory is a truly creative enterprise, while the theory of finite groups is not axiomatizable, and so recognizing a truth about finite groups requires deep insight and a creative jump. The concept of finiteness in group theory is not elementary, and yet we have a clear idea of what is meant by talking about all finite groups—a marvelously intriguing situation.
To wind up with a specific answer to the 2005 Question:
I do believe that every sentence expressible in the formal language of elementary group theory is either true of all finite groups or else fails for at least one of them.
This statement may at first sight look like a logical triviality. But when you try to prove it honestly, you find that you would need a decision procedure which, given any sentence S of L, would yield either a proof that S holds in all finite groups or else a finite group in which S fails. By the inseparability theorem mentioned above, there is no such procedure.
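The belief and the obstacle to proving it can be stated compactly. The notation below is mine, following the essay's TG and FF; here Th_fin denotes the set of L-sentences true in all finite groups:

```latex
% The belief: every elementary sentence falls on one side or the other.
(\forall S \in L)\ \bigl[\, S \in \mathrm{Th}_{\mathrm{fin}}
  \ \lor\ S \in FF \,\bigr]

% A proof would require a decision procedure, i.e. a recursive set C with
\mathrm{Th}_{\mathrm{fin}} \subseteq C
\qquad\text{and}\qquad
C \cap FF = \emptyset
% but the effective inseparability theorem (Mal'cev, Cobham) shows that
% no such recursive C exists.
```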
If asked whether I hold the equivalent belief for the theory of all groups, I would hesitate, because the concept of an infinite counterexample is not as concrete to my mind as that of the totality of all finite groups. These are the areas where personal intuition starts to come into play.
Theology goes a long way towards imbuing substance and processes with meaning—describing life as "matter reaching towards divinity," or as the process through which divinity calls matter back up into itself—but theologians repeatedly make the mistake of ascribing this sense of purpose to history rather than the future. This is only natural, since the narrative structures we use to understand our world tend to have beginnings, middles, and ends. In order to experience the pay-off at the end of the story, we need to see it as somehow built in to the original intention.
That's why it's so important to recognize that evolution, at its best, is a team sport. As Darwin's later, lesser-known, but more important works contended, survival of the fittest is not a law applied to individuals, but to groups. Just as it is now postulated that mosquitoes cause their victims to itch and sweat nervously so that other mosquitoes can more easily find the target, most great leaps forward in human evolution—from the formation of clans to the building of cities—are feats of collaborative effort. Better rates of survival are as much a happy side effect of good collaboration as their purpose.
If we could stop relating to meaning and purpose as artifacts of some divine creative act, and see them instead as the yield of our own creative future, they become goals, intentions, and processes very much in reach—rather than the shadows of childlike, superstitious mythology.
The proof is impossible, since it is an unfolding one. Like reaching a horizon, arrival merely necessitates more travel.
I'd like to propose a modified Many Universes theory. Rather than saying that every possible universe exists, I'd say that there is a sequence of possible universes, akin to the drafts of a novel.
I believe, but cannot prove, that memory is inherent in nature. Most of the so-called laws of nature are more like habits.
There is no need to suppose that all the laws of nature sprang into being fully formed at the moment of the Big Bang, like a kind of cosmic Napoleonic code, or that they exist in a metaphysical realm beyond time and space.
Before the general acceptance of the Big Bang theory in the 1960s, eternal laws seemed to make sense. The universe itself was thought to be eternal and evolution was confined to the biological realm. But we now live in a radically evolutionary universe.
If we want to stick to the idea of natural laws, we could say that as nature itself evolves, the laws of nature also evolve, just as human laws evolve over time. But then how would natural laws be remembered or enforced? The law metaphor is embarrassingly anthropomorphic. Habits are less human-centred. Many kinds of organisms have habits, but only humans have laws.
Habits are subject to natural selection; and the more often they are repeated, the more probable they become, other things being equal. Animals inherit the successful habits of their species as instincts. We inherit bodily, emotional, mental and cultural habits, including the habits of our languages.
The habits of nature depend on non-local similarity reinforcement. Through a kind of resonance, the patterns of activity in self-organizing systems are influenced by similar patterns in the past, giving each species and each kind of self-organizing system a collective memory.
Is this just a vague philosophical idea? I believe it can be formulated as a testable scientific hypothesis.
My interest in evolutionary habits arose when I was engaged in research in developmental biology, and was reinforced by reading Charles Darwin, for whom the habits of organisms were of central importance. As Francis Huxley has pointed out, Darwin's most famous book could more appropriately have been entitled The Origin of Habits.
Over the course of fifteen years of research on plant development, I came to the conclusion that for understanding the development of plants, their morphogenesis, genes and gene products are not enough. Morphogenesis also depends on organizing fields. The same arguments apply to the development of animals. Since the 1920s many developmental biologists have proposed that biological organization depends on fields, variously called biological fields, or developmental fields, or positional fields, or morphogenetic fields.
All cells come from other cells, and all cells inherit fields of organization. Genes are part of this organization. They play an essential role. But they do not explain the organization itself. Why not?
Thanks to molecular biology, we know what genes do. They enable organisms to make particular proteins. Other genes are involved in the control of protein synthesis. Identifiable genes are switched on and particular proteins made at the beginning of new developmental processes. Some of these developmental switch genes, like the Hox genes in fruit flies, worms, fish and mammals, are very similar. In evolutionary terms, they are highly conserved. But switching on genes such as these cannot in itself determine form, otherwise fruit flies would not look different from us.
Many organisms live as free cells, including many yeasts, bacteria and amoebas. Some form complex mineral skeletons, as in diatoms and radiolarians, spectacularly pictured in the nineteenth century by Ernst Haeckel. Just making the right proteins at the right times cannot explain such structures without many other forces coming into play, including the organizing activity of cell membranes and microtubules.
Most developmental biologists accept the need for a holistic or integrative conception of living organization. Otherwise biology will go on floundering, even drowning, in oceans of data, as yet more genomes are sequenced, genes are cloned and proteins are characterized.
I suspect that morphogenetic fields work by imposing patterns on the otherwise random or indeterminate patterns of activity. For example they cause microtubules to crystallize in one part of the cell rather than another, even though the subunits from which they are made are present throughout the cell.
Morphogenetic fields are not fixed forever, but evolve. The fields of Afghan hounds and poodles have become different from those of their common ancestors, wolves. How are these fields inherited? I believe, but cannot prove, that they are transmitted by a kind of non-local resonance, and I have suggested the term morphic resonance for this process.
The fields organizing the activity of the nervous system are likewise inherited through morphic resonance, conveying a collective, instinctive memory. The resonance of a brain with its own past states also helps to explain the memories of individual animals and humans.
Social groups are likewise organized by fields, as in schools of fish and flocks of birds. Human societies have memories that are transmitted through the culture of the group, and are most explicitly communicated through the ritual re-enactment of a founding story or myth, as in the Jewish Passover celebration, the Christian Holy Communion and the American Thanksgiving dinner, through which the past becomes present through a kind of resonance with those who have performed the same rituals before.
Others may prefer to dispense with the idea of fields and explain the evolution of organization in some other way, perhaps using more general terms like "emergent systems properties". But whatever the details of the models, I believe that the natural selection of habits will play an essential part in any integrated theory of evolution, including not just biological evolution, but also physical, chemical, cosmic, social, mental and cultural evolution.
I have a belief that modern humans are greatly under-utilising their cognitive capabilities. Finding proof of this, however, would lie in embracing those very same sentient possibilities—visceral hunches—which were possibly part of the world of archaic humans. This enlarged realm of the senses acknowledges reason, but also heeds the grip of the gut, the body poetic.
I also believe that my belief about scientific theories isn't itself scientific. Science itself doesn't decide how it is to be interpreted, whether realistically or not.
That the penetration into unobservable nature is accomplished by way of abstract mathematics is a large part of what makes it mystifying—mystifying enough to be coherently if unpersuasively (at least to me) denied by scientific anti-realists. It's difficult to explain exactly how science manages to do what it is that I believe it does—notoriously difficult when trying to explain how quantum mechanics, in particular, describes unobserved reality. The unobservable aspects of nature that yield themselves to our knowledge must be both mathematically expressible and connected to our observations in requisite ways. The seventeenth-century titans, men like Galileo and Newton, figured out how to do this, how to wed mathematics to empiricism. It wasn't a priori obvious that it was going to work. It wasn't a priori obvious that it was going to get us so much farther into nature's secrets than the Aristotelian teleological methodology it was supplanting. A lot of assumptions about the mathematical nature of the world and its fundamental correspondence to our cognitive modes (a correspondence they saw as reflective of God's friendly intentions toward us) were made by them in order to justify their methodology.
I also believe that since not all of the properties of nature are mathematically expressible—why should they be? It takes a very special sort of property to be so expressible—there are aspects of nature that we will never get to by way of our science. I believe that our scientific theories—just like our formalized mathematical systems (as proved by Gödel)—must be forever incomplete. The very fact of consciousness itself (an aspect of the material world we happen to know about, but not because it was revealed to us by way of science) demonstrates, I believe, the necessary incompleteness of scientific theories.
But I further believe (and cannot prove) that hostility toward religion is an obstacle to progress in psychology. Most human beings live in a world full of magic, miracles, saints, and constant commerce with divinity. Psychology at present has little to say about these parts of life; we focus instead on a small set of topics that are fashionable, or that are particularly tractable with our favorite methods. If psychologists took religious experience seriously and tried to understand it from the inside, as anthropologists do with other cultures, I believe it would enrich our science. I have found religious texts and testimonials about purity and pollution essential for understanding the emotion of disgust.
I cannot prove that electrons exist, but I believe fervently in their existence. And if you don't believe in them, I have a high voltage cattle prod I'm willing to apply as an argument on their behalf. Electrons speak for themselves.
W. DANIEL HILLIS
I know that it sounds corny, but I believe that people are getting better. In other words, I believe in moral progress. It is not a steady progress, but there is a long-term trend in the right direction—a two-steps-forward, one-step-back kind of progress.
ROBERT R. PROVINE
Human Behavior is Unconsciously Controlled.
Until proven otherwise, why not assume that consciousness does not play a role in human behavior? Although it may seem radical on first hearing, this is actually the conservative position that makes the fewest assumptions. The null position is an antidote to philosopher's disease, the inappropriate attribution of rational, conscious control over processes that may be irrational and unconscious. The argument here is not that we lack consciousness, but that we over-estimate the conscious control of behavior. I believe this statement to be true. But proving it is a challenge because it's difficult to think about consciousness. We are misled by an inner voice that generates a reasonable but often fallacious narrative and explanation of our actions. That the beam of conscious awareness that illuminates our actions is on only part of the time further complicates the task. Since we are not conscious of our state of unconsciousness, we vastly overestimate the amount of time that we are aware of our own actions, whatever their cause.
My thinking about unconscious control was shaped by my field studies of the primitive play vocalization of laughter. When I asked people to explain why they laughed in a particular situation, they would concoct some reasonable fiction about the cause of their behavior—"someone did something funny," "it was something she said," "I wanted to put her at ease." Observations of social context showed that such explanations were usually wrong. In clinical settings, such post hoc misattributions would be termed "confabulations," honest but flawed attempts to explain one's actions.
Subjects also incorrectly presumed that laughing is a choice and under conscious control, a reason for their confident, if bogus, explanations of their behavior. But laughing is not a matter of speaking "ha-ha," as we would choose a word in speech. When challenged to laugh on command, most subjects could not do so. In certain, usually playful, social contexts, laughter simply happens. However, this lack of voluntary control does not preclude a lawful pattern of behavior. Laughter appears at those places where punctuation would appear in a transcription of a conversation—laughter seldom interrupts the phrase structure of speech. We may say, "I have to go now—ha-ha," but rarely, "I have to—ha-ha—go now." This punctuation effect is highly reliable and requires the coordination of laughing with the linguistic structure of speech, yet it is performed without the conscious awareness of the speaker. Other airway maneuvers such as breathing and coughing also punctuate speech and are performed without speaker awareness.
The discovery of lawful but unconsciously controlled laughter produced by people who could not accurately explain their actions led me to consider the generality of this situation to other kinds of behavior. Do we go through life listening to an inner voice that provides similar confabulations about the causes of our action? Are essential details of the neurological process governing human behavior inaccessible to introspection? Can the question of animal consciousness be stood on its head and treated in a more parsimonious manner? Instead of considering whether other animals are conscious, or have a different, or lesser consciousness than our own, should we question if our behavior is under no more conscious control than theirs? The complex social order of bees, ants, and termites documents what can be achieved with little, if any, conscious control as we think of it. Is machine consciousness possible or even desirable? Is intelligent behavior a sign of conscious control? What kinds of tasks require consciousness? Answering these questions requires an often counterintuitive approach to the role, evolution and development of consciousness.
MacNamara once proposed that children come to learn about right and wrong, good and evil, in much the same way that they learn about geometry and mathematics. Moral development is not merely cultural learning, and it does not arise from innate principles that have evolved through natural selection. It is not like the development of language or sexual preference or taste in food.
Psychologist, Emeritus Professor, Stanford University; Author, Shyness
I believe that the prison guards at the Abu Ghraib Prison in Iraq, who worked the night shift in Tier 1A, where prisoners were physically and psychologically abused, had surrendered their free will and personal responsibility during these episodes of mayhem.
But I could not prove it in a court of law. These eight army reservists were trapped in a unique situation in which the behavioral context came to dominate individual dispositions, values, and morality to such an extent that they were transformed into mindless actors alienated from their normal sense of personal accountability for their actions—at that time and place.
The "group mind" that developed among these soldiers was created by a set of known social psychological conditions, some of which are nicely featured in Golding's Lord of the Flies. The same processes that I witnessed in my Stanford Prison Experiment were clearly operating in that remote place: Deindividuation, dehumanization, boredom, groupthink, role-playing, rule control, and more. Beyond the relatively benign conditions in my study, in that Iraqi prison, the guards experienced extreme fatigue and exhaustion from working 12-hour shifts, 7 days a week, for over a month at a time with no breaks.
There was fear of being killed by mortar and grenade attacks and by prisoners rioting. There was revenge for buddies killed, and prejudice against these foreigners for their strange religion and cultural traditions. There was encouragement by staff to "soften up" the detainees for interrogation, because Tier 1A was the Interrogation-Soft Torture center of that prison. Already in place when these young men and women arrived there for their tour of duty were abusive practices that had been "authorized" from the top of the chain of command: use of nakedness as a humiliation tactic, sensory and sleep deprivation, stress positions, dog attacks, and more.
In addition to the situational variables and processes operating in that behavioral setting, there was a series of systemic processes that created the barrel in which these good soldiers were forced to live and work. Most of the reports of independent investigation committees cite a failure of leadership, lack of leadership, or irresponsible leadership as factors that contributed to these abuses. Then there was the lack of mission-specific training of the guards, no oversight, no accountability to senior officers, poor resources, overcrowded facilities, and confusing commands from civilian interrogators at odds with the CIA, with military intelligence and other agencies and agents all working in Tier 1A without clear communication channels, amid much confusion.
I was recently an expert witness for the defense of Sgt. Ivan "Chip" Frederick in his Baghdad trial. Before the trial, I spent a day with him, giving him an in-depth interview, checking all background information, and arranging for him to be psychologically assessed by the military. He is one of the alleged "bad apples" who these investigations have labeled as "morally corrupt." What did he bring into that situation and what did that situation bring into him?
He seemed very much to be a normal young American. His psych assessments revealed no sign of any pathology, no sadistic tendencies, and all his psych assessment scores are in the normal range, as is his intelligence. He had been a prison guard at a small minimal security prison where he performed for many years without incident. So there is nothing in his background, temperament, or disposition that could have been a facilitating factor for the abuses he committed at the Abu Ghraib Prison.
After a four-day trial, part of which included my testimony elaborating on the points noted here, the judge took barely one hour to find him guilty on all eight counts and to sentence Sgt. Frederick to 8 years in prison, starting in solitary confinement in Kuwait, along with a dishonorable discharge, reduction in rank from Sgt. to Pvt., and the loss of his 20 years' retirement income and his salary. This military judge held Frederick personally responsible for the abuses, because he had acted out of free will to intentionally harm these detainees: he was not forced into these acts, was not mentally incompetent, and was not acting in self-defense. All of the situational and systemic determinants of his behavior and that of his buddies were disregarded and given a zero weighting coefficient in assessing causal factors.
The real reason for the heavy sentence was the photographic documentation of the undeniable abuses along with the smiling abusers in their "trophy photos." It was the first time in history that such images were publicly available of what goes on in many prisons around the world, and especially in military prisons. They humiliated the military, and the entire chain of command all the way up the ladder to the White House. Following this exposure, investigations of all American military prisons in that area of the world uncovered similar abuses and worse, many murders of prisoners. Recent evidence has revealed that similar abuses started taking place again in Abu Ghraib prison barely one month after these disclosures became public—when the "Evil Eight Culprits" were in other prisons—as prisoners.
Based on more than 30 years of research on "The Lucifer Effect"—the transformation of good people into perpetrators of evil—I believe that there are powerful situational and systemic forces operating on individuals in certain situations that can undercut a lifetime of morality and rationality. The Dionysian aspect of human nature can triumph over the Apollonian, not only during Mardi Gras, but in dynamic group settings like gang rapes, fraternity hazing, mob riots, and in that Abu Ghraib prison. I believe in that truth in general and especially in the case of Sgt. Frederick, but I was not able to prove it in a military court of law.
Strangely, I believe that cockroaches are conscious. That is probably an unappealing thought to anyone who switches on a kitchen light in the middle of the night and finds a family of roaches running for cover. But it's really shorthand for saying that I believe that many quite simple animals are conscious, including more attractive beasts like bees and butterflies.
I can't prove that they are, but I think in principle it will be provable one day, and there's a lot to be gained by thinking about the worlds of these relatively simple creatures, both intellectually and even poetically. I don't mean that they are conscious in even remotely the same way as humans are; if that were true, the world would be a boring place. Rather, the world is full of many overlapping alien consciousnesses.
Why do I think there might be multiple forms of consciousness out there? Before becoming a journalist I spent 10 years and a couple of post-doctoral fellowships getting inside the sensory worlds of a variety of insects, including bees and cockroaches. I was inspired by A Picture Book of Invisible Worlds, a slim out-of-print volume by Jakob von Uexkull (1864-1944).
I spent time studying how honey bees could find their way around my laboratory room (they had learnt to fly in through a small opening in the window) and find a hidden source of sugar. Bees could learn all about the pattern of key features in the room and would show they were confused if objects were moved around while they were out of the room. They were also easily distracted by certain kinds of patterns, particularly ones with lots of points and lines that bore very abstract similarities to the patterns on flowers, as well as by floral scents and by sudden movements that signalled danger. In contrast, when they were busy gorging on the sugar, almost nothing could distract them, making it possible for me to paint a little number on their backs so that I could distinguish individual bees.
To make sense of this ever-changing behaviour, with its shifting focus of attention, I always found it simplest to figure out what was happening by imagining the sensory world of the bee, with its eye extraordinarily sensitive to flicker and to colours we can't see, as a "visual screen," in the same way that I can sit back and "see" my own visual screen of everything happening around me, with sights and sounds coming in and out of prominence. The objects in the bee's world have significances, or "meanings," quite different from our own, which is why its attention is drawn to things we would barely perceive.
That's what I mean by consciousness—the feeling of "seeing" the world and its associations. For the bee, it is the feeling of being a bee. I don't mean that a bee is self-conscious or spends time thinking about itself. But of course the problem of why the bee has its own "feeling" is the same incomprehensible "hard problem" of why the activity of our nervous system gives rise to our own "feelings".
But at least the bee's world is very visual and capable of being imagined. Some creatures live in sensory worlds that are much harder to access. Spiders that hunt at night live in a world dominated by the detection of faint vibrations and of the tiniest flows of air, which allow them to "see" a fly passing by in pitch darkness. Sensory hairs that cover their bodies give them a sensitivity to touch far more finely grained than we can possibly feel through our own skin.
To think this way about simple creatures is not to fall into the anthropomorphic fallacy. Bees and spiders live in their own worlds, in which I don't see human-like motives. Rather it is a kind of panpsychism, which I am quite happy to sign up to, at least until we know a lot more about the origin of consciousness. That may take me out of the company of quite a few scientists who would prefer to believe that a bee with a brain of only a million neurones must surely be a collection of instinctive reactions with some simple switching mechanism between them, rather than have some central representation of what is going on that might be called consciousness. But it leaves me in the company of poets who wonder at the world of even lowly creatures.
wrote the haiku poet Issa.
And as for the cockroaches, they are a little more human than the spiders. Like the owners of the New York apartments who detest them, they suffer from stress and can die from it, even without injury. They are also hierarchical and know their little territories well. When they are running for it, think twice before crushing out another world.
Libbrecht, chairman of the Caltech physics department, is a world expert on ice crystal formation, a hobby project he took on more than twenty years ago precisely because, as he puts it, "there are six billion people on this planet, and I thought that at least one of us should understand how snow crystals form." After two decades of meticulous experimentation inside specially constructed pressurized chambers, Libbrecht believes he has made some headway in understanding how ice crystallizes at the edge of the quasi-liquid layer which surrounds all ice structures. He calls his theory "structure-dependent attachment kinetics," but he is quick to point out that this is far from the ultimate answer. The transition from water to ice is a mysteriously complex process that has engaged minds as brilliant as those of Johannes Kepler and Michael Faraday. Libbrecht hopes he can add the small next step in our knowledge of this wondrous substance that is so central to life itself.
I am not even saying "elsewhere in the universe." If the proposition I believe to be true is to be proved true within a generation or two, I had better limit it to our own galaxy. I will bet on its truth there.
I believe in the existence of life elsewhere because chemistry seems to be so life-striving and because life, once created, propagates itself in every possible direction. Earth's history suggests that chemicals get busy and create life given any old mix of substances that includes a bit of water, and given practically any old source of energy; further, that life, once created, spreads into every nook and cranny over a wide range of temperature, acidity, pressure, light level, and so on.
Believing in the existence of intelligent life elsewhere in the galaxy is another matter. Good luck to the SETI people and applause for their efforts, but consider that microbes have inhabited Earth for at least 75 percent of its history, whereas intelligent life has been around for but the blink of an eye, perhaps 0.02 percent of Earth's history (and for nearly all of that time without the ability to communicate into space). Perhaps intelligent life will have staying power. We don't know. But we do know that microbial life has staying power.
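The percentages above translate into concrete time spans with a quick back-of-envelope calculation; a minimal sketch, assuming the conventional figure of roughly 4.5 billion years for the Earth's age (a number not stated in the passage itself):

```python
# Back-of-envelope conversion of the essay's percentages into years.
# The 4.5-billion-year age of the Earth is an assumed round figure.
earth_age_yr = 4.5e9
microbial_span_yr = 0.75 * earth_age_yr      # microbes: at least 75% of Earth's history
intelligent_span_yr = 0.0002 * earth_age_yr  # intelligent life: roughly 0.02%

print(f"microbial life: at least {microbial_span_yr:.2e} years")
print(f"intelligent life: roughly {intelligent_span_yr:,.0f} years")
```

On these assumptions, microbes have persisted for something like 3.4 billion years, while intelligence accounts for on the order of 900,000 years, which is the "blink of an eye" the passage describes.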
Now to a supposition: that Mars will be found to have harbored life and harbors life no more. If this proves to be the case, it will be an extraordinarily sobering discovery for humankind, even more so than the view of our fragile blue ball from the Moon, even more so than our removal from the center of the universe by Copernicus, Galileo, and Newton—perhaps even more so than the discovery of life elsewhere in the galaxy.
The world of our daily experience—the world of tables, chairs, stars and people, with their attendant shapes, smells, feels and sounds—is a species-specific user interface to a realm far more complex, a realm whose essential character is conscious. It is unlikely that the contents of our interface in any way resemble that realm. Indeed the usefulness of an interface requires, in general, that they do not. For the point of an interface, such as the windows interface on a computer, is simplification and ease of use. We click icons because this is quicker and less prone to error than editing megabytes of software or toggling voltages in circuits. Evolutionary pressures dictate that our species-specific interface, this world of our daily experience, should itself be a radical simplification, selected not for the exhaustive depiction of truth but for the mutable pragmatics of survival.
If this is right, if consciousness is fundamental, then we should not be surprised that, despite centuries of effort by the most brilliant of minds, there is as yet no physicalist theory of consciousness, no theory that explains how mindless matter or energy or fields could be, or cause, conscious experience. There are, of course, many proposals for where to find such a theory—perhaps in information, complexity, neurobiology, neural darwinism, discriminative mechanisms, quantum effects, or functional organization. But no proposal remotely approaches the minimal standards for a scientific theory: quantitative precision and novel prediction. If matter is but one of the humbler products of consciousness, then we should expect that consciousness itself cannot be theoretically derived from matter. The mind-body problem will be to physicalist ontology what black-body radiation was to classical mechanics: first a goad to its heroic defense, later the provenance of its final supersession.
The heroic defense will, I suspect, not soon be abandoned. For the defenders doubt that a replacement grounded in consciousness could attain the mathematical precision or impressive scope of physicalist science. It remains to be seen, of course, to what extent and how effectively mathematics can model consciousness. But there are fascinating hints: According to some of its interpretations, the mathematics of quantum theory is itself, already, a major advance in this project. And perhaps much of the mathematical progress in the perceptual and cognitive sciences can also be so interpreted. We shall see.
The mind-body problem may not fall within the scope of physicalist science, since this problem has, as yet, no bona fide physicalist theory. Its defenders can surely argue that this penury shows only that we have not been clever enough or that, until the right mutation chances by, we cannot be clever enough, to devise a physicalist theory. They may be right. But if we assume that consciousness is fundamental then the mind-body problem transforms from an attempt to bootstrap consciousness from matter into an attempt to bootstrap matter from consciousness. The latter bootstrap is, in principle, elementary: Matter, spacetime and physical objects are among the contents of consciousness.
The rules by which, for instance, human vision constructs colors, shapes, depths, motions, textures and objects, rules now emerging from psychophysical and computational studies in the cognitive sciences, can be read as a description, partial but mathematically precise, of this bootstrap. What we lose in this process are physical objects that exist independent of any observer. There is no sun or moon unless a conscious mind perceives them, for both are constructs of consciousness, icons in a species-specific user interface. To some this seems a patent absurdity, a reductio of the position, readily contradicted by experience and our best science. But our best science, our theory of the quantum, gives no such assurance. And experience once led us to believe the earth flat and the stars near. Perhaps, in due time, mind-independent objects will go the way of flat earth.
This view obviates no method or result of science, but integrates and reinterprets them in its framework. Consider, for instance, the quest for neural correlates of consciousness (NCC). This holy grail of physicalism can, and should, proceed unabated if consciousness is fundamental, for it constitutes a central investigation of our user interface. To the physicalist, an NCC is, potentially, a causal source of consciousness. If, however, consciousness is fundamental, then an NCC is a feature of our interface correlated with, but never causally responsible for, alterations of consciousness. Damage the brain, destroy the NCC, and consciousness is, no doubt, impaired. Yet neither the brain nor the NCC causes consciousness. Instead consciousness constructs the brain and the NCC. This is no mystery. Drag a file's icon to the trash and the file is, no doubt, destroyed. Yet neither the icon nor the trash, each a mere pattern of pixels on a screen, causes its destruction. The icon is a simplification, a graphical correlate of the file's contents (GCC), intended to hide, not to instantiate, the complex web of causal relations.
In a 1757 essay, philosopher David Hume argued that because "the general principles of taste are uniform in human nature" the value of some works of art might be essentially eternal. He observed that the "same Homer who pleased at Athens and Rome two thousand years ago, is still admired at Paris and London." The works that manage to endure over millennia, Hume thought, do so precisely because they appeal to deep, unchanging features of human nature.
Some unique works of art, for example, Beethoven's Pastoral Symphony, possess this rare but demonstrable capacity to excite the human mind across cultural boundaries and through historic time. I cannot prove it, but I think a small body of such works—by Homer, Bach, Shakespeare, Murasaki Shikibu, Vermeer, Michelangelo, Wagner, Jane Austen, Sophocles, Hokusai—will be sought after and enjoyed for centuries or millennia into the future. As much as fashions and philosophies are bound to change, these works will remain objects of permanent value to human beings.
These epochal survivors of art are more than just popular. Works of popular art today are not inevitably shallow or worthless, but they tend to be easily replaceable. In the modern mass art system, artistic forms endure, while individual works drop away. Spy thrillers, romance novels, pop songs, and soap operas are daily replaced by more thrillers, romance novels, pop songs, and soap operas. In fact, the ephemeral nature of mass art seems more pronounced than ever: most popular works are incapable of surviving even a year, let alone a couple of generations. It's different with art's classic survivors: even if they began, as Sophocles' and Shakespeare's did, as works of popular art, they set themselves apart in their durable appeal: nothing kills them. Audiences keep coming back to experience these original works themselves.
Against the idea of permanent aesthetic values is cultural relativism, which is taught as the default orthodoxy in many university departments. Aesthetic values have been widely construed by academics as merely contingent reflections of local social and economic conditions. Beauty, if not in the eye of the beholder, has been misconstrued as merely in the eyes of society, a conditioning that determines values of cultural seeing. Such veins of explanation often include no small amount of cynicism: why do people go to the opera? Oh, to show off their furs. Why are they thrilled by famous paintings? Because they're worth millions. Beneath such explanations is a denial of intrinsic aesthetic merit.
Such aesthetic relativism is decisively refuted, as Hume understood, by the cross-cultural appeal of a small class of art objects over centuries: Mozart packs Japanese concert halls, as Hiroshige does Paris galleries, while new productions of Shakespeare in every major language of the world are endless. And finally, it is beginning to look as though empirical psychology is equipped to address the universality of art. For example, evolutionary psychology is being used by literary scholars to explain the persistent themes and plot devices in fiction. The rendering of faces, bodies, and landscape preferences in art is amenable to psychological investigation. The structure of musical perception is now open to experimental analysis as never before. Poetic experience can be elucidated by the insights of contemporary linguistics. None of this research promises a recipe for creating great art, but it can throw light on what we already know about aesthetic pleasure.
What's going on most days in the Metropolitan Museum and most nights at Lincoln Center involves aesthetic experiences that will be continuously revived and relived by our descendants into an indefinite future. In a way, this makes the creations of the greatest artists as much permanent achievements as the discoveries of the greatest scientists. That much I think I know. The question we should now ask is, What makes this possible? What is it about the highest works of art that gives them eternal appeal?
As a Christian monotheist, I start with two unproven axioms:
Together, these axioms imply my surest conviction: that some of my beliefs (and yours) contain error. We are, from dust to dust, finite and fallible. We have dignity but not deity.
This mix of faith-based humility and skepticism helped fuel the beginnings of modern science, and it has informed my own research and science writing. The whole truth cannot be found merely by searching our own minds, for there is not enough there. So we also put our ideas to the test. If they survive, so much the better for them; if not, so much the worse.
We're living longer, and thinking shorter.
[Disclaimer: Since I'm not a scientist, I'm not even going to attempt to take on something scientific. Rather, I want to talk about something that can't easily be measured, let alone proved.
And second, though what I'm saying may sound gloomy, I love the times we live in. There has never been a time more interesting, more full of things to explain, interesting people to meet, worthy causes to support, challenging problems to solve.]
It's all about time.
I think modern life has fundamentally and paradoxically changed our sense of time. Even as we live longer, we seem to think shorter. Is it because we cram more into each hour? Or because the next person over seems to cram more into each hour?
For a variety of reasons, everything is happening much faster and more things are happening. Change is a constant.
It used to be that machines automated work, giving us more time to do other things. But now machines automate the production of attention-consuming information, which takes our time. For example, if one person sends the same e-mail message to 10 people, then 10 people have to respond.
The physical friction of everyday life—the time it took Isaac Newton to travel by coach from London to Cambridge, the dead spots of walking to work (no iPod), the darkness that kept us from reading—has disappeared, making every minute not used productively into an opportunity cost.
And finally, we can measure more, over smaller chunks of time. From airline miles to calories (and carbs and fat grams), from friends on Friendster to steps on a pedometer, from realtime stock prices to millions of burgers consumed, we count things by the minute and the second.
Unfortunately, this carries over into how we think and plan: Businesses focus on short-term results; politicians focus on elections; school systems focus on test results; most of us focus on the weather rather than the climate. Everyone knows about the big problems, but their behavior focuses on the here and now.
I first noticed this phenomenon in a big way in the US right after 9/11, when it became impossible to schedule an appointment or get anyone to make a commitment. To me, it felt like Russia (where I had been spending time since 1989), where people avoided long-term plans because there was little discernible relationship between effort and result. Suddenly, even in the US, people were behaving like the Russians of those days, reluctant to plan for anything more than a few days out.
Of course, that immediate crisis has passed, but there's still the same sense of unpredictability dogging our thinking in the US (in particular). Best to concentrate on the current quarter, because who knows what job I'll have next year. Best to pass that test, because what I actually learn won't be worth much ten years from now anyway.
How can we reverse this?
It's a social problem, but I think it may also herald a mental one—which I describe as mental diabetes.
Whatever's happening to adults, most of us grew up reading books (at least occasionally) and playing with "uninteractive" toys that required us to make up our own stories, dialogue and behavior for them. Today's children are living in an information-rich, time-compressed environment that often seems to replace a child's imagination rather than stimulate it. I posit that being fed so much processed information—video, audio, images, flashing screens, talking toys, simulated action games—is akin to being fed too much processed, sugar-rich food. It may seriously mess up children's information metabolism and their ability to process information for themselves. In other words, will they be able to discern cause and effect, to put together a coherent story line, to think scientifically?
I don't know the answers, but these questions are worth thinking about, for the long term.
I've spent two decades of my professional life studying human mating. In that time, I've documented phenomena ranging from what men and women desire in a mate to the most diabolical forms of sexual treachery. I've discovered the astonishingly creative ways in which men and women deceive and manipulate each other. I've studied mate poachers, obsessed stalkers, sexual predators, and spouse murderers. But throughout this exploration of the dark dimensions of human mating, I've remained unwavering in my belief in true love.
I believe nothing to be true (clearly real) if it cannot be proved.
In fact I will use clarity (as in "clear reality"), in the place of truth.
I will also invent equivalents for proof and for belief. Proof will be interchangeable with "experimental scientific evidence". Belief is more tricky given that it has to do with complex carbonic life. It can be interchangeable with "theoretical assessment" or "assessment by common sense" (depending on the scale and the available technology). In this process (no doubt in a path full of traps and pitfalls) I have cannibalized the original question to the following:
Now this is hard: there are many theoretical assessments for the explanation of the natural phenomena at the extreme energy scales (from the subnuclear to the supercosmic), that possess a degree of clarity. But all of them are inspired by the vast collection of conciliatory data that scale by scale speak of Nature's works. This is so even for string theory.
So the answer is still...nothing.
Following Bohr's complementarity, I would posit that belief and proof are in some way complementary: if you believe, you don't need proof, and (arguably) if you have proof, you don't need to believe. (I would assign the hard-core string theorists who do not really care about experimental scientific evidence to the first category.)
But Edge wants us to identify the equivalent(s) of the general theory of relativity in today's scientific thinking(s). Or a prediction of what are the big things in science that come at us unexpectedly. In my field, even frameworks that explain the world using extra dimensions of space (in extreme versions) are not unexpected. As a matter of fact we are preparing to discover or exclude them using the data. My hunch (and wish) is that in the laboratory we will be able to segment spacetime so finely that gravity will be studied and understood in a controlled environment, and that gravitational particle physics will be a new field.
Life is ubiquitous throughout the universe. Life on our planet Earth is most likely the result of a panspermic event (a notion popularized by the late Francis Crick).
DNA, RNA and carbon-based life will be found wherever we find water and look with the right tools. Whether we can prove life happens depends on our ability to improve remote sensing and to visit faraway systems. This will also depend on whether we survive as a species for a sufficient period of time. As we have seen recently in the shotgun sequencing of the Sargasso Sea, when we look for life here on Earth with new tools of DNA sequencing we find life in abundance in the microbial world. In sequencing the genetic code of organisms that survive in extremes from zero degrees C to well above the boiling point of water, we begin to understand the breadth of life, including life that can thrive in caustic conditions, from strong acids to basic pHs that would rapidly dissolve human skin. Possible indicators of panspermia are organisms such as Deinococcus radiodurans, which can survive millions of rads of ionizing radiation and complete desiccation for years or perhaps millennia. These microbes can repair any DNA damage within hours of being reintroduced into an aqueous environment.
Our human-centric view of life is clearly unwarranted. From the millions of genes that we have discovered in environmental organisms over the past months, we learn that a finite number of themes are used over and over again and could have easily evolved from a few microbes arriving on a meteor or on intergalactic dust. Panspermia is how life spreads throughout the universe, and we are contributing to it from Earth by launching billions of microbes into space.
I believe that life is common throughout the universe and that we will find another Earth-like planet within a decade.
The mathematics alone ought to be proof enough for most people (billions of galaxies with billions of stars in each galaxy, and around most of those stars are planets). The numbers suggest that for life not to exist elsewhere in the universe is the unlikely scenario. But there is more to this idea than a good chance. We've now found more than 130 planets just looking at nearby stars in our tiny little corner of the Milky Way. The results suggest there are uncountable numbers of planets in our galaxy alone. Some of them are likely to be earthlike, or at least earth-sized, although the vast majority that we've found so far are huge gas giants like Jupiter and Saturn, which are unlikely to harbor life. Furthermore, there were four news events this year that made the discovery of life elsewhere extraordinarily more likely.
First, the NASA Mars rover Opportunity found incontrovertible evidence that a briny (salty) sea once covered the area where it landed, called Meridiani Planum. The only question about life on Mars now is whether that sea, which was there twice in Martian history, existed long enough for life to form. The Phoenix mission in 2008 may answer that question.
A team of astrophysicists reported in July that radio emissions from Sagittarius B2, a nebula near the center of the Milky Way, indicate the presence of aldehyde molecules, the prebiotic stuff of life. Aldehydes help form amino acids, the fundamental components of proteins. The same scientists previously reported clouds of other organic molecules in space, including glycolaldehyde, a simple sugar. Outer space is thus full of complex molecules—not just atoms—necessary for life. Comets in other solar systems could easily deposit such molecules on planets, as they may have done in our solar system with Earth.
Fourth, astronomers are not only getting good at finding new planets around other stars; the resolution of the newest telescopes is now good enough that they can see the dim light from some newly found planets. Meanwhile, even better telescopes are being built, like the large binocular scope on Mt. Graham in Arizona, which will see more planets. With light we can analyze the spectrum a new planet reflects and determine what's on that planet—like water. Water, we also recently discovered, is abundant in space in large clouds between and near stars.
So everything life needs is out there. For it not to come together somewhere else as it did on earth is remarkably unlikely. In fact, although there are Goldilocks zones in galaxies where life as we know it is most likely to survive (there's too much radiation towards the center of the Milky Way, for example), there are almost countless galaxies out there where conditions could be ripe for life to evolve. This is a golden age of astrophysics and we're going to find life elsewhere.
My argument is not based so much on the scientific evidence—because there isn't very much of it, and what little there is has either found no effect or is statistically dubious. Instead, it is based on a historical analogy with previous scares about overhead power lines and cathode-ray computer monitors (VDUs). Both were also thought to be dangerous, yet years of research—decades in the case of power lines—failed to find conclusive evidence of harm.
Mobile phones seem to me to be the latest example of what has become a familiar pattern: anecdotal evidence suggests that a technology might be harmful, and however many studies fail to find evidence of harm, there are always calls for more research.
Physicist and Nobel Laureate; Director Emeritus, Fermilab; Coauthor, The God Particle
My friend, the theoretical physicist, believed so strongly in String Theory, "It must be true!" He was called to testify in a lawsuit, which contested the claims of String Theory against Quantum Loop Gravity. The lawyer was skeptical. "What makes you such an authority?" he asked. "Oh, I am without question the world's most outstanding theoretical physicist", was the startling reply. It was enough to convince the lawyer to change the subject. However, when the witness came off the stand, he was surrounded by protesting colleagues.
"How could you make such an outrageous claim?" they asked. The theoretical physicist defended, "Fellows, you just don't understand; I was under oath."
To believe without knowing it cannot be proved (yet) is the essence of physics. Guys like Einstein, Dirac, Poincaré, etc. extolled the beauty of concepts, in a bizarre sense, placing truth at a lower level of importance. There are enough examples that I resonated with the arrogance of my theoretical masters who were in effect saying that God, a.k.a. the Master, Der Alte, may have, in her fashioning of the universe, made some errors in favoring a convenient truth over a breathtakingly wondrous mathematics. This inelegant lack of confidence has heretofore always proved hasty. Thus, when the long respected law of mirror symmetry was violated by weakly interacting but exotic particles, our pain at the loss of simplicity and harmony was greatly alleviated by the discovery of the failure of particle-antiparticle symmetry. The connection was exciting because the simultaneous reflection in a mirror and change of particles to antiparticles seemed to restore a new and more powerful symmetry—"CP" symmetry now gave us a connection of space (mirror reflection) and electric charge. How silly of us to have lost confidence in the essential beauty of nature!
The renewed confidence remained even when it turned out that "CP" was also imperfectly respected. "Surely," we now believe, "there is in store some spectacular, new, unforeseen splendor in all of us." She will not let us down. This we believe, even though we can't prove it.
There is no such thing as the paranormal and the supernatural; there is only the normal and the natural and mysteries we have yet to explain.
What separates science from all other human activities is its belief in the provisional nature of all conclusions. In science, knowledge is fluid and certainty fleeting. That is the heart of its limitation. It is also its greatest strength. There are, from this ultimate unprovable assertion, three additional insoluble derivatives.
In conclusion, I believe, but cannot prove...that reality exists and science is the best method for understanding it, there is no God, the universe is determined but we are free, morality evolved as an adaptive trait of humans and human communities, and that ultimately all of existence is explicable through science.
Money Manager and Science Philanthropist
The great breakthrough will involve a new understanding of time...that moving through time is not free, and that consciousness itself will be seen to be only a time sensor, adding to the other sensors of light and space.
When I first read your question, I was sure it was a trick—after all, almost nothing I believe in I can prove. I believe the earth is round, but I cannot prove it, nor can I prove that the earth revolves around the sun or that the naked fig tree in the garden will have leaves in a few months. I can't prove quarks exist or that there was a Big Bang—all of these and millions of other beliefs are based on faith in a community of knowledge whose proofs I am willing to accept, hoping they will accept on faith the few measly claims to proof I might advance.
But then I realized—after reading some of the early postings—that everyone else has assumed implicitly that the "you" in "even if you cannot prove it" referred not to the individual respondent, but to the community of knowledge—it actually stood for "one" rather than for "you". That everyone seems to have understood this seems to me a remarkable achievement, a merging of the self with the collective that only great religions and profound ideologies occasionally achieve.
So what do I believe that no one else can prove? Not much, although I do believe in evolution, including cultural evolution, which means that I tend to trust ancient beliefs about good and bad, the sacred and the profane, the meaningful and the worthless—not because they are amenable to proof, but because they have been selected over time and in different situations, and therefore might be worthy of belief.
As to the future, I will follow the cautious weather forecaster who announces: "Tomorrow will be a beautiful day, unless it rains." In other words, I can see all sorts of potentially wonderful developments in human consciousness, global solidarity, knowledge and ethics; however, there are about as many trends operating towards opposite outcomes: a coarsening of taste, reduction to least common denominator, polarization of property, power, and faith. I hope we will have the time and opportunity to understand which policies lead to which outcomes, and then that we will have the motivation and the courage to implement the more desirable alternatives.
Quantum mechanics must then be an approximate description of a more fundamental physical theory. There must then be hidden variables, which are averaged over to derive the approximate, probabilistic description which is quantum theory. We know from the experimental falsifications of the Bell inequalities that any theory which agrees with quantum mechanics on a range of experiments where it has been checked must be non-local. Quantum mechanics is non-local, as are all proposals for replacing it with something that makes more sense. So any additional hidden variables must be non-local. But I believe we can say more. I believe that the hidden variables represent relationships between the particles we do see, which are hidden because they are non-local and connect widely separated particles.
This fits in with another core belief of mine, which derives from general relativity, which is that the fundamental properties of physical entities are a set of relationships, which evolve dynamically. There are no intrinsic, non-relational properties, and there is no fixed background, such as Newtonian space and time, which exists just to give things properties.
One consequence of this is that the geometry of space and time is also only an approximate, emergent description, applicable only on scales too large to see the fundamental degrees of freedom. The fundamental relations are non-local with respect to the approximate notion of locality that emerges at the scale where it becomes sensible to talk about things located in a geometry.
Putting these together, we see that quantum uncertainty must be a residue of the resulting non-locality, which restricts our ability to predict the future of any small region of the universe. Hbar, the fundamental constant of quantum mechanics that measures the quantum uncertainty, is related to N, the number of degrees of freedom in the universe. A reasonable conjecture is that hbar is proportional to the inverse of the square root of N.
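In symbols, the conjecture stated here is simply:

```latex
\hbar \propto \frac{1}{\sqrt{N}}
```

so that in the limit of infinitely many degrees of freedom the quantum uncertainty would vanish.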
But how are we to describe physics, if it is not in terms of things moving in a fixed spacetime? Einstein struggled with this, and my only answer is the one he came to near the end of his life: fundamental physics must be discrete, and its description must be in terms of algebra and combinatorics.
Finally, what of time? I have also been unable to make sense of any of the proposals to do away with time as a fundamental aspect of our description of nature. So I believe in time, in the sense of causality. I also doubt that the "big bang" is the beginning of time; I strongly suspect that our history extends backwards before the big bang.
I believe that in the near future, we will be able to make predictions based on these ideas that will be tested in real experiments.
I believe that systems of self-interested agents can make progress on their own without centralized supervision.
And scientists will understand why we can't force ourselves to fall asleep or to "be creative"—and how those two facts are related. They'll understand why so many people report being most creative while driving, shaving or doing some other activity that keeps the mind's foreground occupied and lets it approach open problems in a "low focus" way. In short, they'll understand the mind as an integrated dynamic process that changes over a day and a lifetime, but is characterized always by one continuous spectrum.
Here's what we know about the cognitive spectrum: every human being traces out some version of the spectrum every day. You're most capable of analysis when you are most awake. As you grow less wide-awake, your thinking grows more concrete. As you start to fall asleep, you begin to free associate. (Cognitive psychologists have known for years that you begin to dream before you fall asleep.) We know also that to grow up intellectually means to trace out the cognitive spectrum in reverse: infants and children think concretely; as they grow up, they're increasingly capable of analysis. (Not incidentally, newborns spend nearly all their time asleep.)
Here's what we suspect about the cognitive spectrum: as you move down-spectrum, as your thinking grows less analytic and more concrete and finally bottoms out in the wholly non-logical, highly concrete type of thought we call dreaming, emotions function increasingly as the "glue" of thought. I can't prove (but I believe) that "emotion coding" explains the problem of analogy. Scientists and philosophers have knocked their heads against this particular brick wall for years: how can people say "a brick wall and a hard problem seem wholly different, yet I can draw an analogy between them"? If we knew that, we'd understand the essence of creativity. The answer is: we are able to draw an analogy between two seemingly unlike things because the two are associated in our minds with the same emotion, and that emotion acts as a connecting bridge between them. Each memory comes with a characteristic emotion; similar emotions allow us to connect two otherwise-unlike memories. An emotion (NB!) isn't the crude, simple thing we make it out to be in speaking or writing—"happy," "sad," etc.; an emotion can be the delicate, complex, nuanced, inexpressible feeling you get on the first warm day in spring.
And here's what we don't know: what's the physiological mechanism of the cognitive spectrum? What's the genetic basis? Within a generation, we'll have the answers.
I believe neuroscientists will never have enough understanding of the neural code, the secret language of the brain, to read people's thoughts without their consent.
The neural code is the software, algorithm, or set of rules whereby the brain transforms raw sensory data into perceptions, memories, decisions, meanings. A complete solution to the neural code could, in principle, allow scientists to monitor and manipulate minds with exquisite precision; you might, for example, probe the mind of a suspected terrorist for memories of past attacks or plans for future ones. The problem is, although all brains operate according to certain general principles, each person's neural code is to a certain extent idiosyncratic, shaped by his or her unique life history.
The neural pattern that underpins my concept of "George Bush" or "Heathrow Airport" or "surface-to-air missile" differs from yours. The only way to know how my brain encodes this kind of specific information would be to monitor its activity—ideally with thousands or even millions of implanted electrodes, which can detect the chatter of individual neurons—while I tell you as precisely as possible what I am thinking. But data you glean from studying me will be of no use for interpreting the signals of any other person. For ill or good, our minds will always remain hidden to some extent from Big Brother.
This is possible because "illness" is a response. A rise in body temperature, for example, kills many bacteria and changes the membrane properties of cells so viruses cannot replicate. The pain of a broken bone or weak heart makes sure we let it heal or rest. Nature in this way supplied our bodies with a first-aid kit, but unfortunately, as with many medicines, its "treatments" are unpleasant. That unpleasantness, not the dysfunction it seeks to remedy, is what we call "illness".
These remedies, however, have costs as well as benefits, making it often difficult for the body to know whether to deploy them. A fever might fight an infection, but if the body lacks sufficient energy stores, the fever might kill. The body therefore must decide whether the gain of clearing the infection merits the risk. Complicating that decision is that the body is blind, for example, to whether it faces a mild or a life-threatening virus. The body thus deploys its treatments in a precautionary manner. If only one in ten fevers actually clears an infection that would kill, it makes sense to tolerate the cost of the other nine. Most of the body's capacities for fighting disease and repairing injury are deployed in this precautionary way. We feel pain in a broken limb so we treat it overprotectively—on nine occasions out of ten we could get by with less protective pain, but on the tenth it stops us causing further injury. But precautionary deployment is costly. Evolution therefore has put the evaluation of such deployment under the control of the brain in an attempt to keep its use to a minimum.
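The one-in-ten reasoning here is a simple expected-value comparison, and can be made explicit in a short sketch; the numeric weights below are purely illustrative assumptions, not figures from the essay:

```python
def deploy_fever(p_lethal, benefit_if_lethal, cost_of_fever):
    """Precautionary rule: mount the fever when expected benefit exceeds cost."""
    return p_lethal * benefit_if_lethal > cost_of_fever

# Suppose only 1 fever in 10 fights an otherwise-lethal infection, and
# surviving such an infection is worth far more than a fever's cost
# (illustrative units, not from the text):
print(deploy_fever(p_lethal=0.1, benefit_if_lethal=100.0, cost_of_fever=1.0))  # True
```

On these assumptions the expected benefit (0.1 × 100 = 10) dwarfs the fever's cost, so tolerating nine "wasted" fevers is the rational precaution.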
But the brain on its own often lacks the experience to know our own condition. Fortunately, other people can know it, particularly those who have studied health and illness.
Human evolution therefore changed illness by offloading decisions about deployment, whenever possible, onto professionals. People who make themselves experienced in disease and injury, after all, have the background knowledge to know our bodies much better than we do ourselves. Healing professionals—healers, shamans, witch doctors and medics—exist in all human cultures. Of course, such professionals were seen by their patients as offering real treatments—and a few did help, such as by advising rest, eating well and some medicinal herbs. But most of what they did was ineffective. Doctors indeed had to wait until 1908 and Paul Ehrlich's discovery of Salvarsan for treating syphilis before they had a really effective treatment for a major disease. Nonetheless, earlier doctors and healers were considered by themselves and their patients to be in possession of very powerful cures.
Why? The answer, I believe, is that their ineffective rituals and potions actually worked. Evolution prepared us to offload control of our abilities to fight disease and heal injuries to those who knew more than us. The rituals and quackery of healers might not have worked, but they certainly made a patient feel they were in the hands of an expert. That gave a healer great power over their patient. As noted, many of the body's own "treatments" are used on a precautionary basis, so they can be stopped without harm. A healer could do this by applying an impressive "cure" that persuaded the body that its own "treatments" were no longer needed. The body would trust its healer, halt its own efforts, and so end the "illness". The patient as a result would feel much better, if not cured. Human evolution therefore made doctoring more than just a science and a question of prescribing the right treatment. It made it also an art by which a doctor persuades the patient's body to offload its decision making onto them.
The first story will be about global integration, about the dynamical self-organization of long-range binding operations in the human brain. It will probably involve something like synchrony in multiple frequency bands, and will let us understand how a unified model of the world can emerge in our own heads.
The second story will be about "transparency": Why is it that we are unable to consciously experience most of the images our brain generates as images? The answer to this question will give us a real world. The transparency-tale has to do with not being able to see earlier processing stages and becoming a naive realist.
The third story will focus on the Now, the emergence of a psychological moment—on a deeper understanding of what William James called the "specious present". Experts on short-term memory and neural network modelers will tell this story for us. As it unfolds, it will explain the emergence of a subjective present and let us understand how conscious experience, in its simplest and most essential form, is the presence of a world.
Interestingly, today almost everybody in the consciousness community already agrees on some version of the fourth story: Consciousness is directly linked to attentional processing, more precisely, to a hidden mechanism constantly holding information available for attention. The subjective presence of a world is a clever strategy of making integrated information available for attention.
I believe, but cannot prove, that this will allow us to find the global neural correlate for consciousness. However, being a philosopher, I want much more than that—I am also interested in precise concepts. What I will be waiting for is the young mathematician who then comes along and suddenly allows us to see how all of these four stories were actually only one: the genius who gives us a formal model describing the information flow in this neural correlate, and in just the right way. She will harvest the fruits of generations of researchers before her, and this will be the First Breakthrough on Consciousness.
Then three things will happen.
When considering this question one has to remember the basis of the scientific method: formulating hypotheses that can be disproved. Those hypotheses that are not disproved are held to be true until they are disproved. Since it is more glamorous for a scientist to formulate hypotheses than it is to spend years disproving existing ones from other scientists, and since it is unlikely that someone will spend enough time and energy trying to disprove his or her own statements, our body of scientific knowledge is surely full of statements we believe to be true but that will eventually be proved false.
So I turn the question around: what scientific ideas that have not been disproved do you believe are false?
In my field (theoretical economics), I believe that most ideas taught in Economics 101 will eventually be proved false. Most of them would already have been officially classified as false in any harder science, but for lack of better hypotheses they are still widely accepted and used in economics and general commentary. Eventually, someone will come up with another type of hypothesis that explains (and predicts) economic reality in a way that renders most existing economic beliefs false.
I believe that we will find ways to circumvent the speed of light as a limit on the communication of information.
Is there a fourth law of thermodynamics, or some cousin of it, concerning self-constructing non-equilibrium systems such as biospheres, anywhere in the cosmos?
I like to think there may be such a law.
Consider this: the number of possible proteins 200 amino acids long is 20 raised to the 200th power, or about 10 raised to the 260th power. Now, the number of particles in the known universe is about 10 to the 80th power. Suppose that on a microsecond timescale the universe were doing nothing other than producing proteins of length 200. It turns out that it would take vastly many repeats of the history of the universe to create all possible proteins of length 200. This means that, for entities of complexity above atoms (modestly complex organic molecules, proteins, let alone species, automobiles and operas), the universe is on a unique trajectory (ignoring quantum mechanics for the moment). That is, the universe at modest levels of complexity and above is vastly non-ergodic.
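The arithmetic behind this non-ergodicity claim is easy to check; the sketch below uses the essay's own figures (10^80 particles, a microsecond production timescale) plus a rough 10^17 seconds for the age of the universe:

```python
import math

AMINO_ACIDS = 20   # distinct amino acids
LENGTH = 200       # protein length considered in the text

# Number of distinct proteins of length 200 is 20**200; work in log10.
log10_proteins = LENGTH * math.log10(AMINO_ACIDS)
print(f"log10(possible proteins) ~ {log10_proteins:.1f}")  # ~260.2

# Even if each of the ~10**80 particles made one protein per microsecond
# for ~10**17 seconds (~10**23 microseconds), the total output is tiny:
log10_attempts = 80 + 23
print(f"log10(proteins producible) ~ {log10_attempts}")
print(f"shortfall (orders of magnitude): {round(log10_proteins - log10_attempts)}")
```

The shortfall of more than 150 orders of magnitude is why even wholesale repeats of the universe's history could not exhaust the space of length-200 proteins.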
Now conceive of the "adjacent possible": the set of entities that are one "step" away from what exists now. For chemical reaction systems, the adjacent possible from a set of compounds already existing (called the "actual") is just the set of novel compounds that can be produced by single chemical reactions among the initial "actual" set. Now, the biosphere has been expanding into its molecular adjacent possible for roughly four billion years.
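As a toy illustration of this definition, the sketch below computes the adjacent possible of an "actual" set; the compound names and reactions are invented for the example, not drawn from the essay:

```python
# The "actual": compounds that currently exist (names invented for the sketch).
actual = {"A", "B", "C"}

# One-step reactions: (frozenset of reactants, set of products).
reactions = [
    (frozenset({"A", "B"}), {"D"}),
    (frozenset({"B", "C"}), {"E", "F"}),
    (frozenset({"D", "E"}), {"G"}),  # needs compounds that are not yet actual
]

def adjacent_possible(actual, reactions):
    """Novel compounds producible by a single reaction among the actual set."""
    novel = set()
    for reactants, products in reactions:
        if reactants <= actual:               # every reactant already exists
            novel |= set(products) - actual   # keep only genuinely new compounds
    return novel

print(adjacent_possible(actual, reactions))  # D, E and F; G is two steps away
```

Once the biosphere actually makes D and E, G enters the adjacent possible in turn, which is the expansion the essay describes.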
Before life, there were perhaps a few hundred organic-molecule species on the earth. Now there are perhaps a trillion or more. We have no law governing this expansion into the adjacent possible in this non-ergodic process. My hoped-for law is that biospheres everywhere in the universe expand as fast as is possible while maintaining the rough diversity of what already exists. Otherwise stated, the diversity of things that can happen next increases, on average, as fast as it can.
If computers are made up of hardware and software, transistors and resistors, what are neural machines we know as minds made up of?
Minds clearly are not made up of transistors and resistors, but I firmly believe that at least one of the most basic elements of computation is shared by man and machine: the ability to represent information in terms of an abstract, algebra-like code.
In a computer, this means that software is made up of hundreds, thousands, even millions of lines that say things like IF X IS GREATER THAN Y, DO Z, or CALCULATE THE VALUE OF Q BY ADDING A, B, AND C. The same kind of abstraction seems to underlie our knowledge of linguistics. For instance, the famous linguistic dictum that a Sentence consists of a Noun Phrase plus a Verb Phrase can apply to an infinite number of possible nouns and verbs, not just a few familiar words. In its open-endedness, it is an example of mental algebra par excellence.
In my lab, we discovered that even infants seem to be able to grasp something quite similar. For example, in the course of just two minutes, a seven-month-old baby can extract the ABA "grammar" inherent in a set of made-up sentences like la ta la, ga na ga, je li je, or the ABB "grammar" in sentences like la ta ta, ga na na, je li li.
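What such pattern extraction amounts to can be sketched in a few lines; the syllable strings come from the text, but the classifier itself is only an illustration of the algebra-like abstraction, not the lab's actual procedure:

```python
def grammar_of(sentence):
    """Classify a three-syllable sentence as ABA or ABB by positional identity."""
    a, b, c = sentence.split()
    if a == c and a != b:
        return "ABA"
    if b == c and a != b:
        return "ABB"
    return "other"

# Training items from the text:
for s in ["la ta la", "ga na ga", "je li je"]:
    assert grammar_of(s) == "ABA"
for s in ["la ta ta", "ga na na", "je li li"]:
    assert grammar_of(s) == "ABB"

# The abstraction generalizes to novel syllables, as in the infant studies:
print(grammar_of("wo fe wo"))  # ABA
```

The point of the abstraction is exactly this generalization: the rule refers to positions (A, B), not to any particular syllables, so it applies to items never heard before.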
Of course, this experiment doesn't prove that there is an "algebra" circuit in the brain—psychological techniques alone can't do that. For final proof, we'll need neuroscientific techniques far more sophisticated than contemporary brain imaging, such that we can image the brain at the level of interactions between individual neurons. But every bit of evidence that we can collect now—from babies, from toddlers, from adults, from psychology and from linguistics—seems to confirm the idea that algebra-like abstraction is a fundamental component of thought.
I believe it is true that if there is intelligent life elsewhere in the universe, of whatever form, it will be familiar with the same concept of counting numbers.
Some philosophers believe that pure mathematics is human-specific and that it is possible for an entirely different type of mathematics to emerge from a different type of intelligence, a type of mathematics that has nothing in common with ours and may even contradict it. But it is difficult to think of what sort of life-form would not need the counting numbers. The stars in the sky are discrete points and cry out to be counted by beings throughout the universe, but alien life-forms may not have vision.
But sooner or later, whether it is to measure the passing of time, the magnitude of distance, or the density of one Jovian being compared with another, numbers will have to be used. And if numbers are used, 2 + 2 must always equal 4; the number of stars in the Pleiades brighter than magnitude 5.7 will always be 11, which will always be a prime number; and two measurements of the speed of light in any units, made under identical conditions, will always be identical. Of course, the fact that I find it difficult to think of beings which won't need our sort of mathematics doesn't mean they don't exist, but that's what I believe without proof.
There is no God that has existence apart from people's thoughts of God. There is certainly no Being that can simply suspend the (nomological) laws of the universe in order to satisfy our personal or collective yearnings and whims—like a stage director called on to change and improve a play. But there is a mental (cognitive and emotional) process common to science and religion of suspending belief in what you see and take for obvious fact. Humans have a mental compulsion—perhaps a by-product of the evolution of a hyper-sensitive reasoning device to serve our passions—to situate and understand the present state of mundane affairs within an indefinitely extendable and overarching system of relations between hitherto unconnected elements. In any event, what drives humanity forward in history is this quest for non-apparent truth.
In 1936, shortly after the outbreak of the Spanish Civil War, the moribund philosopher Miguel de Unamuno, author of the classic existential text Tragic Sense of Life, died alone in his office of heart failure at the age of 72.
Unamuno was no religious sentimentalist. As a rector and Professor of Greek at the University of Salamanca, he was an advocate of rationalist ideals and even died a folk hero for openly denouncing Francisco Franco's fascist regime. He was, however, ridden with a 'spiritual' burden that troubled him nearly all his life. It was the problem of death. Specifically, the problem was his own death, and what, subjectively, it would be "like" for him after his own death: "The effort to comprehend it causes the most tormenting dizziness." I've taken to calling this dilemma "Unamuno's paradox" because I believe that it is a universal problem. It is, quite simply, the materialist understanding that consciousness is snuffed out by death coming into conflict with the human inability to simulate the psychological state of death.
Of course, adopting a parsimonious stance allows one to easily deduce that we as corpses cannot experience mental states, but this theoretical proposition can only be justified by a working scientific knowledge (i.e., that the non-functioning brain is directly equivalent to the cessation of the mind). By stating that psychological states survive death, or even alluding to this possibility, one is committing oneself to a radical form of mind-body dualism. Consider how bizarre it truly is: Death is seen as a transitional event that unbuckles the body from its ephemeral soul, the soul being the conscious personality of the decedent and the once animating force of the now inert physical form. This dualistic view sees the self as being initially contained in bodily mass, as motivating overt action during this occupancy, and as exiting or taking leave of the body at some point after its biological expiration. So what, exactly, does the brain do if mental activities can exist independently of the brain? After all, as John Dewey put it, mind is a verb, not a noun.
And yet this radicalism is especially common. In the United States alone, as much as 95% of the population reportedly believes in life after death. How can so many people be wrong? Quite easily, if you consider that we're all operating with the same standard, blemished psychological hardware. It's tempting to argue, as Freud did, that it's just people's desire for an afterlife that's behind it all. But it would be a mistake to leave it at that. Although there is convincing evidence showing that emotive factors can be powerful contributors to people's belief in life after death, whatever one's motivations for rejecting or endorsing the idea of an immaterial soul that can defy physical death, the ability to form any opinion on the matter would be absent if not for our species' expertise at differentiating unobservable minds from observable bodies.
But here's the rub. The materialist version of death is the ultimate killjoy null hypothesis. The epistemological problem of knowing what it is "like" to be dead can never be resolved. Nevertheless, I think that Unamuno would be proud of recent scientific attempts to address the mechanics of his paradox. In a recent study, for example, I reported that when adult participants were asked to reason about the psychological abilities of a protagonist who had just died in an automobile accident, even participants who later classified themselves as "extinctivists" (i.e., those who endorsed the statement "what we think of as the 'soul,' or conscious personality of a person, ceases permanently when the body dies") nevertheless stated that the dead person knew that he was dead. When asked whether the dead protagonist knew that he was dead (a feat demanding, of course, ongoing cognitive abilities), one young extinctivist's answer was almost comical. "Yeah, he'd know, because I don't believe in the afterlife. It is non-existent; he sees that now." Try as he might to be a good materialist, this subject couldn't help but be a dualist.
How do I explain these findings? Like reasoning about one's past mental states during dreamless sleep or while in other somnambulistic states, consciously representing a final state of non-consciousness poses formidable, if not impassable, cognitive constraints. By relying on simulation strategies to derive information about the minds of dead agents, you would in principle be compelled to "put yourself into the shoes" of such organisms, which is of course an impossible task. These constraints may lead to a number of telltale errors, namely Type I errors (inferring mental states when in fact there are none), regarding the psychological status of dead agents. Several decades ago, the developmental psychologist Gerald Koocher described, for instance, how a group of children tested on death comprehension reflected on what it might be like to be dead "with references to sleeping, feeling 'peaceful,' or simply 'being very dizzy.'" More recently, my colleague David Bjorklund and I found evidence that younger children are more likely to attribute mental states to a dead agent than are older children, which is precisely the opposite pattern that one would expect to find if the origins of such beliefs could be traced exclusively to cultural learning.
It seems that the default cognitive stance is reasoning that human minds are immortal; the steady accretion of scientific facts may throw off this stance a bit, but, as Unamuno found out, even science cannot answer the "big" question. Don't get me wrong. Like Unamuno, I don't believe in the afterlife. Recent findings have led me to believe that it's all a cognitive illusion churned up by a psychological system specially designed to think about unobservable minds. The soul is distinctly human all right. Without our evolved capacity to reason about minds, the soul would never have been. But in this case, the proof isn't in the empirical pudding. It can't be. It's death we're talking about, after all.
I believe, but can't prove, that human language evolved from a combination of gesture and innate vocalizations, via the concomitant evolution of mirror neurons, and that birds will provide the best model for language evolution.
We are good at fitting explanations to the past, all the while living in the illusion of understanding the dynamics of history.
My claim is about the severe overestimation of knowledge in what I call the "ex post" historical disciplines, meaning almost all of social science (economics, sociology, political science) and the humanities, everything that depends on the non-experimental analysis of past data. I am convinced that these disciplines do not provide much understanding of the world, or even of their own subject matter; they mostly fit a nice-sounding narrative that caters to our desire (even need) to have a story. The implications run quite against conventional wisdom. You do not gain much by reading the newspapers, history books, analyses, and economic reports; all you get is misplaced confidence about what you know. The difference between a cab driver and a history professor is only cosmetic: the latter can simply express himself better.
There is convincing but only partial empirical evidence of this effect. The evidence can be seen only in disciplines that offer both quantitative data and quantitative predictions by the experts, such as economics. Economics and finance are an empiricist's dream, as we have a goldmine of data for such testing. In addition, there are plenty of "experts," many of whom make more than a million a year, who provide forecasts and publish them for the benefit of their clients. Just check their forecasts against what happens afterward. Their projections fare hardly better than random, meaning that their "stories" are convincing and beautiful to listen to but do not seem to help you more than listening to, say, a Chicago cab driver. This extends to inflation, growth, interest rates, the balance of payments, etc. (While someone may argue that the forecasts might themselves affect these variables, the mechanism of the "self-canceling prophecy" can be taken into account.) Now consider that we depend on these people for governmental economic policy!
This implies that reading the newspapers will not make the slightest difference to your understanding of what can happen in the economy or the markets. Impressive tests of the effect of news on prices were done by the financial empiricist Victor Niederhoffer in the 1960s and have been repeated since, with the same results.
If you look closely at the data to check the reasons for this inability to see things coming, you will find that these people tend to guess the regular events (though quite poorly), but they miss the large deviations, the "unusual" events that carry large impacts. These outliers make a disproportionately large contribution to the total effect.
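The outsized weight of these rare events is easy to demonstrate numerically. The following sketch is my own toy illustration, not anything from the text: it draws ten thousand "moves" from a heavy-tailed Pareto distribution (the tail exponent 1.2 is an arbitrary choice) and measures what share of the total comes from the largest one percent of draws.

```python
import random

random.seed(1)

# Draw daily "market moves" from a heavy-tailed (Pareto) distribution.
# With tail exponent 1.2 the mean exists but the variance does not, so
# a handful of draws dominate the sum -- a stand-in for the rare,
# high-impact events the essay calls outliers.
moves = [random.paretovariate(1.2) for _ in range(10_000)]

moves.sort(reverse=True)
total = sum(moves)
top_1pct = sum(moves[:100])  # the 100 largest of 10,000 moves

share = top_1pct / total
print(f"Top 1% of days account for {share:.0%} of the total effect")
```

Under a thin-tailed (e.g. Gaussian) distribution the top one percent would contribute only a few percent of the total; here it contributes a large fraction, which is the whole point about missing the unusual events.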
Now I am convinced, though I cannot prove it quantitatively, that such overestimation can be generalized to anything where people offer a narrative-style story built from past information, without experimentation. The difference is that the economists got caught because we have the data (and the techniques to check the quality of their knowledge), while historians, news analysts, biographers, and "pundits" can hide a little longer. Basically, historians might get a small trend right here and there, but they missed the big events of the past centuries and, I am convinced, will not see much coming in the future. It was said that "the wise see things coming." To me the wise are the ones who know that they can't see things coming.
I believe the human race will never decide that an advanced computer possesses consciousness. Only in science fiction will a person be charged with murder if they unplug a PC. I believe this because I hold, but cannot yet prove, that in order for an entity to be conscious and possess a mind, it has to be a living being.
Being alive, of course, does not guarantee the presence of a mind. For example, a plant carries on the metabolic functions necessary to be alive but still does not possess a mind. A chimpanzee, on the other hand, is a different story. All the behavioral features we share with chimps in addition to life, such as intelligence, the ability to deceive, mirror self-recognition, and some individual social identity, make chimps seem so much like us that many in the scientific community intuitively grant chimps "beinghood" and
It is about the anticipation of the moment and the memory of the moment, but not the moment.
In German there is a beautiful little word for it: "Vorfreude", which still is a shade different from "delight" or "pleasure" or even "anticipation". It is the "Pre-Delight", the "Before-Joy", or as a little linguistic concoction: the "ForeFun"; in a single word trying to express the relationship of time, the pleasure of waiting for the moment to arrive, the can't wait moments of elation, of hoping for some thing, some one, some event to happen.
Whether it's on a small scale like that special taste of your favorite food, waiting to see a loved one, that one moment in a piece of music, a sequence in a movie....or the larger versions: the expectation of a beautiful vacation, the birth of a baby, your acceptance of an Oscar.
We have been told by wise men, Dalais and Maharishis that it is supposedly all about those moments, to cherish the second it happens and never mind the continuance of time...
But for me, since early childhood days, I realized somehow: the beauty lies in the time before, the hope for, the waiting for, the imaginary picture painted in perfection of that instant in time. And then, once it passes, in the blink of an eye, it will be the memory which really stays with you, the reflection, the remembrance of that time. Cherish the thought..., remember how....
Nothing ever is as beautiful as its abstraction through the rose-colored glasses of anticipation... The toddler's hope for Santa Claus on Christmas Eve turns out to be a fat guy with a fashion issue. Waiting for the first kiss can give you waves of emotional shivers up your spine, but when it then actually happens, it's a bunch of molecules colliding, a bit of a mess, really. It is not the real moment that matters. In Anticipation the moment will be glorified by innocence, not knowing yet. In Remembrance the moment will be sanctified by memory filters, not knowing any more.
In the Zen version, trying to uphold the beauty of the moment in that moment is in my eyes a sad undertaking. Not so much because it can't be done; all manner of techniques have been put forth for how to be a happy human by mastering the art of it. But because it implies, by definition, that all those other moments live just as much under the spotlight: the mundane, the lame, the gross, the everyday routines of dealing with life's mere mechanics.
In the Then version, it is quite the opposite: the long phases before and after last hundreds or thousands of times longer than the moment, and drown out the everyday humdrum entirely.
Bluntly put: spend your life in the eternal bliss of always having something to hope for, something to wait for, plans not realized, dreams not come true.... Make sure you have new points on the horizon, that you purposely create. And at the same time, relive your memories, uphold and cherish them, keep them alive and share them, talk about them.
Make plans and take pictures.
I have no way of proving such a lofty philosophical theory, but I greatly anticipate the moment that I might... and once I have done it, I will, most certainly, never forget.
Second, one of our shared core systems centers on a notion that is false: the notion that members of different human groups differ profoundly in their concepts and values. This notion leads us to interpret the superficial differences between people as signs of deeper differences. It has quite a grip on us: Many people would lay down their lives for perfect strangers from their own community, while looking with suspicion at members of other communities. And all of us are apt to feel a special pull toward those who speak our language and share our ethnic background or religion, relative to those who don't.
Third, the most striking feature of human cognition stems not from our core knowledge systems but from our capacity to rise above them. Humans are capable of discovering that our core conceptions are false, and of replacing them with truer ones. This change has happened dramatically in the domain of astronomy. Core capacities to perceive, act on, and reason about the surface layout predispose us to believe that the earth is a flat, extended surface on which gravity acts as a downward force. This belief has been decisively overturned, however, by the progress of science. Today, every child who plays computer games or watches Star Wars knows that the earth is one sphere among many, and that gravity pulls all these bodies toward one another.
Together, my three beliefs suggest a fourth. If the cognitive sciences are given sufficient time, the truth of the claim of a common human nature eventually will be supported by evidence as strong and convincing as the evidence that the earth is round. As humans are bathed in this evidence, we will overcome our misconceptions of human differences. Ethnic and religious rivalries and conflicts will come to seem as pointless as debates over the turtles that our pancake earth sits upon, and our common need for a stable, sustainable environment for all people will be recognized. But this fourth belief is conditional. Our species is caught in a race between the progress of our science and the escalation both of our intergroup conflicts and of the destructive means to pursue them. Will humans last long enough for our science to win this race?
Twenty-two percent of Americans claim to be certain that Jesus will return to earth to judge the living and the dead sometime in the next fifty years. Another twenty-two percent believe that he is likely to do so. The problem that most interests me at this point, both scientifically and socially, is the problem of belief itself. What does it mean, at the level of the brain, to believe that a proposition is true? The difference between believing and disbelieving a statement—Your spouse is cheating on you; you've just won ten million dollars—is one of the most potent regulators of human behavior and emotion. The instant we accept a given representation of the world as true, it becomes the basis for further thought and action; rejected as false, it remains a string of words.
What I believe, though cannot yet prove, is that belief is a content-independent process. Which is to say that beliefs about God—to the degree that they are really believed—are the same as beliefs about numbers, penguins, tofu, or anything else. This is not to say that all of our representations of the world are acquired through language, or that all linguistic representations are on the same logical footing. And we know that different regions of the brain are involved in judging the truth-value of statements drawn from different content domains. What I do believe, however, is that the neural processes that govern the final acceptance of a statement as "true" rely on more fundamental, reward-related circuitry in our frontal lobes—probably the same regions that judge the pleasantness of tastes and odors. Truth may be beauty, and beauty truth, in more than a metaphorical sense. And false statements may, quite literally, disgust us.
Once the neurology of belief becomes clear, and it stands revealed as an all-purpose emotion arising in a wide variety of contexts (often without warrant), religious faith will be exposed for what it is: a humble species of terrestrial credulity. We will then have additional, scientific reasons to declare that mere feelings of conviction are not enough when it comes time to talk about the way the world is. The only thing that guarantees that (sufficiently complex) beliefs actually represent the world is the chain of evidence and argument linking them to the world. Only on matters of religious faith do sane men and women regularly dispute this fact. Apart from removing the principal reason we have found to kill one another, a revolution in our thinking about religious belief would clear the way for new approaches to ethics and spiritual experience. Both ethics and spirituality lie at the very heart of what is good about being human, but our thinking on both fronts has been shackled to the preposterous for millennia. Understanding belief at the level of the brain may hold the key to new insights into the nature of our minds, to new rules of discourse, and to new frontiers of human cooperation.
That our ability to perceive signals in the environment evolved directly from our bacterial ancestors. That is, we, like all other mammals, including our apish brothers, detect odors, distinguish tastes, hear bird song and drum beats, and we too feel the vibrations of the drums. With our eyes closed we detect the light of the rising sun. These abilities to sense our surroundings are a heritage that preceded the evolution of all primates, all vertebrate animals, indeed all animals. Such sensitivities to wafting plant scents, tasty salted mixtures, police cruiser sirens, loving touches and star light register because of our "sensory cells".
These avant-garde cells of the nasal passages, the taste buds, the inner ear, the touch receptors in the skin and the retinal rods and cones all have in common the presence at their tips of projections ("cell processes") called cilia. Cilia have a recognizable fine structure. Under a very high-power ("electron") microscope one sees a precise array of protein tubules: nine, exactly nine, pairs of tubules arranged in a circle, with two singlet tubules at the center of the array. All sensory cells have this common feature, whether in the light-sensitive retina of the eye or the balance-sensitive semicircular canals of the inner ear. Cross-section slices of the tails of human, mouse and even insect (fruit-fly) sperm all share this same instantly recognizable structure. Why this peculiar pattern? No one knows for sure, but it provides the evolutionist with a strong argument for common ancestry. The size (diameter) of the circle (0.25 micrometers) and of the constituent tubules (0.024 micrometers) aligned in the circle is identical in the touch receptors of the human finger and the taste buds of the elephant.
What do I feel that I know, in the sense of Oscar Wilde's quip that "even true things can be proved"?
Not only that the sensory cilia derive from these exact 9-fold symmetrical structures in protists such as the "waving feet" of the paramecium or the tail of the vaginal-itch protist called Trichomonas vaginalis. Indeed, all biologists agree with the claim that sperm tails and all these forms of sensory cilia share a common ancestry.
But I go much farther. I think that the common ancestor of the cilium, but not of the rest of the cell, was a free-swimming entity, a skinny snake-like bacterium that, 1500 million years ago, squiggled through muds in a frantic search for food. Attracted by some smells and repelled by others, these bacteria, by themselves, already enjoyed a repertoire of sensory abilities that remain with their descendants to this day. In fact, this bacterial ancestor of the cilium never went extinct; rather, some of its descendants are uncomfortably close to us today. This hypothetical bacterium, ancestor to all the cilia, was no ordinary rod-shaped little dot.
No, this bacterium, which still has many living relatives, entered into symbiotic partnerships with other, very different kinds of bacteria. Together the two-component partnership swam and stuck together, and both partners persisted. What kind of bacterium became an attached symbiont that impelled its partner forward? None other than a squirming spirochete bacterium.
The spirochete group of bacteria includes many harmless mud-dwellers, but it also contains a few scary freaks: the treponeme of syphilis and the borrelias of Lyme disease. We animals got our exquisite ability to sense our surroundings—to tell light from dark, noise from silence, motion from stillness and fresh water from brackish brine—from a kind of bacterium whose relatives we despise. Cilia were once free agents, but they became an integral part of all animal cells. Even though the concept that cilia evolved from spirochetes has not been proved, I think it is true. Not only is it true but, given the powerful new techniques of molecular biology, I think the hypothesis will be conclusively proved. In the not-too-distant future people will wonder why so many scientists were so against my idea for so long!
Why is there scientific law at all?
We physicists explain the origin and structure of matter and energy, but not the laws that do this. Does the idea of causation apply to where the laws themselves came from? Even Alan Guth's "free lunch" gives us the universe after the laws start acting. We have narrowed down the range of field theories that can yield the big bang universe we live in, but why do the laws that govern it seem to be constant in time, and always at work?
One can imagine a universe in which laws are not truly lawful. Talk of miracles does just this, when God is supposed to make things work. Physics aims to find The Laws and hopes that these will be uniquely constrained, as when Einstein wondered if God had any choice when He made the universe. One fashionable escape hatch from this asserts that there are infinitely many universes, each sealed off from the others, which can obey any sort of law one can imagine, with parameters or assumptions changed. This "multiverse" view represents the failure of our grand agenda, of course, and seems to me contrary to Occam's Razor—solving our lack of understanding by multiplying unseen entities into infinity.
Perhaps it is a similar philosophical failure of imagination to think, as I do, that when we see order, there is usually an ordering principle. But what can constrain the nature of physical law? Evolution gave us our ornately structured biosphere, and perhaps a similar principle operates in selecting universes. Perhaps our universe arises, then, from selection for intelligences that can make fresh universes, perhaps in high-energy physics experiments. Or near black holes (as Lee Smolin supposed), where space-time gets contorted into plastic forms that can make new space-times. Then an Ur-universe that had intelligence could make others, and this reproduction, with perhaps slight variation in "genetics", drives the evolution of physical law.
Selection arises because only firm laws can yield the constant, benign conditions needed to form new life. Ed Harrison had similar ideas. Once life forms realize this, they could intentionally make more smart universes with the right, fixed laws, to produce ever more grand structures. There might be observable consequences of this prior evolution. If so, then we are an inevitable consequence of the universe, mirroring intelligences that have come before, in some earlier universe that deliberately chose to create more sustainable order. The fitness of our cosmic environment is then no accident. If we find evidence of fine-tuning in the Dyson and Rees sense, is this evidence for such views?
A large body of experimental findings, clinical findings, and phenomenal reports can be explained within a coherent framework by the neuronal structure and dynamics of my theoretical model. In addition, the model accurately predicts many classical illusions and perceptual anomalies. So I believe that the neuronal mechanisms and systems that I have proposed provide a true explanation for many important aspects of human cognition and phenomenal experience. But I can't prove it. Of course, competing theories about the brain, cognition, and consciousness can't be proved either. Providing the evidence is the best we can do—I think.
The first two are familiar: natural selection, which selects for fitness, and sexual selection, which selects for sexiness.
The third process selects for beauty, but not sexual beauty—not adult beauty. The ones doing the selecting weren't potential mates: they were parents. Parental selection, I call it.
What gave me the idea was a passage from a book titled Nisa: The Life and Words of a !Kung Woman, by the anthropologist Marjorie Shostak. Nisa was about fifty years old when she recounted to Shostak, in remarkable detail, the story of her life as a member of a hunter-gatherer group.
One of the incidents described by Nisa occurred when she was a child. She had a brother named Kumsa, about four years younger than herself. When Kumsa was around three, and still nursing, their mother realized she was pregnant again. She explained to Nisa that she was planning to "kill"—that is, abandon at birth—the new baby, so that Kumsa could continue to nurse. But when the baby was born, Nisa's mother had a change of heart. "I don't want to kill her," she told Nisa. "This little girl is too beautiful. See how lovely and fair her skin is?"
Standards of beauty differ in some respects among human societies; the !Kung are lighter-skinned than most Africans and perhaps they pride themselves on this feature. But Nisa's story provides an insight into two practices that used to be widespread and that I believe played an important role in human evolution: the abandonment of newborns that arrived at inopportune times (this practice has been documented in many human societies by anthropologists), and the use of aesthetic criteria to tip the scales in doubtful cases.
Coupled with sexual selection, parental selection could have produced certain kinds of evolutionary changes very quickly, even if the heartbreaking decision of whether to rear or abandon a newborn was made in only a small percentage of births. The characteristics that could be affected by parental selection would have to be apparent even in a newborn baby. Two such characteristics are skin color and hairiness.
Parental selection can help to explain how the Europeans, who are descended from Africans, developed white skin over such a short period of time. In Africa, a cultural preference for light skin (such as Nisa's mother expressed) would have been counteracted by other factors that made light skin impractical. But in less sunny Europe, light skin may actually have increased fitness, which means that all three selection processes might have worked together to produce the rapid change in skin color.
Parental selection coupled with sexual selection can also account for our hairlessness. In this case, I very much doubt that fitness played a role; other mammals of similar size—leopards, lions, zebras, gazelles, baboons, chimpanzees, and gorillas—get along fine with fur in Africa, where the change to hairlessness presumably took place. I believe (though I cannot prove it) that the transition to hairlessness took place quickly, over a short evolutionary time period, and involved only Homo sapiens or its immediate precursor.
It was a cultural thing. Our ancestors thought of themselves as "people" and thought of fur-bearing creatures as "animals," just as we do. A baby born too hairy would have been distinctly less appealing to its parents.
If I am right that the transition to hairlessness occurred very late in the sequence of evolutionary changes that led to us, then this can explain two of the mysteries of paleoanthropology: the survival of the Neanderthals in Ice Age Europe, and their disappearance about 30,000 years ago.
I believe, though I cannot prove it, that Neanderthals were covered with a heavy coat of fur, and that Homo erectus, their ancestor, was as hairy as the modern chimpanzee. A naked Neanderthal could never have made it through the Ice Age. Sure, he had fire, but a blazing hearth couldn't keep him from freezing when he was out on a hunt. Nor could a deerskin slung over his shoulders, and there is no evidence that Neanderthals could sew. They lived mostly on game, so they had to go out to hunt often, no matter how rotten the weather. And the game didn't hang around conveniently close to the entrance to their cozy cave.
The Neanderthals disappeared when Homo sapiens, who by then had learned the art of sewing, took over Europe and Asia. This new species, descended from a southern branch of Homo erectus, was unique among primates in being hairless. In their view, anything with fur on it could be classified as "animal"—or, to put it more bluntly, game. Neanderthal disappeared in Europe for the same reason the woolly mammoth disappeared there: the ancestors of the modern Europeans ate them. In Africa today, hungry humans eat the meat of chimpanzees and gorillas.
At present, I admit, there is insufficient evidence either to confirm or disconfirm these suppositions. However, evidence to support my belief in the furriness of Neanderthals may someday be found. Everything we currently know about this species comes from hard stuff like rocks and bones. But softer things, such as fur, can be preserved in glaciers, and the glaciers are melting. Someday a hiker may come across the well-preserved corpse of a furry Neanderthal.
So, science is a relationship between what we can represent and are able to think about, and "what's out there": it's an extension of good map making, most often using various forms of mathematics as the mapping languages. When we guess in science we are guessing about approximations and mappings to languages, we are not guessing about "the truth" (and we are not in a good state of mind for doing science if we think we are guessing "the truth" or "finding the truth"). This is not at all well understood outside of science, and there are unfortunately a few people with degrees in science who don't seem to understand it either.
Sometimes in math one can guess a theorem that can be proved true. This is a useful process even if one's batting average is less than .500. Guessing in science is done all the time, and the difference between what is real and what is true is not a big factor in the guessing stage, but makes all the difference epistemologically later in the process.
One corner of computing is a kind of mathematics (other corners include design, engineering, etc.). But there are very few interesting actual proofs in computing. A good Don Knuth quote is: "Beware of bugs in the above code; I have only proved it correct, not tried it."
An analogy for why this is so is to the n-body problems (and other chaotic-systems behaviors) in physics. An explosion of degrees of freedom (three bodies and gravity is enough) makes a perfectly deterministic model impossible to solve analytically for a future state. However, we can compute any future state by brute-force simulation and see what happens. By analogy, we'd like to prove useful programs correct, but we either have intractable degrees of freedom or, as in the Knuth quote, it is very difficult to know whether we've actually gathered all the cases when we do a "proof".
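The n-body analogy can be made concrete in a few lines. No closed-form solution exists for three gravitating bodies, yet stepping the deterministic equations of motion forward is routine. This is my own minimal sketch, not from the text; the masses, starting positions, time step, and softening constant are arbitrary illustrative choices.

```python
# Brute-force simulation of three gravitating bodies in the plane.
# We cannot solve the future state analytically, but we can simply
# march the deterministic equations forward and see what happens.

G = 1.0        # gravitational constant (toy units)
DT = 0.001     # time step
EPS = 1e-4     # softening term to avoid division by zero at close passes

masses = [1.0, 1.0, 1.0]
pos = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]]
vel = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]

def accelerations(pos):
    """Pairwise Newtonian gravity on each body (softened)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + EPS) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

# Semi-implicit Euler: kick velocities, then drift positions.
for step in range(10_000):
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * DT
        vel[i][1] += acc[i][1] * DT
        pos[i][0] += vel[i][0] * DT
        pos[i][1] += vel[i][1] * DT

print("positions after 10 time units:", pos)
```

The point of the analogy: the rules here are perfectly deterministic and trivially simple, yet the only way to learn the state at t = 10 is to run it, just as the only way to know whether a program works is, usually, to try it.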
So a guess in computing is often architectural or a collection of "covering heuristics". An example of the latter is TCP/IP, which has allowed the world's largest and most scalable artifact, the Internet, to be successfully built. An example of the former is the guess I made in 1966 about objects: not that one could build everything from objects (something that could be proved mathematically), but that using objects would be a much better way to represent most things. This is not very provable but, like the Internet, now has quite a body of evidence suggesting it was a good guess.
Another guess I made long ago—one that does not yet have a body of evidence to support it—is that what is special about the computer is analogous to, and an advance on, what was special about writing and then printing. It's not the automation of past forms that has the big impact; as McLuhan pointed out, when you are able to change the nature of representation and argumentation, those who learn these new ways will wind up being qualitatively different and better thinkers, and this will (usually) help advance our limited conceptions of civilization.
This still seems like a good guess to me—but "truth" has nothing to do with it.
I do not believe that people are capable of rational thought when it comes to making decisions in their own lives. People believe that they are behaving rationally and have thought things out, of course, but when major decisions are made (who to marry, where to live, what career to pursue, what college to attend), people's minds simply cannot cope with the complexity. When they try to rationally analyze potential options, their unconscious, emotional thoughts take over and make the choice for them.
As an example of what I mean, consider a friend of mine who was told by his father-in-law to select a boat as a wedding present. He chose a very peculiar boat, which caused a real rift between him and his bride. She had expected a luxury cruiser, which is what his father-in-law had intended. Instead he selected a very rough boat that he could fashion as he chose. As he was an engineer, his primary concern was how it would handle the open ocean, and he made sure the engines were special ones that could be easily gotten at and that the boat rode very low in the water. When he was finished he had created a very functional but very ugly and uncomfortable boat.
Now I have ridden with him on his boat many times. Always he tells me about its wonderful features that make it a rugged and very useful boat. But the other day, as we were about to start a trip, he started talking about how pretty he thought his boat was, how he liked the wood, the general placement of things, and the way the rooms fit together. I asked him if he was describing a boat that he had been familiar with as a child and suggested that maybe this boat was really a copy of some boat he knew as a kid. He said, after some thought, that that was exactly the case: there had been a boat like it in his childhood and he had liked it a great deal.
While he was arguing with his father-in-law, his wife, and nearly everyone he knew about his boat, defending his decision with all the logic he could muster, destroying the very conceptions of boats they had in mind, the simple truth was that his unconscious mind was ruling the decision-making process. It wanted what it knew and loved; too bad for the conscious, which had to figure out how to explain this to everybody else.
Of course, psychoanalysts have made a living on trying to figure out why people make the decisions they do. The problem with psychoanalysis is that it purports to be able to cure people. This possibility I doubt very much. Freud was a doctor, so I guess he got paid to fix things and got carried away. But his view of the unconscious basis of decision making was essentially correct. We do not know how we decide things, and in a sense we don't really care. Decisions are made for us by our unconscious; the conscious is in charge of making up reasons for those decisions that sound rational. We can, on the other hand, think rationally about the choices that other people make. We can do this because we do not know and are not trying to satisfy their unconscious needs and childhood fantasies. As for making good decisions in our lives, when we do it is mostly random. We are always operating with too little information consciously and way too much unconsciously.
Neutrinos, once in thermal equilibrium, were supposedly freed from their bonds to other particles about two seconds after the Big Bang. Since then they should have been roaming undisturbed through intergalactic space, some 200 of them in every cubic centimeter of our Universe, altogether a billion of them for every single atom. Their presence is noted indirectly in the Universe's expansion. However, though they are presumably by far the most numerous type of material particle in existence, not a single one of those primordial neutrinos has ever been detected. It is not for want of trying, but the necessary experiments are almost unimaginably difficult. And yet those neutrinos must be there. If they are not, our whole picture of the early Universe will have to be totally reconfigured.
Wolfgang Pauli's original 1930 proposal of the neutrino's existence was so daring he didn't publish it. Enrico Fermi's brilliant 1934 theory of how neutrinos are produced in nuclear events was rejected for publication by Nature magazine as being too speculative. In the 1950s neutrinos were detected in nuclear reactors and soon afterwards in particle accelerators. Starting in the 1960s, an experimental tour de force revealed their existence in the solar core. Finally, in 1987 a ten-second burst of neutrinos was observed radiating outward from a supernova collapse that had occurred almost 200,000 years ago. When they reached the Earth and were observed, one prominent physicist quipped that extra-solar neutrino astronomy "had gone in ten seconds from science fiction to science fact". These are some of the milestones of 20th century neutrino physics.
In the 21st century we eagerly await another one, the observation of neutrinos produced in the first seconds after the Big Bang. We have been able to identify them, infer their presence, but will we be able to actually see these minute and elusive particles? They must be everywhere around us, even though we still cannot prove it.
If we are faced with a puzzling experimental result, we first try harder to understand it with currently available theory, using more clever ways to apply that theory. If that really doesn't work, we try to improve or perhaps even replace the theory. We never conclude that a not-yet understood result is in principle un-understandable.
While some philosophers might draw a different conclusion—see the contribution by Nicholas Humphrey—as a scientist I strongly believe that Nature is understandable. And such a belief can neither be proved nor disproved.
Undoubtedly, the notion of what counts as "understandable" will continue to change. What physicists consider to be understandable now is very different from what had been regarded as such one hundred years ago. For example, quantum mechanics tells us that repeating the same experiment will give different results. The discovery of quantum mechanics led us to relax the rigid requirement of a deterministic objective reality to a statistical agreement with a not fully determinable reality. Although at first sight such a restriction might seem to limit our understanding, we have in fact gained a far deeper understanding of matter through the use of quantum mechanics than we could possibly have obtained using only classical mechanics.
If our thoughts and consciousness do not depend on the actual substances in our brains but rather on the structures, patterns, and relationships between parts, then Tinkertoy minds could think. If you could make a copy of your brain with the same structure but using different materials, the copy would think it was you. This seemingly materialistic approach to mind does not diminish the hope of an afterlife, of transcendence, of communion with entities from parallel universes, or even of God. Even Tinkertoy minds can dream, seek salvation and bliss—and pray.
What happens? People say I'm lying! They say it's impossible and so I must be deluding myself to preserve my theory. And what can I do or say to challenge them? I have no idea—other than to suggest that other people try the exercise, demanding as it is.
When you look at some of the proofs that have been developed in the last fifty years or so, using incredibly complicated reasoning that can stretch into hundreds of pages or more, certainty is even harder to maintain. Most mathematicians (including me) believe that Andrew Wiles proved Fermat's Last Theorem in 1994, but did he really? (I believe it because the experts in that branch of mathematics tell me they do.)
In late 2002, the Russian mathematician Grigori Perelman posted on the Internet what he claimed was an outline for a proof of the Poincaré Conjecture, a famous, century-old problem of the branch of mathematics known as topology. After examining the argument for two years now, mathematicians are still unsure whether it is right or not. (They think it "probably is.")
Or consider Thomas Hales, who has been waiting for six years to hear if the mathematical community accepts his 1998 proof of astronomer Johannes Kepler's 360-year-old conjecture that the most efficient way to pack equal-sized spheres (such as cannonballs on a ship, which is how the question arose) is to stack them in the familiar pyramid-like fashion that greengrocers use to stack oranges on a counter. After examining Hales' argument (part of which was carried out by computer) for five years, in the spring of 2003 a panel of world experts declared that, whereas they had not found any irreparable error in the proof, they were still not sure it was correct.
With the idea of proof so shaky—in practice—even in mathematics, answering this year's Edge question becomes a tricky business. The best we can do is come up with something that we believe but cannot prove to our own satisfaction. Others will accept or reject what we say depending on how much credence they give us as a scientist, philosopher, or whatever, generally basing that decision on our scientific reputation and record of previous work. At times it can be hard to avoid the whole thing degenerating into a slanging match. For instance, I happen to believe, firmly, that staples of popular science books and breathless TV specials such as ESP and morphic resonance are complete nonsense, but I can't prove they are false. (Nor, despite their repeated claims to the contrary, have the proponents of those crackpot theories proved they are true, or even worth serious study, and if they want the scientific community to take them seriously then the onus is very much on them to make a strong case, which they have so far failed to do.)
Once you recognize that proof is, in practical terms, an unachievable ideal, even the old mathematician's standby of Gödel's Incompleteness Theorem (which on first blush would allow me to answer the Edge question with a statement of my belief that arithmetic is free of internal contradictions) is no longer available. Gödel's theorem showed that you cannot prove an axiomatically based theory like arithmetic is free of contradiction within that theory itself. But that doesn't mean you can't prove it in some larger, richer theory. In fact, in standard axiomatic set theory, you can prove arithmetic is free of contradictions. And personally, I buy that proof. For me, as a living, human mathematician, the consistency of arithmetic has been proved—to my complete satisfaction.
So to answer the Edge question, you have to take a common sense approach to proof—in this case proof being, I suppose, an argument that would convince the intelligent, professionally skeptical, trained expert in the appropriate field. In that spirit, I could give any number of specific mathematical problems that I believe are true but cannot prove, starting with the famous Riemann Hypothesis. But I think I can be of more use by using my mathematician's perspective to point out the uncertainties in the idea of proof. Which I believe (but cannot prove) I have.
Professor: Well I'm glad to hear that you're interested. What did you do?
Student: I flipped this coin 1,000 times. You remember, you taught us that the probability to flip heads is one half. I figured that meant that if I flip 1,000 times I ought to get 500 heads. But it didn't work. I got 513. What's wrong?
Professor: Yeah, but you forgot about the margin of error. If you flip a certain number of times then the margin of error is about the square root of the number of flips. For 1,000 flips the margin of error is about 30. So you were within the margin of error.
Student: Ah, now I get it. Every time I flip 1,000 times I will always get something between 470 and 530 heads. Every single time! Wow, now that's a fact I can count on.
Professor: No, no! What it means is that you will probably get between 470 and 530.
Student: You mean I could get 200 heads? Or 850 heads? Or even all heads?
Professor: Probably not.
Student: Maybe the problem is that I didn't make enough flips. Should I go home and try it 1,000,000 times? Will it work better?
Professor: Probably.
Student: Aw come on Prof. Tell me something I can trust. You keep telling me what probably means by giving me more probablies. Tell me what probability means without using the word probably.
Professor: Hmmm. Well how about this: It means I would be surprised if the answer were outside the margin of error.
Student: My god! You mean all that stuff you taught us about statistical mechanics and quantum mechanics and mathematical probability: all it means is that you'd personally be surprised if it didn't work?
Professor: Well, uh...
If I were to flip a coin a million times I'd be damn sure I wasn't going to get all heads. I'm not a betting man but I'd be so sure that I'd bet my life or my soul. I'd even go the whole way and bet a year's salary. I'm absolutely certain the laws of large numbers—probability theory—will work and protect me. All of science is based on it. But, I can't prove it and I don't really know why it works. That may be the reason why Einstein said, "God doesn't play dice." It probably is.
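The professor's square-root margin is easy to check empirically. The sketch below (the seed and trial count are arbitrary choices of mine) repeats the student's 1,000-flip experiment a few thousand times; about 95 percent of the runs land within sqrt(1000), roughly 32 heads, of 500, because sqrt(n) is close to two standard deviations of the head count.

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility only

def run_experiments(flips=1000, trials=2000):
    """Count heads in each of many independent 1,000-flip experiments."""
    return [sum(random.getrandbits(1) for _ in range(flips))
            for _ in range(trials)]

heads = run_experiments()
margin = 1000 ** 0.5            # the professor's margin of error, ~31.6
within = sum(abs(h - 500) <= margin for h in heads) / len(heads)
print(f"fraction within 500 +/- {margin:.0f}: {within:.2f}")  # roughly 0.95
```

The simulation illustrates the professor's predicament perfectly: it shows most runs staying inside the margin, but "most" is itself only a probable statement about the next run.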
Well, of course, it is tempting to go for something like, "That the wheel, agriculture, and the Macarena were all actually invented by yetis." Or to do the sophomoric pseudo-ironic logic twist of, "That every truth can eventually be proven." Or to get up my hackles, draw up to my full height and intone, "Sir, we scientists believe in nothing that cannot be proven by the whetstone of science, verily our faith is our lack of faith," and then go off in a lab coat and a huff.
The first two aren't worth the words, and the third just isn't so. No matter how many times we read Arrowsmith, scientists are subjective humans operating in an ostensibly objective business, so there's probably a lot that we take on faith.
So mine would be a fairly simple, straightforward case of an unjustifiable belief, namely that there is no god(s) or such a thing as a soul (whatever the religiously inclined of the right persuasion mean by that word). I'm very impressed, moved, by one approach of people on the other side of the fence. These are the believers who argue that it would be a disaster, would be the very work of Beelzebub, for it to be proven that god exists. What good would religiosity be if it came with a transparently clear contract, instead of requiring the leap of faith into an unknowable void?
So I'm taken with religious folks who argue that you not only can, but should believe without requiring proof. Mine is to not believe without requiring proof. Mind you, it would be perfectly fine with me if there were a proof that there is no god. Some might view this as a potential public health problem, given the number of people who would then run damagingly amok. But it's obvious that there's no shortage of folks running amok thanks to their belief. So that wouldn't be a problem and, all things considered, such a proof would be a relief—many physicists, especially astrophysicists, seem weirdly willing to go on about their communing with god about the Big Bang, but in my world of biologists, the god concept gets mighty infuriating when you spend your time thinking about, say, untreatably aggressive childhood leukemia.
Finally, just to undo any semblance of logic here, I might even continue to believe there is no god, even if it was proven that there is one. A religious friend of mine once said to me that the concept of god is very useful, so that you can berate god during the bad times. But it is clear to me that I don't need to believe that there is a god in order to berate him.
I am a mathematician, so I give a precise answer to this question. Thanks to Kurt Gödel, we know that there are true mathematical statements that cannot be proved. But I want a little more than this. I want a statement that is true, unprovable, and simple enough to be understood by people who are not mathematicians. Here it is.
This year, researching the languages of Indonesia for an upcoming book, I happened to find out about a few very obscure languages spoken on one island that are much simpler than one would expect.
Most languages are much, much more complicated than they need to be. They take on needless baggage over the millennia simply because they can. So, for instance, most languages of Indonesia have a good number of prefixes and/or suffixes. Their grammars often force the speaker to attend to nuances of difference between active and passive much more than European languages do, etc.
But here were a few languages that had no prefixes or suffixes at all, nor any of the tones found in many languages of the world. For one thing, languages that have been around forever yet have no prefixes, suffixes, or tones are very rare worldwide. And where we do find them, they form whole little subfamilies, related variations on one another. Here, though, is a handful of small languages that contrast bizarrely with hundreds of surrounding relatives.
One school of thought about how language changes says that this kind of thing just happens by chance. But my work has been showing me that contrasts like this are due to sociohistory. Saying that naked languages like this are spoken alongside ones as bedecked as Italian is rather like saying that kiwis are flightless just "because," rather than because their environment divested them of the need to fly.
But for months I scratched my head over these languages. Why just them? Why there?
So isn't it interesting that the island these languages are spoken on is none other than Flores, which has had its fifteen minutes of fame this year as the site where skeletons of the "little people" were found. Anthropologists have hypothesized that this was a different species of Homo. While the skeletons date back 13,000 years or more, local legend recalls "little people" living alongside modern humans, ones who had some kind of language of their own and could "repeat back" in modern humans' language.
The legends suggest that the little people only had primitive language abilities, but we can't be sure here: to the untutored layman who hasn't taken any twentieth-century anthropology or linguistics classes, it is easy to suppose that an incomprehensible language is merely babbling.
Now, I can only venture this highly tentatively. But what I "know" but cannot prove this year is this: the reason languages like Keo and Ngada are so strangely streamlined on Flores is that an earlier ancestor of these languages, just as complex as its family members tend to be, was used as a second language by these other people and simplified. Just as our classroom French and Spanish avoids or streamlines a lot of the "hard stuff," people who learn a language as adults usually do not master it entirely.
Specifically, I would hypothesize that the little people were gradually incorporated into modern human society over time—perhaps subordinated in some way—such that modern human children were hearing the little people's rendition of the language as much as a native one.
This kind of process is why, for example, Afrikaans is a slightly simplified version of Dutch. Dutch colonists took on Bushmen as herders and nurses, and their children often heard second-language Dutch as much as their parents. Pretty soon, this new kind of Dutch was everyone's everyday language, and Afrikaans was born.
Much has been made over the parallels between the evolution of languages and the evolution of animals and plants. However, I believe that one important difference is that while animals and plants can evolve towards simplicity as well as complexity depending on conditions, languages do not evolve towards simplicity in any significant, overall sense—unless there is some sociohistorical factor that puts a spoke in the wheel.
So normally, languages are always drifting into being like Russian or Chinese or Navajo. They only become like Keo and Ngada—or Afrikaans, or creole languages like Papiamentu and Haitian, or even, I believe, English—because of the intervention of factors like forced labor and population relocation. Just maybe, we can now add interspecies contact to the list!
The "rotten-to-the-core" assumption about human nature espoused so widely in the social sciences and the humanities is wrong. This premise has its origins in the religious dogma of original sin and was dragged into the secular twentieth century by Freud, reinforced by two world wars, the Great Depression, the cold war, and genocides too numerous to list. The premise holds that virtue, nobility, meaning, and positive human motivation generally are reducible to, parasitic upon, and compensations for what is really authentic about human nature: selfishness, greed, indifference, corruption and savagery. The only reason that I am sitting in front of this computer typing away rather than running out to rape and kill is that I am "compensated," zipped up, and successfully defending myself against these fundamental underlying impulses.
In spite of its widespread acceptance in the religious and academic world, there is not a shred of evidence, not an iota of data, which compels us to believe that nobility and virtue are somehow derived from negative motivation. On the contrary, I believe that evolution has favored both positive and negative traits, and many niches have selected for morality, co-operation, altruism, and goodness, just as many have also selected for murder, theft, self-seeking, and terrorism.
More plausible than the rotten-to-the-core theory of human nature is the dual-aspect theory that the strengths and the virtues are just as basic to human nature as the negative traits: that negative motivation and emotion have been selected for by zero-sum-game survival struggles, while virtue and positive emotion have been selected for by positive-sum-game sexual selection. These two overarching systems sit side by side in our central nervous system, ready to be activated by privation and thwarting, on the one hand, or by abundance and the prospect of success, on the other.
I believe, but cannot prove, that babies and young children are actually more conscious, more vividly aware of their external world and internal life, than adults are. I believe this because there is strong evidence for a functional trade-off with development. Young children are much better than adults at learning new things and flexibly changing what they think about the world. On the other hand, they are much worse at using their knowledge to act in a swift, efficient and automatic way. They can learn three languages at once but they can't tie their shoelaces.
In 1974, Marvin Minsky wrote that "there is room in the anatomy and genetics of the brain for much more mechanism than anyone today is prepared to propose." Today, many advocates of evolutionary and domain-specific psychology are in fact willing to propose the richness of mechanism that Minsky called for thirty years ago. For example, I believe that the mind is organized into cognitive systems specialized for reasoning about objects, space, numbers, living things, and other minds; that we are equipped with emotions triggered by other people (sympathy, guilt, anger, gratitude) and by the physical world (fear, disgust, awe); that we have different ways of thinking and feeling about people in different kinds of relationships to us (parents, siblings, other kin, friends, spouses, lovers, allies, rivals, enemies); and that we have several peripheral drivers for communicating with others (language, gesture, facial expression).
When I say I believe this but cannot prove it, I don't mean that it's a matter of raw faith or even an idiosyncratic hunch. In each case I can provide reasons for my belief, both empirical and theoretical. But I certainly can't prove it, or even demonstrate it in the way that molecular biologists demonstrate their claims, namely in a form so persuasive that skeptics can't reasonably attack it, and a consensus is rapidly achieved. The idea of a richly endowed human nature is still unpersuasive to many reasonable people, who often point to certain aspects of neuroanatomy, genetics, and evolution that appear to speak against it. I believe, but cannot prove, that these objections will be met as the sciences progress.
At the level of neuroanatomy and neurophysiology, critics have pointed to the apparent homogeneity of the cerebral cortex and to the seeming interchangeability of cortical tissue in experiments in which patches of cortex are rewired or transplanted in animals. I believe that the homogeneity is an illusion, owing to the fact that the brain is a system for information processing. Just as all books look the same to someone who does not understand the language in which they are written (since they are all composed of different arrangements of the same alphanumeric characters), and the DVDs of all movies look the same under a microscope, the cortex may look homogeneous to the eye but nonetheless contain different patterns of connectivity and synaptic biases that allow it to compute very different functions. I believe these differences will be revealed in different patterns of gene expression in the developing cortex. I also believe that the apparent interchangeability of cortex occurs only in early stages of sensory systems that happen to have similar computational demands, such as isolating sharp signal transitions in time and space.
At the level of genetics, critics have pointed to the small number of genes in the human genome (now thought to be less than 25,000) and to their similarity to those of other animals. I believe that geneticists will find that there is a large store of information in the noncoding regions of the genome (the so-called junk DNA), whose size, spacing, and composition could have large effects on how genes are expressed. That is, the genes themselves may code largely for the meat and juices of the organism, which are pretty much the same across species, whereas how they are sculpted into brain circuits may depend on a much larger body of genetic information. I also believe that many examples of what we call "the same genes" in different species may differ in tiny ways at the sequence level that have large consequences for how the organism is put together.
And at the level of evolution, critics have pointed to how difficult it is to establish the adaptive function of a psychological trait. I believe this will change as we come to understand the genetic basis of psychological traits in more detail. New techniques in genomic analysis, which look for statistical fingerprints of selection in the genome, will show that many genes involved in cognition and emotion were specifically selected for in the primate, and in many cases the human, lineage.
I believe there is an external reality and you are not all figments of my imagination. My friend asks me through the steam he blows off the surface of his coffee, how I can trust the laws of physics back to the origins of the universe. I ask him how he can trust the laws of physics down to his cup of coffee. He shows every confidence that the scalding liquid will not spontaneously defy gravity and fly up in his eyes. He lives with this confidence born of his empirical experience of the world. His experiments with gravity, heat, and light began in childhood when he palpated the world to test its materials. Now he has a refined and well-developed theory of physics, whether expressed in equations or not.
I simultaneously believe more and less than he does. It is rational to believe what all of my empirical and logical tests of the world confirm—that there is a reality that exists independent of me. That the coffee will not fly upwards. But it is a belief nonetheless. Once I've gone that far, why stop at the perimeter of mundane experience? Just as we can test the temperature of a hot beverage with a tongue, or a thermometer, we can test the temperature of the primordial light left over from the big bang. One is no less real than the other simply because it is remarkable.
But how do I really know? If I measure the temperature of boiling water, all I really know is that mercury climbs a glass tube. Not even that: all I really know is that I see mercury climb a glass tube. But maybe the image in my mind's eye isn't real. Maybe nothing is real, not the mercury, not the glass, not the coffee, not my friend. They are all products of a florid imagination. There is no external reality, just me. Einstein? My creation. Picasso? My mind's forgery. But this solipsism is ugly and arrogant. How can I know that mathematics and the laws of physics can be reasoned down to the moment of creation of time, space, the entire universe? In the very same way that my friend believes in the reality of the second double cappuccino he orders. In formulating our beliefs, we are honest and critical and able to admit when we are wrong—and these are the cornerstones of truth.
When I leave the café, I believe the room of couches and tables is still on the block at 122nd Street, that it is still full of people, and that they haven't evaporated when my attention drifts away. But if I am wrong and there is no external reality, then not only is this essay my invention, but so is the web, edge.org, all of its participants and their ingenious ideas. And if you are reading this, I have created you too. But if I am wrong and there is no external reality, then maybe it is me who is a figment of your imagination and the cosmos outside your door is your magnificent creation.
The electron has been with us for over a century, laying the foundations of the electronic revolution and all of information technology. It is believed to be a point-like, elementary and indivisible particle. Is it?
The neutrino, more than a million times lighter than the electron, was predicted in 1930 and discovered in the 1950s. It plays a crucial role in the creation of the stars, the sun and the heavy elements. It is elusive, invisible and weakly interacting. It is also considered fundamental and indivisible. Is it?
Quarks do not exist as free objects, except at extremely tiny distances, deep within the confines of the particles which are constructed from them. Since the 1960s we have believed that they are the most fundamental, indivisible building blocks of protons, neutrons and nuclei. Are they?
Nature has created two additional, totally unexplained, replicas of the electron, the neutrino and the most abundant quarks, u and d, forming three "generations" of fundamental particles. Each "generation" of particles is identical to the other two in all properties, except that the particle masses are radically different. Since each "generation" includes four fundamental particles, we end up with 12 different particles, which are allegedly indivisible, point-like and elementary. Are they?
The atom, the nucleus and the proton, each in its own time, were considered elementary and indivisible, only to be replaced later by smaller objects as the fundamental building blocks. How can we be so arrogant as to exclude the possibility that this will happen again? Why would nature arbitrarily produce 12 different objects, with a very orderly pattern of electric charges and "color forces", with simple charge ratios between seemingly unrelated particles (such as the electron and the quark), and with a pattern of masses which appears to be taken from the results of a lottery? Doesn't this "smell" again of further sub-particle structure?
There is absolutely no experimental evidence for a further substructure within all of these particles. There is no completely satisfactory theory which might explain how such light and tiny particles can contain objects moving with enormous energies, a requirement of quantum mechanics. This is, presumably, why the accepted "party line" of particle physicists is to assume that we already have reached the most fundamental level of the structure of matter.
For over twenty years, the hope has been that the rich spectrum of so-called fundamental particles will be explained as various modes of string vibrations and excitations. The astonishingly tiny string or membrane, rather than the point-like object, is allegedly at the bottom of the ladder describing the structure of matter. However, in spite of absolutely brilliant and ingenious mathematical work, in more than twenty years not one experimental number has been explained on the basis of the string hypothesis.
Based on common sense and on an observation of the pattern of the known particles, without any experimental evidence and without any comprehensive theory, I have believed for many years, and I continue to believe, that the electron, the neutrino and the quarks are divisible. They are presumably made of different combinations of the same small number (two?) of more fundamental sub-particles. The latter may or may not have the string structure, and may or may not be themselves composites.
Will we live to see the components of the electron?
One of the biggest of the Big Questions of existence is, Are we alone in the universe? Science has provided no convincing evidence one way or the other. It is certainly possible that life began with a bizarre quirk of chemistry, an accident so improbable that it happened only once in the entire observable universe—and we are it. On the other hand, maybe life gets going wherever there are earthlike planets. We just don't know, because we have a sample of only one. However, no known scientific principle suggests an inbuilt drive from matter to life. No known law of physics or chemistry favors the emergence of the living state over other states. Physics and chemistry are, as far as we can tell, "life blind."
Yet I don't believe that life is a freak event. I think the universe is teeming with it. I can't prove it; indeed, it could be that mankind will never know the answer for sure. If we find life in our solar system, it most likely got there from Earth (or vice versa) in rocks kicked off planets by comet impacts. And to go beyond the solar system is the stuff of dreams. The best hope is that we develop instruments sensitive enough to detect life on extra-solar planets from Earth orbit. But, whilst not impossible, this is a formidable technical challenge.
So why do I think we are not alone, when we have no evidence for life beyond Earth? Not for the fallacious popular reason: "the universe is so big there must be life out there somewhere." Simple statistics shows this argument to be bogus. If life is in fact a freak chemical event, it would be so unlikely to occur that it wouldn't happen twice among a trillion trillion trillion planets. Rather, I believe we are not alone because life seems to be a fundamental, and not merely an incidental, property of nature. It is built into the great cosmic scheme at the deepest level, and therefore likely to be pervasive. I make this sweeping claim because life has produced mind, and through mind, beings who do not merely observe the universe, but have come to understand it through science, mathematics and reasoning. This is hardly an insignificant embellishment on the cosmic drama, but a stunning and unexpected bonus. Somehow life is able to link up with the basic workings of the cosmos, resonating with the hidden mathematical order that makes it tick. And that's a quirk too far for me.
The orthodoxy in biology states that every cell in your body
shares exactly the same DNA. It's your identity, your
indelible fingerprint, and since all the cells in your
body have been duplicated from your initial unique stem
cell, these zillions of offspring cells all maintain your
singular DNA sequence. It follows then that when you
submit a tissue sample for genetic analysis it doesn't
matter where it comes from. Normally technicians grab
some from the easily accessible parts of your mouth, but
they could just as well take some from your big toe,
or your liver, or an eyelash, and get the same results.
While I have no evidence for my belief right now, it is a provable
assertion. It will be shown to be true or false as soon
as we have ubiquitous cheap full-genome sequences at
discount mall prices. That is, pretty soon. I believe
that once we have a constant reading of our individual
full DNA (many times over our lives) we will have no
end of surprises. I would not be surprised to discover
that pet owners accumulate some tiny fragments of their
pet's DNA, which has somehow been laterally transferred
via viruses to their own cellular DNA. Or that dairy
farmers amass noticeable fragments of bovine DNA. Or
that the DNA in our limbs somehow drifts genetically in
a "limby" way, distinct from the variation
in the cells in our nervous systems.
Is string theory a futile exercise as physics, as I believe
it to be? It is an interesting mathematical specialty
and has produced and will produce mathematics useful
in other contexts, but it seems no more vital as mathematics
than other areas of very abstract or specialized math,
and doesn't on that basis justify the incredible amount
of effort expended on it.
These days, it seems obvious that the mind arises from the
brain (not the heart, liver, or some other organ).
In fact, I personally have gone so far as to claim
that "the mind is what the brain does." But
this notion does not preclude an unconventional idea:
Your mind may arise not simply from your own brain,
but in part from the brains of other people.
The first is that our brains are limited, and so we use
crutches to supplement and extend our abilities. For
example, try to multiply 756 by 312 in your head. Difficult,
right? You would be happier with a pencil and piece
of paper—or, better yet, an electronic calculator.
These devices serve as prosthetic systems, making up
for cognitive deficiencies (just as a wooden leg would
make up for a physical deficiency).
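The point about cognitive prosthetics can be made concrete: the multiplication posed above, effortful in the head, is trivial once offloaded to a device. A minimal sketch in Python, playing the role of the electronic calculator:

```python
# The multiplication posed above, offloaded to a computational "prosthetic".
a, b = 756, 312
result = a * b
print(result)  # prints 235872
```

The product that strains working memory takes the machine a single instruction, which is exactly the asymmetry the passage is describing.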
For me, this is an easy question. I believe that animals have feelings and other states of consciousness, but neither I, nor anyone else, has been able to prove it. We can't even prove that other people are conscious, much less other animals. In the case of other people, though, we at least can have a little confidence since all people have brains with the same basic configurations. But as soon as we turn to other species and start asking questions about feelings, and consciousness in general, we are in risky territory because the hardware is different.
When a rat is in danger, it does things that many other animals do. That is, it either freezes, runs away or fights back. People pretty much do the same things. Some scientists say that because a rat and a person act the same in similar situations, they have the same kinds of subjective experiences. I don't think we can really say this.
There are two aspects of brain hardware that make it difficult for us to generalize from our personal subjective experiences to the experiences of other animals. One is the fact that the circuits most often associated with human consciousness involve the lateral prefrontal cortex (via its role in working memory and executive control functions). This broad zone is much more highly developed in people than in other primates, and whether it exists at all in non-primates is questionable. So certainly for those aspects of consciousness that depend on the prefrontal cortex, including aspects that allow us to know who we are and to make plans and decisions, there is reason to believe that even other primates might be different than people. The other aspect of the brain that differs dramatically is that humans have natural language. Because so much of human experience is tied up with language, consciousness is often said to depend on language. If so, then most other animals are ruled out of the consciousness game. But even if consciousness doesn't depend on language, language certainly changes consciousness so that whatever consciousness another animal has it is likely to differ from most of our states of consciousness.
For these reasons, I think it is hard to know what consciousness might be like in another animal. If we can't measure it (because it is internal and subjective) and can't use our own experience to frame questions about it (because the hardware that makes it possible is different), it becomes difficult to study.
Most of what I have said applies mainly to the content of conscious experience. But there is another aspect of consciousness that is less problematic scientifically. It is possible to study the processes that make consciousness possible even if we can't study the content of consciousness in other animals. This is exactly what is done in studies of working memory in non-human primates. One approach that has had some success in the area of conscious content in non-human primates has focused on a limited kind of consciousness, visual awareness. But this approach, by Koch and Crick, mainly gets at the neural correlates of consciousness rather than the causal mechanisms. The correlates and the mechanisms may be the same, but they may not. Interestingly, this approach also emphasizes the importance of prefrontal cortex in making visual awareness possible.
So what about feelings? My view is that a feeling is what happens when an emotion system, like the fear system, is active in a brain that can be aware of its own activities. That is, what we call "fear" is the mental state that we are in when the activity of the defense system of the brain (or the consequences of its activity, such as bodily responses) is what is occupying working memory. Viewed this way, feelings are strongly tied to those areas of the cortex that are fairly unique to primates and especially well developed in people. When you add natural language to the brain, in addition to getting fairly basic feelings you also get fine gradations due to the ability to use words and grammar to discriminate and categorize states and to attribute them not just to ourselves but to others.
There are other views about feelings. Damasio argues that feelings are due to more primitive activity in body sensing areas of the cortex and brainstem. Panksepp has a similar view, though he focuses more on the brainstem. Because this network has not changed much in the course of human evolution, it could therefore be involved in feelings that are shared across species. I don't object to this on theoretical grounds, but I don't think it can be proven because feelings can't be measured in other animals. Panksepp argues that if it looks like fear in rats and people, it probably feels like fear in both species. But how do you know that rats and people feel the same when they behave the same? A cockroach will escape from danger—does it, too, feel fear as it runs away? I don't think behavioral similarity is sufficient grounds for proving experiential similarity. Neural similarity helps—rats and people have similar brainstems, and a roach doesn't even have a brain. But is the brainstem responsible for feelings? Even if it were proven in people, how would you prove it in a rat?
So now we're back where we started. I think rats and other mammals, and maybe even roaches (who knows?), have feelings. But I don't know how to prove it. And because I have reason to think that their feelings might be different than ours, I prefer to study emotional behavior in rats rather than emotional feelings. I study rats because you can make progress at the neural level, provided that the thing you measure is the same in rats and people. I wouldn't study language and consciousness in rats, so I don't study feelings either, because I don't know that they exist. I may be accused of being short-sighted for this, but I'd rather make progress on something I can study in rats than beat my head against the consciousness wall in these creatures.
There's lots to learn about emotion through rats that can help people with emotional disorders. And there's lots we can learn about feelings from studying humans, especially now that we have powerful functional imaging techniques. I'm not a radical behaviorist. I'm just a practical emotionalist.
What do you believe is true even though you cannot prove it?
Naturally, this question has a technical spin for me. My current passion is the creation of tools for personal fabrication based on additive digital assembly, so that the uses of advanced technologies can be defined by their users. It's still no more than an assumption that this will lead to more good things than bad things being made, but, like the accumulated experience that democracy works better than monarchy, I have more faith in a future based on widespread access to the means for invention than one based on technocracy.
First it was felt that the Earth was the center of the universe, then that our Sun was the center, and so on. Ultimately we now realize that we are located at the edge of a random galaxy that is itself located nowhere special in a large, potentially infinite universe full of other galaxies. Moreover, we now know that even the stars and visible galaxies themselves are but an insignificant bit of visible pollution in a universe that is otherwise dominated by 'stuff' that doesn't shine.
Dark matter dominates the masses of galaxies and clusters by a factor of 10 compared to normal matter. And now we have discovered that even matter itself is almost insignificant. Instead empty space itself contains more than twice as much energy as that associated with all matter, including dark matter, in the universe. Further, as we ponder the origin of our universe, and the nature of the strange dark energy that dominates it, every plausible theory that I know of suggests that the Big Bang that created our visible universe was not unique. There are likely to be a large, and possibly infinite number of other universes out there, some of which may be experiencing Big Bangs at the current moment, and some of which may have already collapsed inward into Big Crunches. From a philosophical perspective this may be satisfying to some, who find a universe with a definite beginning but no definite end dissatisfying. In this case, in the 'metaverse', or 'multiverse' things may seem much more uniform in time.
At every instant there may be many universes being born, and others dying. But philosophy aside, the existence of many different causally disconnected universes—regions with which we will never ever be able to have direct communication, and thus which will forever be out of reach of direct empirical verification—may have significant impacts on our understanding of our own universe. Their existence may help explain why our own universe has certain otherwise unexpected features, because in a metaverse with a possibly infinite number of different universes, which may themselves vary in their fundamental features, it could be that life like our own would evolve in only universes with a special set of characteristics.
Whether or not this anthropic type of argument is necessary to understand our universe—and I personally hope it isn't—I nevertheless find it satisfying to think that it is likely that not only are we not located in a particularly special place in our universe, but that our universe itself may be relatively insignificant on a larger cosmic scale. It represents perhaps the ultimate Copernican Revolution.
In the first year, an infant is busy creating categories for the
speech sounds she hears. By the second year, the toddler is busy
picking up new words, each composed of a series of those phoneme
building blocks. In the third year, she starts picking up on
those typical combinations of words that we call grammar or syntax.
She soon graduates to speaking long structured sentences. In
the fourth year, she infers a patterning to the sentences and
starts demanding proper endings for her bedtime stories. It is
pyramiding, using the building blocks at the immediately subjacent
level. Four levels in four years!
Then tuning up the workspace for structured language in the preschool years would likely carry over to those other structured aspects of intellect. That's why I like the emphasis on acquiring language as a precondition for consciousness: tuning up to sentence structure might make the child better able to perform at nonlanguage tasks which also need some structuring. Improve one, improve them all?
Is that what boosts our cleverness and intelligence? Is "our kind of consciousness" nothing but structured intellect with good quality control? Can't prove it, but it sure looks like a good candidate.
This assertion is shocking to many people, who fear that it would demote animals and pre-linguistic children from moral protection, but this would not follow. Whose pain is the pain occurring in the newborn infant? There is not yet anybody whose pain it is, but that fact would not license us to inflict painful stimuli on babies or animals any more than we are licensed to abuse the living bodies of people in comas who are definitely not conscious. If selfhood develops gradually, then certain types of events only gradually become experiences, and there will be no sharp line between unconscious pains (if we may call them that) and conscious pains, and both will merit moral attention. (And, of course, the truth of the empirical hypothesis is in any case strictly independent of its ethical implications, whatever they are. Those who shun the hypothesis on purely moral grounds are letting wishful thinking overrule a properly inquisitive scientific attitude. I am happy to give animals and small children "the benefit of the doubt" for moral purposes, but not for scientific purposes.) Those who are shocked by my hypothesis should pause, if they can bear it, to notice that it is just as difficult to prove its denial as its assertion. But it can, I think, be proven eventually. Here's what it will take, one way or the other:
This is an empirical hypothesis, and it could just as well be proven false. It could be proven false by showing that in fact the necessary pathways functionally uniting the relevant brain systems (in the ways I claim are required for consciousness) are already provided in normal infant or fetal development, and are in fact present in, say, all mammalian nervous systems of a certain maturity. I doubt that this is true because it seems clear to me that evolution has already demonstrated that remarkable varieties of adaptive coordination can be accomplished without such hyper-unifying meta-systems, by colonies of social insects, for instance. What is it like to be an ant colony? Nothing, I submit, and I think most would agree intuitively. What is it like to be a brace of oxen? Nothing (even if it is like something to be a single ox). But then we have to take seriously the extent to which animals–not just insect colonies and reptiles, but rabbits, whales, and, yes, bats and chimpanzees–can get by with somewhat disunified brains.
Evolution will not have provided for the further abilities where they were not necessary for members of these species to accomplish the tasks their lives actually pose them. If animals were like the imaginary creatures in the fictions of Beatrix Potter or Walt Disney, they would have to be conscious pretty much the way we are. But animals are more different from us than we usually imagine, enticed as we are by these charming anthropomorphic fictions. We need these abilities to become persons, communicating individuals capable of asking and answering, requesting and forbidding and promising (and lying). But we don't need to be born with these abilities, since normal rearing will entrain the requisite neural dispositions. Human subjectivity, I am proposing, is thus a remarkable byproduct of human language, and no version of it should be extrapolated to any other species by default, any more than we should assume that the rudimentary communication systems of other species have verbs and nouns, prepositions and tenses.
Finally, since there is often misunderstanding on this score, I am not saying that all human consciousness consists in talking to oneself silently, although a great deal of it does. I am saying that the ability to talk to yourself silently, as it develops, also brings along with it the abilities to review, to muse, to rehearse, recollect, and in general engage the contents of events in one's nervous system that would otherwise have their effects in a purely "ballistic" fashion, leaving no memories in their wake, and hence contributing to one's guidance in ways that are well described as unconscious. If a nervous system can come to sustain all these abilities without having language then I am wrong.
Interspecies coevolution of languages on the Northwest Coast.
During the years I spent kayaking along the coast of British Columbia and Southeast Alaska, I observed that the local raven populations spoke in distinct dialects, corresponding surprisingly closely to the geographic divisions between the indigenous human language groups. Ravens from Kwakiutl, Tsimshian, Haida, or Tlingit territory sounded different, especially in their characteristic "tok" and "tlik."
I believe this correspondence between human language and raven language is more than coincidence, though this would be difficult to prove.
We take each other's consciousness on faith because we must, but after two thousand years of worrying about this issue, no one has ever devised a definitive test of its existence. Most cognitive scientists believe that consciousness is a phenomenon that emerges from the complex interaction of decidedly nonconscious parts (neurons), but even when we finally understand the nature of that complex interaction, we still won't be able to prove that it produces the phenomenon in question. And yet, I haven't the slightest doubt that everyone I know has an inner life, a subjective experience, a sense of self, that is very much like mine.
What do I believe is true but cannot prove? The answer is: You!
Here's my best guess: we alone evolved a simple computational trick with far reaching implications for every aspect of our life, from language and mathematics to art, music and morality. The trick: the capacity to take as input any set of discrete entities and recombine them into an infinite variety of meaningful expressions.
Thus, we take meaningless phonemes and combine them into words, words into phrases, and phrases into Shakespeare. We take meaningless strokes of paint and combine them into shapes, shapes into flowers, and flowers into Matisse's water lilies. And we take meaningless actions and combine them into action sequences, sequences into events, and events into homicide and heroic rescues.
I'll go one step further: I bet that when we discover life on other planets, although the materials for running the computation may be different, they will create open-ended systems of expression by means of the same trick, thereby giving birth to the process of universal computation.
If this is right, it provides a simple explanation for why we, as scientists or laymen, find the "hard problem" of consciousness just so hard. Nature has meant it to be hard. Indeed "mysterian" philosophers—from Colin McGinn to the Pope—who bow down before the apparent miracle and declare that it's impossible in principle to understand how consciousness could arise in a material brain, are responding exactly as Nature hoped they would, with shock and awe.
Can I prove it? It's difficult to prove any adaptationist account of why humans experience things the way they do. But here there is an added catch. The Catch-22 is that, just to the extent that Nature has succeeded in putting consciousness beyond the reach of rational explanation, she must have undermined the very possibility of showing that this is what she's done.
But nothing's perfect. There may be a loophole. While it may seem—and even be—impossible for us to explain how a brain process could have the quality of consciousness, it may not be at all impossible to explain how a brain process could (be designed to) give rise to the impression of having this quality. (Consider: we could never explain why 2 + 2 = 5, but we might relatively easily be able to explain why someone should be under the illusion that 2 + 2 = 5).
Do I want to prove it? That's a difficult one. If the belief that consciousness is a mystery is a source of human hope, there may be a real danger that exposing the trick could send us all to hell.
I believe that human talents are based on distinct patterns
of brain connectivity. These patterns can be observed as
the individual encounters and ultimately masters an organized
activity or domain in his/her culture.
My Account: The most apt analogy is language learning. Nearly all of us can easily master natural languages in the first years of life. We might say that nearly all of us are talented speakers. An analogous process occurs with respect to various talents, with two differences:
As we attempt to master an activity, neural connections of
varying degrees of utility or disutility form. Certain
of us have nervous systems that are predisposed to develop
quickly along the lines needed to master specific activities
(chess) or classes of activities (mathematics) that happen
to be available in one or more cultures. Accordingly, assuming
such exposure, we will appear talented and become experts
quickly. The rest of us can still achieve some expertise,
but it will take longer, require more effective teaching,
and draw on intellectual faculties and brain networks that
the talented person does not have to use.
If Account #1 is true, hours of practice will explain all. If #2 is true, those best at music should excel at all activities. If #3 is true, individual brain differences should be observable from the start. If my account is true, the most talented students will be distinguished not by differences observable prior to training but rather by the ways in which their neural connections alter during the first years of training.