



2011

WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?

EVGENY MOROZOV
Commentator on Internet and Politics, "Net Effect" blog; Contributing Editor, Foreign Policy; Author, The Net Delusion: The Dark Side of Internet Freedom

Einstellung Effect

Constant awareness of the Einstellung Effect would make a useful addition to our cognitive toolkit.

The Einstellung Effect is more ubiquitous than its name suggests. We constantly experience it when trying to solve a problem by pursuing solutions that have worked for us in the past - instead of evaluating and addressing the problem on its own terms. Thus, while we may eventually solve the problem, we may also be wasting an opportunity to do so in a more rapid, effective, and resourceful manner.

Think of a chess match. If you are a chess master with a deep familiarity with chess history, you are likely to spot game developments that look similar to other matches that you know by heart. Knowing how those previous matches unfolded, you may automatically pursue similar solutions.

This may be the right thing to do in matches that are exactly alike - but in all other situations, you've got to watch out! Familiar solutions may not be optimal. Some recent research into occurrences of the Einstellung Effect in chess players suggests that it becomes less prominent once players reach a certain level of mastery: stronger players have a better grasp of the risks of pursuing solutions that merely look familiar, and try to avoid acting on "autopilot".

The irony here is that the more expansive our cognitive toolkit, the more likely we are to fall back on solutions and approaches that have worked in the past instead of asking whether the problem in front of us is fundamentally different from anything else we have dealt with in the past. A cognitive toolkit that has no built-in awareness of the Einstellung Effect seems somewhat defective to me. 


PAUL BLOOM
Psychologist, Yale University; Author, How Pleasure Works

Reason

We are powerfully influenced by irrational processes such as unconscious priming, conformity, groupthink, and self-serving biases. These affect the most trivial aspects of our lives, such as how quickly we walk down a city street, and the most important, such as who we choose to marry. The political and moral realms are particularly vulnerable to such influences. While many of us would like to think that our views on climate change or torture or foreign policy are the result of rational deliberation, we are more affected than we would like to admit by considerations that have nothing to do with reason. 

But this is not inevitable. Consider science. Plainly, scientists are human and possess the standard slate of biases and prejudices and mindbugs. This is what skeptics emphasize when they say that science is "just another means of knowing" or "just like religion". But science also includes procedures — such as replicable experiments and open debate — that cultivate the capacity for human reason. Scientists can reject common wisdom; they can be persuaded by data and argument to change their minds. It is through these procedures that we have discovered extraordinary facts about the world, such as the structure of matter and the evolutionary relationship between monkey and man.

The cultivation of reason isn't unique to science; other disciplines such as mathematics and philosophy possess it as well. But it is absent in much of the rest of life. So I admit to twisting the question a bit: The concept that people need to add to their toolkit isn't a scientific discovery; it is science itself. Wouldn't the world be better off if, as we struggle with moral and political and social problems, we adopted those procedures that make science so successful?


EDUARDO SALCEDO-ALBARÁN
Philosopher; Founder, Manager, Metodo

Homo Sensus-Sapiens: The animal that feels and rationalizes

For the last three years, Mexican narcotraffickers have decapitated hundreds of people to gain control of routes for transporting cocaine. In the last two decades, Colombian narco-paramilitaries tortured and incinerated thousands of people, in part because they needed more land for their crops and for transporting cocaine. In both cases, they were not satisfied with 10 or 100 million dollars; even the richest narcotraffickers kill or die for more.

In Guatemala and Honduras, gangs known as "maras" fight cruel mortal battles to gain control of a single street in a poor neighborhood. In Rwanda's 1994 genocide, people who had been friends for their entire lives suddenly became mortal enemies because of their ethnic appearance.

Is this the enlightened man?

These cases may sound like rarities. However, in any city, in any random street, it is easy to find a thief who is willing to kill or die for 10 bucks to satisfy the need for heroin, a fanatic who is willing to kill or die for defending a "merciful God", or a regular guy next-door willing to kill or die in a fight after a car crash.

Is this rationality?

It is easy to find examples in which automatic responses of emotions and feelings, like ambition, anger, or anxiety, overcome rationality. Those responses keep assaulting us like uncontrollable forces of nature, like earthquakes or storms.

We modern humans taxonomically define ourselves as Homo Sapiens Sapiens, that is, wise-wise beings. Apparently, we can dominate the influence of natural forces, whether they are instincts, viruses, or storms. Homo Sapiens Sapiens represents the overconfidence of the enlightened man who understands and manipulates nature while making the best decisions. However, we cannot stop destroying natural resources while consuming more than we need. We cannot control excessive ambition. We cannot avoid surrendering to the power of sex or money. Despite our evolved brain, despite our capacity to argue and think in abstract ways, despite the amazing power of the neocortex, inner feelings are still at the base of our behavior.

The WisdomX2 characteristic typically does not coincide with our neuropsychological reality. To see this, you can pay attention to your everyday actions; you can trust neurological observations showing how instinctive areas of the brain are active most of the time; or you can trust evidence showing how our nervous system is constantly at the mercy of neurotransmitters and hormones that determine the level of our emotional responses.

Observations from experimental psychology and behavioral economics also show that people do not always try to maximize present or future profits. Rational expectations, once thought to be the main characteristic of Homo Economicus, are no longer neurologically sustainable. Sometimes people care nothing about the future or about profit; sometimes we only want to satisfy a desire, right here, right now, no matter what.

Human beings have unique rational capacities indeed. No other animal can evaluate, simulate, and decide for the best as humans do. However, "having" the capacity doesn't imply "executing" it.

The innermost and oldest areas of the human brain, the reptilian brain, generate and regulate instinctive and automatic responses, which have a role in preserving the organism. Because of these areas, we move without analyzing the consequences of each action; we move like a machine of automatic and unconscious induction. We walk without checking that the floor will hold after each step, and we run faster than normal when we feel threatened, not because of rational planning but because of automatic responses.

Only strict training allows us to dominate our instincts. However, for most of us, the "don't panic" advice only works when we are not in a panic. Most of us should be defined as beings moved first by instincts, social empathy, and automatic responses to perceptions, rather than by sophisticated plans and arguments.

Homo Economicus and Homo Politicus are, therefore, normative entelechies: behavioral benchmarks rather than descriptive models. Always calculating utility and always resolving social disputes through civilized debate are behavioral utopias, not adjusted descriptions of what we are. Yet for decades we have been constructing policies, models, and sciences based on these assumptions, which do not coincide with reality.

Homo Sensus Sapiens is a more accurate image of the human being.

The concepts of the liberal hyper-rationalist man and the conservative hyper-communitarian man are hypertrophies of a single human facet. The first is the hypertrophy of the neocortex: the idea that rationality dominates instincts. The second is the hypertrophy of the inner reptilian brain: the idea that social empathy and cohesive institutions define humanity. However, we are both at the same time. We are the tension of the sensus and the sapiens.

The concept of Homo Sensus Sapiens allows us to realize that we stand somewhere between overconfidence in our rational capacities and resignation to our instincts. Homo Sensus Sapiens reminds us that we cannot surrender to, or escape from, either rationality or instinct. But this concept is not only about criticizing overconfidence or resignation. It is about improving explanations of social phenomena. Social scientists should not always have to choose between rationality and irrationality. They should get out of the comfort zone of positivist fragmentation and integrate scientific fields to explain an analogue human being, not a digital one: a being defined by the continuum between sensitivity and rationality. This adjusted image would yield better inputs for public policy.

The first character of this Homo, the Sensus, allows movement, reproduction, atomization of his biology, and preservation of the species. The second, the Sapiens, allows this Homo to psychologically oscillate between the ontological world of matter and energy and the epistemological world of socio-cultural codification, imagination, arts, technology, and symbolic construction. This combination allows us to understand the nature of a hominid characterized by the constant tension between emotions and reason, and by the search for a middle point of biological and cultural evolution. We are not only fears, not only plans. We are Homo Sensus Sapiens, the animal that feels and rationalizes.


JOHN TOOBY
Founder of field of Evolutionary Psychology; Co-Director, UC Santa Barbara's Center for Evolutionary Psychology

Nexus causality, moral warfare and misattribution arbitrage.

We could become far more intelligent than we are by adding to our stock of concepts, and by forcing ourselves to use them even when we don't like what they are telling us. This will be nearly always, because they generally tell us that our self-evidently superior selves and ingroups are error-besotted. We all start from radical ignorance in a world that is endlessly strange, vast, complex, intricate, and surprising. Deliverance from ignorance lies in good concepts — inference fountains that geyser out insights that organize and increase the scope of our understanding. We are drawn to them by the fascination of the discoveries they afford, but resist using them well and freely because they would reveal too many of our apparent achievements to be embarrassing or tragic failures. Those of us who are non-mythical lack the spine that Oedipus had — the obsidian resolve that drove him to piece together shattering realizations despite portents warning him off. Because of our weakness, "to see what is in front of one's nose needs a constant struggle" as Orwell says. So why struggle? Better instead to have one's nose and what lies beyond shift out of focus — to make oneself hysterically blind as convenience dictates, rather than to risk ending up like Oedipus, literally blinding oneself in horror at the harvest of an exhausting, successful struggle to discover what is true.

Alternatively, even modest individual-level improvements in our conceptual toolkit can have transformative effects on our collective intelligence, promoting incandescent intellectual chain reactions among multitudes of interacting individuals. If this promise of intelligence-amplification through conceptual tools seems like hyperbole, consider that the least inspired modern engineer, equipped with the conceptual tools of calculus, can understand, plan and build things far beyond what da Vinci or the mathematics-revering Plato could have achieved without it. We owe a lot to the infinitesimal, Newton's counterintuitive conceptual hack — something greater than zero but less than any finite magnitude. Far simpler conceptual innovations than calculus have had even more far reaching effects — the experiment (a danger to authority), zero, entropy, Boyle's atom, mathematical proof, natural selection, randomness, particulate inheritance, Dalton's element, distribution, formal logic, culture, Shannon's definition of information, the quantum…

Here are three simple conceptual tools that might help us see in front of our noses: nexus causality, moral warfare, and misattribution arbitrage. Causality itself is an evolved conceptual tool that simplifies, schematizes, and focuses our representation of situations. This cognitive machinery guides us to think in terms of the cause — of an outcome having a single cause. Yet for enlarged understanding, it is more accurate to represent outcomes as caused by an intersection or nexus of factors (including the absence of precluding conditions). In War and Peace, Tolstoy asks "When an apple ripens and falls, why does it fall? Because of its attraction to the earth, because its stem withers, because it is dried by the sun, because it grows heavier, because the wind shakes it….?" — with little effort any modern scientist could extend Tolstoy's list endlessly. We evolved, however, as cognitively improvisational tool-users, dependent on identifying actions we could take that lead to immediate payoffs. So, our minds evolved to represent situations in a way that highlighted the element in the nexus that we could manipulate to bring about a favored outcome. Elements in the situation that remained stable and that we could not change (like gravity or human nature) were left out of our representation of causes. Similarly, variable factors in the nexus (like the wind blowing) that we could not control, but that predicted an outcome (the apple falling), were also useful to represent as causes, in order to prepare ourselves to exploit opportunities or avoid dangers. So the reality of the causal nexus is cognitively ignored in favor of the cartoon of single causes. While useful for a forager, this machinery impoverishes our scientific understanding, rendering discussions (whether elite, scientific, or public) of the "causes" — of cancer, war, violence, mental disorders, infidelity, unemployment, climate, poverty, and so on — ridiculous.

Similarly, as players of evolved social games, we are designed to represent others' behavior and associated outcomes as caused by free will (by intentions). That is, we evolved to view "man" as Aristotle put it, as "the originator of his own actions." Given an outcome we dislike, we ignore the nexus, and trace "the" causal chain back to a person. We typically represent the backward chain as ending in — and the outcome as originating in — the person. Locating the "cause" (blame) in one or more persons allows us to punitively motivate others to avoid causing outcomes we don't like (or to incentivize outcomes we do like). More despicably, if something happens that many regard as a bad outcome, this gives us the opportunity to sift through the causal nexus for the one thread that colorably leads back to our rivals (where the blame obviously lies). Lamentably, much of our species' moral psychology evolved for moral warfare, a ruthless zero-sum game. Offensive play typically involves recruiting others to disadvantage or eliminate our rivals by publicly sourcing them as the cause of bad outcomes. Defensive play involves giving our rivals no ammunition to mobilize others against us.

The moral game of blame attribution is only one subtype of misattribution arbitrage. For example, epidemiologists estimate that it was not until 1905 that you were better off going to a physician. (Semmelweis noticed that doctors doubled the mortality rate of mothers at delivery.) For thousands of years, the role of the physician pre-existed its rational function, so why were there physicians? Economists, forecasters, and professional portfolio managers typically do no better than chance, yet command immense salaries for their services. Food prices are driven up to starvation levels in underdeveloped countries, based on climate models that cannot successfully retrodict known climate history. Liability lawyers win huge sums for plaintiffs who get diseases at no higher rates than others not exposed to "the" supposed cause. What is going on? The complexity and noise permeating any real causal nexus generates a fog of uncertainty. Slight biases in causal attribution, or in blameworthiness (e.g., sins of commission are worse than sins of omission), allow a stable niche for extracting undeserved credit or targeting undeserved blame. If the patient recovers, it was due to my heroic efforts; if not, the underlying disease was too severe. If it weren't for my macroeconomic policy, the economy would be even worse. The abandonment of moral warfare, and a wider appreciation of nexus causality and misattribution arbitrage, would help us all shed at least some of the destructive delusions that cost humanity so much.


DAVID M. BUSS
Professor of Psychology, University of Texas, Austin; Coauthor: Why Women Have Sex; author, The Evolution of Desire: Strategies of Human Mating and Evolutionary Psychology: The New Science of the Mind

Sexual Selection

When most people think about evolution by selection, they conjure up phrases such as "survival of the fittest" or "nature red in tooth and claw." These focus attention on the Darwinian struggle for survival. Many scientists, but few others, know that evolution by selection occurs through the process of differential reproductive success by virtue of heritable differences in design, not by differential survival success. And differential reproductive success often boils down to differential mating success, the focus of Darwin's 1871 theory of sexual selection.

Darwin identified two separate (but potentially related) causal processes by which sexual selection occurs. The first, intrasexual or same-sex competition, involves members of one sex competing with each other in various contests, physical or otherwise, the winners of which gain preferential sexual access to mates. Qualities that lead to success evolve. Those linked to failure bite the evolutionary dust. Evolution, change over time, occurs as a consequence of the process of intrasexual competition. The second, intersexual selection, deals with preferential mate choice. If members of one sex exhibit a consensus about qualities desired in mates, and those qualities are partially heritable, then those of the opposite sex possessing the desired qualities have a mating advantage. They get preferentially chosen. Those lacking desired mating qualities get shunned, banished, and remain mateless (or must settle for low quality mates). Evolutionary change over time occurs as a consequence of an increase in frequency of desired traits and a decrease in frequency of disfavored traits.
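The intersexual-selection dynamic Buss describes can be made concrete with a toy frequency model (purely illustrative, not from the essay; the `preference` parameter and update rule are assumptions): a heritable trait whose carriers are preferentially chosen as mates rises in frequency generation after generation.

```python
# Toy model of intersexual selection (illustrative only): a heritable
# trait whose carriers are preferred as mates rises in frequency.
def next_generation(freq, preference=2.0):
    """One round of preferential mating.

    Carriers of the trait are `preference` times more likely to be
    chosen than non-carriers, so their share of matings (and, in this
    toy model, of offspring) grows each generation.
    """
    weighted = freq * preference
    return weighted / (weighted + (1 - freq))

freq = 0.10  # the desired trait starts rare
for _ in range(10):
    freq = next_generation(freq)

print(round(freq, 2))  # after 10 generations the trait dominates
```

Even a modest mating advantage compounds quickly, which is why Darwin's "neglected" mechanism turns out to be such a powerful engine of evolutionary change.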

Darwin's theory of sexual selection, controversial in his day and relatively neglected for nearly a century after its publication, has mushroomed today into a tremendously important theory in evolutionary biology and evolutionary psychology. Research on human mating strategies has exploded over the past decade, as the profound implications of sexual selection become more deeply understood. Adding sexual selection to everyone's cognitive toolkit will provide profound insights into many human phenomena that otherwise remain baffling. In its modern formulations, sexual selection theory provides answers to weighty and troubling questions that elude many scientists and most non-scientists living today:

• Why do male and female minds differ?
• What explains the rich menu of human mating strategies?
• Why is conflict between the sexes so pervasive?
• Why does conflict between women and men focus so heavily on sex?
• What explains sexual harassment and sexual coercion?
• Why do men die earlier than women, on average, in every culture around the world?
• Why are most murderers men?
• Why are men so much keener than women on forming coalitions for warfare?
• Why are men so much more prone to becoming suicide terrorists?
• Why is suicide terrorism so much more prevalent in polygynous cultures that create a greater pool of mateless males?

Adding sexual selection theory to everyone's cognitive toolkit, in short, provides deep insight into the nature of human nature, people's obsession with sex and mating, the origins of sex differences, and many of the profound social conflicts that beset us all.


BART KOSKO
Information Scientist, USC; Author, Noise

Q. E. D. Moments

Everyone should know what proof feels like. It reduces all other species of belief to a distant second-class status. Proof is the far end on a cognitive scale of confidence that varies through levels of doubt. And most people never experience it.

Feeling proof comes from finishing a proof. It does not come from pointing at a proof in a book or in the brain of an instructor. It comes when the prover himself takes the last logical step on the deductive staircase. Then he gets to celebrate that logical feat by declaring "Q. E. D." or "Quod Erat Demonstrandum" or just "Quite Easily Done." Q. E. D. states that he has proven or demonstrated the claim that he wanted to prove. The proof need not be original or surprising. It just needs to be logically correct to produce a Q. E. D. moment. A proof of the Pythagorean Theorem has always sufficed.

The only such proofs that warrant the name are those in mathematics and formal logic. Each logical step has to come with a logically sufficient justification. That way each logical step comes with binary certainty. Then the final result itself follows with binary certainty. It is as if the prover multiplied the number 1 by itself for each step in the proof. The result is still the number 1. That is why the final result warrants a declaration of Q. E. D. That is also why the process comes to an unequivocal halt if the prover cannot justify a step. Any act of faith or guesswork or cutting corners will destroy the proof and its demand for binary certainty.
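Kosko's point about binary certainty at each step can be seen in a proof assistant. The following is a hypothetical illustration in Lean (not from the essay): the checker accepts a proof only when every step is justified, so finishing one delivers exactly the kind of Q. E. D. moment described above.

```lean
-- The humblest Q. E. D. moment: 1 = 1 holds by reflexivity.
example : 1 = 1 := rfl

-- A one-step deduction: from p and p → q, conclude q.
-- If the justification `hpq hp` were missing or wrong,
-- the checker would halt; no act of faith is accepted.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```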

The catch is that we can really only prove tautologies.

The great binary truths of mathematics are still logically equivalent to the tautology "1 = 1" or "green is green." This differs from the factual statements we make about the real world — statements such as "Pine needles are green" or "Chlorophyll molecules reflect green light."

These factual statements are approximations. They are technically vague or fuzzy. And they often come juxtaposed with probabilistic uncertainty: "Pine needles are green with high probability." Note that this last statement involves triple uncertainty. There is first the vagueness of green pine needles because there is no bright line between greenness and non-greenness. It is a matter of degree. There is second only a probability whether pine needles have the vague property of greenness. And there is last the magnitude of the probability itself. The magnitude is the vague or fuzzy descriptor "high" because here too there is no bright line between high probability and not-high probability.

No one has ever produced a statement of fact that has the same 100% binary truth status as a mathematical theorem. Even the most accurate energy predictions of quantum mechanics hold only out to a few decimal places. Binary truth would require getting it right out to infinitely many decimal places.

Most scientists know this and rightly sweat it. The logical premises of a math model only approximately match the world that the model purports to model. It is not at all clear how such grounding mismatches propagate through to the model's predictions. Each infected inferential step tends to degrade the confidence of the conclusion as if multiplying fractions less than one. Modern statistics can appeal to confidence bounds if there are enough samples and if the samples sufficiently approximate the binary assumptions of the model. That at least makes us pay in the coin of data for an increase in certainty.

It is a big step down from such imperfect scientific inference to the approximate syllogistic reasoning of the law. There the disputant insists that similar premises must lead to similar conclusions. But this similarity involves its own approximate pattern matching of inherently vague patterns of causal conduct or hidden mental states such as intent or foreseeability. The judge's final ruling of "granted" or "denied" resolves the issue in practice. But it is technically a non sequitur. The product of any numbers between zero and one is again always less than one. So the confidence of the conclusion can only fall as the steps in the deductive chain increase. The clang of the gavel is no substitute for proof.

Such approximate reasoning may be as close as we can come to a Q. E. D. moment when using natural language. The everyday arguments that buzz in our brains hit far humbler logical highs. That is precisely why we all need to prove something at least once — to experience at least one true Q. E. D. moment. Those rare but god-like tastes of ideal certainty can help keep us from mistaking it for anything else.


SUE BLACKMORE
Psychologist; Author, Consciousness: An Introduction

Correlation is not a cause

The phrase "correlation is not a cause" (CINAC) may be familiar to every scientist but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.

One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when teaching experimental design to nurses, physiotherapists and other assorted groups. They usually understood my favourite example: imagine you are watching at a railway station. More and more people arrive until the platform is crowded, and then — hey presto — along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B).

I soon discovered that this understanding tended to slip away again and again, until I began a new regime, and started every lecture with an invented example to get them thinking.

"Right," I might say, "Suppose it's been discovered (I don't mean it's true) that children who eat more tomato ketchup do worse in their exams. Why could this be?" They would argue that it wasn't true (I'd explain the point of thought experiments again). "But there'd be health warnings on ketchup if it's poisonous" (Just pretend it's true for now please) and then they'd start using their imaginations.

"There's something in the ketchup that slows down nerves", "Eating ketchup makes you watch more telly instead of doing your homework", "Eating more ketchup means eating more chips and that makes you fat and lazy". Yes, yes, probably wrong but great examples of A causes B — go on. And so to "Stupid people have different taste buds and don't like ketchup", "Maybe if you don't pass your exams your Mum gives you ketchup". And finally, "Poorer people eat more junk food and do less well at school".

Next week: "Suppose we find that the more often people consult astrologers or psychics the longer they live." "But it can't be true — astrology's bunkum" (Sigh … just pretend it's true for now please.) OK. "Astrologers have a special psychic energy that they radiate to their clients", "Knowing the future means you can avoid dying", "Understanding your horoscope makes you happier and healthier." Yes, yes, excellent ideas, go on. "The older people get the more often they go to psychics", "Being healthy makes you more spiritual and so you seek out spiritual guidance". Yes, yes, keep going, all testable ideas, and finally, "Women go to psychics more often and also live longer than men."

The point is that once you greet any new correlation with "CINAC" your imagination is let loose. Once you listen to every new science story Cinacally (which conveniently sounds like "cynically") you find yourself thinking: OK, if A doesn't cause B, could B cause A? Could something else cause them both or could they both be the same thing even though they don't appear to be? What's going on? Can I imagine other possibilities? Could I test them? Could I find out which is true? Then you can be critical of the science stories you hear. Then you are thinking like a scientist.
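The railway-timetable logic (C causes both A and B) is easy to demonstrate in a few lines of simulation. The sketch below is purely illustrative, using nothing beyond the Python standard library: a hidden factor `c` drives both `a` and `b`, so the two correlate strongly even though neither causes the other.

```python
import random

random.seed(0)

# C (the "railway timetable") drives both A and B;
# neither causes the other, yet they correlate strongly.
n = 1000
c = [random.gauss(0, 1) for _ in range(n)]
a = [ci + random.gauss(0, 0.5) for ci in c]  # A depends only on C
b = [ci + random.gauss(0, 0.5) for ci in c]  # B depends only on C

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(round(corr(a, b), 2))  # strong positive correlation, no causal link
```

Shown only the correlation between `a` and `b`, a naive reader would infer that one causes the other; only by imagining and then finding `c` does the real structure appear, which is exactly the Cinacal habit of mind the essay recommends.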

Stories of health scares and psychic claims may get people's attention but understanding that a correlation is not a cause could raise levels of debate over some of today's most pressing scientific issues. For example, we know that global temperature rise correlates with increasing levels of atmospheric carbon dioxide but why? Thinking Cinacally means asking which variable causes which or whether something else causes both, with important consequences for social action and the future of life on earth.

Some say that the greatest mystery facing science is the nature of consciousness. We seem to be independent selves having consciousness and free will, and yet the more we understand how the brain works, the less room there seems to be for consciousness to do anything. A popular way of trying to solve the mystery is the hunt for the "neural correlates of consciousness". For example, we know that brain activity in parts of the motor cortex and frontal lobes correlates with conscious decisions to act. But do our conscious decisions cause the brain activity, does the brain activity cause our decisions, or are both caused by something else?

The fourth possibility is that brain activity and conscious experiences are really the same thing, just as light turned out not to be caused by electromagnetic radiation but to be electromagnetic radiation, or heat turned out to be the movement of molecules in a fluid. At the moment we have no inkling of how consciousness could be brain activity but my guess is that it will turn out that way. Once we clear away some of our delusions about the nature of our own minds, we may finally see why there is no deep mystery and our conscious experiences simply are what is going on inside our brains. If this is right then there are no neural correlates of consciousness. But whether it is or not, remembering CINAC and working slowly from correlations to causes is likely to be how this mystery is finally solved.


P.Z. MYERS
Biologist, University of Minnesota; blogger, Pharyngula

The Mediocrity Principle

As someone who just spent a term teaching freshman introductory biology, and will be doing it again in the coming months, I have to say that the first thing that leapt to my mind as an essential skill everyone should have was algebra. And elementary probability and statistics. That sure would make my life easier, anyway — there's something terribly depressing about seeing bright students tripped up by a basic math skill that they should have mastered in grade school.

But that isn't enough. Elementary math skills are an essential tool that we ought to be able to take for granted in a scientific and technological society. What idea should people grasp to better understand their place in the universe?

I'm going to recommend the mediocrity principle. It's fundamental to science, and it's also one of the most contentious, difficult concepts for many people to grasp — and opposition to the mediocrity principle is one of the major linchpins of religion and creationism and jingoism and failed social policies. There are a lot of cognitive ills that would be neatly wrapped up and easily disposed of if only everyone understood this one simple idea.

The mediocrity principle simply states that you aren't special. The universe does not revolve around you, this planet isn't privileged in any unique way, your country is not the perfect product of divine destiny, your existence isn't the product of directed, intentional fate, and that tuna sandwich you had for lunch was not plotting to give you indigestion. Most of what happens in the world is just a consequence of natural, universal laws — laws that apply everywhere and to everything, with no special exemptions or amplifications for your benefit — given variety by the input of chance. Everything that you as a human being consider cosmically important is an accident. The rules of inheritance and the nature of biology meant that when your parents had a baby, it was anatomically human and mostly fully functional physiologically, but the unique combination of traits that makes you male or female, tall or short, brown-eyed or blue-eyed was the result of a chance shuffle of genetic attributes during meiosis, a few random mutations, and the luck of the draw in the grand sperm race at fertilization.

Don't feel bad about that, though, it's not just you. The stars themselves form as a result of the properties of atoms, the specific features of each star set by the chance distribution of ripples of condensation through clouds of dust and gas. Our sun wasn't required to be where it is, with the luminosity it has — it just happens to be there, and our existence follows from this opportunity. Our species itself is partly shaped by the force of our environment through selection, and partly by fluctuations of chance. If humans had gone extinct 100,000 years ago, the world would go on turning, life would go on thriving, and some other species would be prospering in our place — and most likely not by following the same intelligence-driven technological path we did.

And if you understand the mediocrity principle, that's OK.

The reason this is so essential to science is that it's the beginning of understanding how we came to be here and how everything works. We look for general principles that apply to the universe as a whole first, and those explain much of the story; and then we look for the quirks and exceptions that led to the details. It's a strategy that succeeds and is useful in gaining a deeper knowledge. Starting with a presumption that a subject of interest represents a violation of the properties of the universe, that it was poofed uniquely into existence with a specific purpose, and that the conditions of its existence can no longer apply, means that you have leapt to an unfounded and unusual explanation with no legitimate reason. What the mediocrity principle tells us is that our state is not the product of intent, that the universe lacks both malice and benevolence, but that everything does follow rules — and that grasping those rules should be the goal of science.


SAM HARRIS
Neuroscientist; Chairman, Project Reason; Author, The Moral Landscape

We are Lost in Thought

I invite you to pay attention to anything — the sight of this text, the sensation of breathing, the feeling of your body resting against your chair — for a mere sixty seconds without getting distracted by discursive thought. It sounds simple enough: Just pay attention. The truth, however, is that you will find the task impossible. If the lives of your children depended on it, you could not focus on anything — even the feeling of a knife at your throat — for more than a few seconds, before your awareness would be submerged again by the flow of thought. This forced plunge into unreality is a problem. In fact, it is the problem from which every other problem in human life appears to be made.

I am by no means denying the importance of thinking. Linguistic thought is indispensable to us. It is the basis for planning, explicit learning, moral reasoning, and many other capacities that make us human. Thinking is the substance of every social relationship and cultural institution we have. It is also the foundation of science. But our habitual identification with the flow of thought — that is, our failure to recognize thoughts as thoughts, as transient appearances in consciousness — is a primary source of human suffering and confusion.

Our relationship to our own thinking is strange to the point of paradox, in fact. When we see a person walking down the street talking to himself, we generally assume that he is mentally ill. But we all talk to ourselves continuously — we just have the good sense to keep our mouths shut. Our lives in the present can scarcely be glimpsed through the veil of our discursivity: We tell ourselves what just happened, what almost happened, what should have happened, and what might yet happen. We ceaselessly reiterate our hopes and fears about the future. Rather than simply exist as ourselves, we seem to presume a relationship with ourselves. It's as though we are having a conversation with an imaginary friend possessed of infinite patience. Who are we talking to?

While most of us go through life feeling that we are the thinker of our thoughts and the experiencer of our experience, from the perspective of science we know that this is a distorted view. There is no discrete self or ego lurking like a minotaur in the labyrinth of the brain. There is no region of cortex or pathway of neural processing that occupies a privileged position with respect to our personhood. There is no unchanging "center of narrative gravity" (to use Daniel Dennett's phrase). In subjective terms, however, there seems to be one — to most of us, most of the time.

Our contemplative traditions (Hindu, Buddhist, Christian, Muslim, Jewish, etc.) also suggest, to varying degrees and with greater or lesser precision, that we live in the grip of a cognitive illusion. But the alternative to our captivity is almost always viewed through the lens of religious dogma. A Christian will recite the Lord's Prayer continuously over a weekend, experience a profound sense of clarity and peace, and judge this mental state to be fully corroborative of the doctrine of Christianity; a Hindu will spend an evening singing devotional songs to Krishna, feel suddenly free of his conventional sense of self, and conclude that his chosen deity has showered him with grace; a Sufi will spend hours whirling in circles, pierce the veil of thought for a time, and believe that he has established a direct connection to Allah.

The universality of these phenomena refutes the sectarian claims of any one religion. And, given that contemplatives generally present their experiences of self-transcendence as inseparable from their associated theology, mythology, and metaphysics, it is no surprise that scientists and nonbelievers tend to view their reports as the product of disordered minds, or as exaggerated accounts of far more common mental states — like scientific awe, aesthetic enjoyment, artistic inspiration, etc.

Our religions are clearly false, even if certain classically religious experiences are worth having. If we want to actually understand the mind, and overcome some of the most dangerous and enduring sources of conflict in our world, we must begin thinking about the full spectrum of human experience in the context of science.

But we must first realize that we are lost in thought.


ANTHONY AGUIRRE
Associate Professor of Physics, University of California, Santa Cruz

The Paradox

Paradoxes arise when one or more convincing truths contradict each other, clash with other convincing truths, or violate unshakeable intuitions. They are frustrating, yet beguiling. Many see virtue in avoiding, glossing over, or dismissing them. Instead we should seek them out, and if we find one, sharpen it, push it to the extreme, and hope that the resolution will reveal itself, for with that resolution will invariably come a dose of Truth.

History is replete with examples, and with failed opportunities. One of my favorites is Olbers' paradox. Suppose the universe were filled with an eternal, roughly uniform distribution of shining stars. Faraway stars would look dim because they take up a tiny angle on the sky; but within that angle they are as bright as the Sun's surface. Yet in an eternal and infinite (or finite but unbounded) space, every direction would lie within the angle taken up by some star. The sky would be alight like the surface of the Sun. Thus, a simple glance at the dark night sky reveals that the universe must be dynamic: expanding, or evolving. Astronomers grappled with this paradox for several centuries, devising unworkable schemes for its resolution. Despite at least one correct view (by Edgar Allan Poe!), the implications never really permeated even the small community of people thinking about the fundamental structure of the universe. And so it was that Einstein, when he went to apply his new theory to the universe, sought an eternal and static model that could never make sense, introduced a term into his equations which he called his greatest blunder, and failed to invent the big-bang theory of cosmology.
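The geometric heart of the argument is that dimming goes as 1/r² while the number of stars in a shell at distance r grows as r², so the two exactly cancel and every shell contributes the same flux. A minimal sketch (not from the essay; all units and constants are arbitrary, for illustration only):

```python
import math

def shell_flux(radius, thickness=1.0, star_density=1.0, luminosity=1.0):
    """Total flux at an observer from all stars in a thin spherical shell
    at `radius` (arbitrary illustrative units)."""
    n_stars = star_density * 4 * math.pi * radius**2 * thickness   # stars in shell grows as r^2
    flux_per_star = luminosity / (4 * math.pi * radius**2)         # each star dims as 1/r^2
    return n_stars * flux_per_star                                 # the r^2 factors cancel

# Near shells and far shells contribute equally, so summing over
# infinitely many shells diverges: the night sky "should" blaze
# like a stellar surface.
for r in (1.0, 10.0, 100.0, 1000.0):
    print(r, shell_flux(r))
```

Since the contribution per shell is constant, only a dynamic universe (finite age, expansion, redshift) rescues the dark night sky.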

Nature appears to contradict itself with the utmost rarity, and so a paradox can be an opportunity for us to lay bare our cherished assumptions, and discover which of them we must let go. But a good paradox can take us farther, to reveal that not just the assumptions but the very modes of thinking we employed in creating the paradox must be replaced. Particles and waves? Not truth, just convenient models. The same number of integers as perfect squares of integers? Not crazy, though you might be if you invent cardinality. This sentence is false. And so, says Gödel, might be the foundations of any formal system that can refer to itself. The list goes on.
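The integers-versus-squares puzzle (Galileo's paradox) dissolves once you see that the map n ↔ n² pairs every non-negative integer with exactly one perfect square, so neither set is "bigger". A toy sketch, with function names of my own choosing:

```python
import math

def square(n):
    """One direction of the bijection: integer -> perfect square."""
    return n * n

def unsquare(s):
    """The inverse, defined on perfect squares only."""
    root = math.isqrt(s)
    assert root * root == s, "not a perfect square"
    return root

# Every integer finds a unique partner, and vice versa:
pairs = [(n, square(n)) for n in range(5)]
print(pairs)  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
assert all(unsquare(s) == n for n, s in pairs)
```

The pairing never runs out in either direction, which is precisely Cantor's criterion for two infinite sets having the same cardinality.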

What next? I've got a few big ones I'm wrestling with. How can thermodynamics' second law arise unless cosmological initial conditions are fine-tuned in a way we would never accept in any other theory or explanation of anything? How do we do science if the universe is infinite, and every outcome of every experiment occurs infinitely many times?

What impossibility is nagging at you?

