Tech Culture Journalist; Partner, Contributor, Co-editor, Boing Boing; Executive Producer, host, Boing Boing Video

Ambient Memory And The Myth Of Neutral Observation

Like others whose early life experiences were punctuated with trauma, my memory has holes. Some of those holes are as wide as years. Others, just big enough to swallow painful incidents that lasted moments, but reverberated for decades.

The brain-record of those experiences sometimes submerges, then resurfaces, sometimes submerging again over time. As I grow older, stronger, and more capable of contending with memory, I become more aware of how different my own internal record may be from those of others who lived the identical moment.

Each of us commits our experiences to memory and permanence differently. Time and human experience are not linear, nor is there one and only one neutral record of each lived moment. Human beings are impossibly complex tarballs of muscle, blood, bone, breath, and electrical pulses that travel through nerves and neurons; we are bundles of electrical pulses carrying payloads, pings hitting servers. And our identities are inextricably connected to our environments: no story can be told without a setting.

My generation is the last generation of human beings who were born into a pre-internet world, but who matured in tandem with that great, networked hive-mind. In the course of my work online, committing new memories to network mind each day, I have come to understand that our shared memory of events, of truths, of biography, and of fact-- all of this shifts and ebbs and flows, just as our most personal memories do.

Ever-edited Wikipedia replaces paper encyclopedias. The chatter of Twitter eclipses fixed-form and hierarchical communication. The news flow we remember from our childhoods, a single voice of authority on one of three channels, is replaced by something hyper-evolving, chaotic, and less easily defined. Even the formal histories of State may be rewritten by the likes of Wikileaks, and its yet-unlaunched children.

Facts are more fluid than in the days of our grandfathers. In our networked mind, the very act of observation--reporting or tweeting or amplifying some piece of experience--changes the story. The trajectory of information, the velocity of this knowledge on the network, changes the very nature of what is remembered, who remembers it, and for how long it remains part of our shared archive. There are no fixed states.

So must our notion of memory and record evolve.

The history we are creating now is alive. Let us find new ways of recording memory, new ways of telling the story, that reflect life. Let us embrace this infinite complexity as we commit new history to record.

Let us redefine what it means to remember.

Researcher, MIT Media Lab

Information Flow

The concept of cause and effect is better understood as the flow of information between two connected events, from the earlier event to the later one. Saying "A causes B" sounds precise, but is actually very vague. I would specify much more by saying "with the information that A has happened, I can compute with almost total confidence* that B will happen." The latter rules out the possibility that other factors could prevent B even if A does happen, but allows the possibility that other factors could cause B even if A doesn't happen.

As shorthand, we can say that one set of information "specifies" another if the latter can be deduced or computed from the former.  Note that this doesn't only apply to one-bit sets of information, like the occurrence of a specific event. It can also apply to symbolic variables (given the state of the Web, the results you get from a search engine are specified by your query), numeric variables (the number read off a precise thermometer is specified by the temperature of the sensor), or even behavioral variables (the behavior of a computer is specified by the bits loaded in its memory).

But let's take a closer look at the assumptions we're making. Astute readers may have noticed that in one of my examples, I assumed that the entire state of the Web was a constant. How ridiculous! In mathematical parlance, assumptions are known as "priors," and in a certain widespread school of statistical thought, they are considered the most important aspect of any process involving information. What we really want to know is if, given a set of existing priors, adding one piece of information (A) would allow us to update our estimate of the likelihood of another piece of information (B). Of course, this depends on the priors — for instance, if our priors include absolute knowledge of B, then an update will not be possible.
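The updating described here is Bayes' rule in miniature. A minimal sketch in Python; the probabilities are illustrative numbers of my own, not anything from the essay:

```python
def bayes_update(prior_b, p_a_given_b, p_a_given_not_b):
    """Return P(B | A): our estimate of B after receiving the information A."""
    p_a = p_a_given_b * prior_b + p_a_given_not_b * (1 - prior_b)
    return p_a_given_b * prior_b / p_a

# Illustrative priors: B starts at 50%, and A is far more likely when B holds,
# so learning A pushes our estimate of B up sharply.
print(round(bayes_update(0.5, 0.9, 0.2), 3))  # 0.818

# If our priors already include absolute knowledge of B, no update is possible:
print(bayes_update(1.0, 0.9, 0.2))  # 1.0
```

The second call mirrors the essay's caveat: once P(B) is already 1, no amount of information about A can move the estimate.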

If, for most reasonable sets of priors, information about A would allow us to update our estimate of B, then it would seem there is some sort of causal connection between the two. But the form of the causal connection is unspecified — a principle often called "correlation does not imply causation." The reason for this is that the essence of causation as a concept rests on our tendency to have information about earlier events before we have information about later events. (The full implications of this concept on human consciousness, the second law of thermodynamics, and the nature of time are interesting, but sadly outside the scope of this essay.)

If information about all events always came in the order they occurred, then correlation would indeed imply causation. But, in the real world, not only are we limited to observing events in the past, but we may also discover information about those events out of order.  Thus, the correlations we observe could be reverse causes (information about A allows us to update our estimate of B, although B happened first and thus was the cause of A) or even more complex situations (e.g. information about A allows us to update our estimate of B, but is also giving us information about C, which happened before either A or B and caused both).

Information flow is symmetric: if information about A were to allow us to update our estimate of B, then information about B would allow us to update our estimate of A. But since we cannot change the past or know the future, these constraints are only useful to us when contextualized temporally and arranged in order of occurrence. Information flow is always from the past to the future, but in our minds, some of the arrows may be reversed. Resolving this ambiguity is essentially the problem that science was designed to solve. If you can master the technique of visualizing all information flow and keeping track of your priors, then the full power of the scientific method — and more — is yours to wield from your personal cognitive toolkit.

* In our universe, too many things are interconnected for absolute statements of any kind, so we usually relax our criteria; for instance, "total confidence" might be relaxed from a 0% chance of being wrong to, say, a 1 in 3 quadrillion chance of being wrong — about the chance that, as you finish this sentence, all of humanity will be wiped out by a meteor.

Managing Director, Digital Science, Macmillan Publishers Ltd

The Controlled Experiment

The scientific concept that most people would do well to understand and exploit is the one that almost defines science itself: the controlled experiment.

When required to make a decision, the instinctive response of most non-scientists is to introspect, or perhaps call a meeting. The scientific method dictates that wherever possible we should instead conduct a suitable controlled experiment. The superiority of the latter approach is demonstrated not only by the fact that science has uncovered so much about the world in which we live, but also, and even more powerfully, by the fact that such a lot of it — from the Copernican principle and evolution by natural selection to general relativity and quantum mechanics — is so mind-bendingly counter-intuitive.

Our embrace of truth as defined by experiment (rather than by common sense, or consensus, or seniority, or revelation or any other means) has in effect released us from the constraints of our innate preconceptions, prejudices and lack of imagination. Instead it has freed us to appreciate the universe in terms that are well beyond our abilities to derive by intuition alone.

What a shame then that experiments are, by and large, used only by scientists. What if businesspeople and policy-makers were to spend less time relying on instinct or partially informed debate, and more time devising objective ways to identify the best answers? I think that would often lead to better decisions.

In some domains this is already starting to happen. Online companies like Amazon and Google don't anguish over how to design their websites. Instead they conduct controlled experiments by showing different versions to different groups of users until they have iterated to an optimal solution. (And with the amount of traffic those sites receive, individual tests can be completed in seconds.) They are helped, of course, by the fact that the web is particularly conducive to rapid data acquisition and product iteration. But they are helped even more by the fact that their leaders often have backgrounds in engineering or science and therefore adopt a scientific — which is to say, experimental — mindset.

Government policies — from teaching methods in schools to prison sentencing to taxation — would also benefit from more use of controlled experiments. This is where many people start to get squeamish. To become the subject of an experiment in something as critical or controversial as our children's education or the incarceration of criminals feels like an affront to our sense of fairness, and our strongly held belief in the right to be treated exactly the same as everybody else. 

After all, if there are separate experimental and control groups then surely one of them must be losing out. Well, no, because we do not know in advance which group will be better off, which is precisely why we are conducting the experiment. Only when a potentially informative experiment is not conducted do true losers arise: all those future generations who stood to benefit from the results. The real reason people are uncomfortable is simply that we're not used to seeing experiments conducted in these domains. After all, we willingly accept them in the much more serious context of clinical trials, which are literally matters of life and death.

Of course, experiments are not a panacea. They will not tell us, for example, whether an accused person is innocent or guilty. Moreover, experimental results are often inconclusive. In such circumstances a scientist can shrug his shoulders and say that he is still unsure, but a businessperson or lawmaker will often have no such luxury and may be forced to make a decision anyway. Yet none of this takes away from the fact that the controlled experiment is the best method yet devised to reveal truths about the world, and we should use it wherever it can be sensibly applied.

Independent Theoretical Physicist

Uncalculated Risk

We humans are terrible at dealing with probability. We are not merely bad at it, but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our wellbeing. This incompetence is reflected in our language, in which the common words used to convey likelihood are "probably" and "usually" — vaguely implying a 50% to 100% chance. Going beyond crude expression requires awkwardly geeky phrasing, such as "with 70% certainty," likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness — the inability to deal with probability — may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.

Imagine the typical emotional reaction to seeing a spider: fear, ranging from minor trepidation to terror. But what is the likelihood of dying from a spider bite? Fewer than four people a year (on average) die from spider bites, establishing the expected risk of death-by-spider at lower than one in a hundred million. This risk is so minuscule that it is actually counterproductive to worry about it! Millions of people die each year from stress-related illnesses.

The startling implication is that the risk of being bitten and killed by a spider is less than the risk that being afraid of spiders will kill you from increased stress. Our irrational fears and inclinations are costly. The typical reaction to seeing a sugary donut is the desire to consume it. But, given the potential negative impact of that donut, including the increased risk of heart disease and reduction in overall health, our reaction should rationally be one of fear and revulsion. It may seem absurd to fear a donut — or, even more dangerous, a cigarette — but this reaction rationally reflects the potential negative impact on our lives.

We are especially ill-equipped to manage risk when dealing with small likelihoods of major events. This is evidenced by the success of lotteries and casinos at taking people's money, but there are many other examples. The likelihood of being killed by terrorism is extremely low, yet we have instituted actions to counter terrorism that significantly reduce our quality of life. As a recent example, x-ray body scanners could increase the risk of cancer to a degree greater than the risk from terrorism — the same sort of counterproductive overreaction as the one to spiders. This does not imply we should let spiders, or terrorists, crawl all over us — but the risks need to be managed rationally.

Socially, the act of expressing uncertainty is a display of weakness. But our lives are awash in uncertainty, and rational consideration of contingencies and likelihoods is the only sound basis for good decisions. As another example, a federal judge recently issued an injunction blocking stem cell research funding. The shallowly viewed implication is that some scientists won't be getting money; but what is really at stake is much more important. The probability that stem cell research will quickly lead to life-saving medicine is low, but, if successful, the positive impact could be huge. If one considers outcomes and approximates the probabilities, the conclusion is that the judge's decision destroyed the lives of thousands of people, based on probabilistic expectation.

How do we make rational decisions based on contingencies? That judge didn't actually cause thousands of people to die... or did he? If we follow the "many worlds" interpretation of quantum physics — the most direct interpretation of its mathematical description — then our universe is continually branching into all possible contingencies, and there is a world in which stem cell research saves millions of lives, and another world in which people die because of the judge's decision. Using the "frequentist" method of calculating probability, we have to add the probabilities of the worlds in which an event occurs to obtain the probability of that event.

Quantum mechanics dictates that the world we experience will happen according to this probability — the likelihood of the event. In this bizarre way, quantum mechanics reconciles the frequentist and "Bayesian" points of view, equating the frequency of an event over many possible worlds with its likelihood. An "expectation value," such as the expected number of people killed by the judge's decision, is the number of people killed in the various contingencies, weighted by their probabilities. This expected value is not necessarily likely to happen, but is the weighted average of the expected outcomes — useful information when making decisions. In order to make good decisions about risk we need to become better at these mental gymnastics, improve our language, and retrain our intuition.
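An expectation value of this kind is just a probability-weighted sum over the contingencies. A toy sketch in Python; the probabilities and outcomes are entirely made up for illustration (the essay supplies no numbers):

```python
# Hypothetical contingencies: (probability, lives saved by the research).
contingencies = [
    (0.05, 1_000_000),  # research succeeds spectacularly
    (0.15, 10_000),     # modest success
    (0.80, 0),          # no therapeutic payoff
]

# Probabilities over all branches must sum to one.
assert abs(sum(p for p, _ in contingencies) - 1.0) < 1e-9

# The expectation value: outcomes weighted by their probabilities.
expected_lives = sum(p * lives for p, lives in contingencies)
print(expected_lives)  # 51500.0
```

As the essay notes, 51,500 is not an outcome anyone experiences in any single branch; it is the weighted average that matters when making the decision.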

Perhaps the best arena for honing our skills and making precise probabilistic assessments would be a betting market — an open site for betting on the outcomes of many quantifiable and socially significant events. In making good bets, all the tools and shorthand abstractions of Bayesian inference come into play — translating directly to the ability to make good decisions. With these skills, the risks we face in everyday life would become clearer, and we would develop more rational intuitive responses to uncalculated risks, based on collective rational assessment and social conditioning.

We might get over our excessive fear of spiders, and develop a healthy aversion to donuts, cigarettes, television, and stressful full-time employment. We would become more aware of the low cost compared to probable rewards of research, including research into improving the quality and duration of human life. And, more subtly, as we became more aware and apprehensive of ubiquitous vague language, such as "probably" and "usually," our standards of probabilistic description would improve.

Making good decisions requires concentrated mental effort; and if we overdo it we run the risk of being counterproductive through increased stress and wasted time. So it's best to balance, and play, and take healthy risks — as the greatest risk is that we'll get to the end of our lives having never risked them on anything.

Planetary Scientist

The Gibbs Landscape

Biology is rarely wasteful. Sure, on the individual organism level there is plenty of waste involved with reproduction and other activities (think of all the fruit on a tree or the millions of sperm that lose out in the race to the egg). But on the ecosystem level one bug's trash is another bug's treasure - provided that some useful energy can still be extracted by reacting that trash with something else in the environment. The food chain is not a simple linear staircase of predator-prey relationships; it is a complex fabric of organisms large, small, and microscopic interacting with each other and with the environment to tap every possible energetic niche.

Geobiologists and astrobiologists can measure and map this energy - referred to as the Gibbs free energy. Doing so is useful for assessing the energetic limits of life on Earth and for assessing potentially habitable regions on other worlds. In an ecosystem Gibbs free energy - named for its discoverer, the late-19th-century scientist J. Willard Gibbs - is the energy in a biochemical reaction that is available to do work. It's the energy left over after producing some requisite waste heat and a dollop or two of entropy. This energy to do work is harnessed by biological systems for activities like making repairs, growing, and reproducing. For a given metabolic pathway used by life, e.g. reacting carbohydrates with oxygen, we can measure how many joules are available to do work per mole of reactants. Humans and essentially all the animals you know and love typically harness a couple thousand kilojoules per mole by burning food with oxygen. Microbes have figured out all sorts of ways to harness the Gibbs free energy by combining various gases, liquids, and rocks. Measurements by Tori Hoehler and colleagues at NASA Ames Research Center on methane-generating and sulfate-eating microbes indicate that the limit for life may be about 10 kilojoules per mole. Within a given environment there may be many chemical pathways in operation, and if there is an open energetic niche, chances are life will find a way to fill it. Biological ecosystems can be mapped as a landscape of reactions and pathways for harnessing energy; this is the Gibbs landscape.
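For readers who want the formula behind the name: ΔG = ΔH − TΔS, the enthalpy change of a reaction minus the portion bound up in entropy at temperature T. A toy calculation in Python, with made-up numbers rather than any real metabolic pathway:

```python
def gibbs_free_energy(delta_h, temp_k, delta_s):
    """Delta G = Delta H - T * Delta S (kJ/mol): energy available to do work."""
    return delta_h - temp_k * delta_s

# Hypothetical reaction: 500 kJ/mol released as enthalpy at 298 K, with an
# entropy change of -0.3 kJ/(mol*K); about 89 kJ/mol is lost to the T*dS term.
dg = gibbs_free_energy(delta_h=-500.0, temp_k=298.0, delta_s=-0.3)
print(round(dg, 1))  # -410.6 kJ/mol free to do work
```

Under the usual sign convention, a more negative ΔG means more energy available; organisms near the roughly 10 kJ/mol limit are scraping by on reactions whose ΔG is barely below zero.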

Civilizations and the rise of industrial and technological ecosystems bring a new challenge to our understanding of the dynamic between energy needs and energy resources. The Gibbs landscape provides a short-hand abstraction for conceptualizing this dynamic. We can imagine any given city, country, or continent overlain with a map of energy available to do work. This includes, but extends beyond the chemical energy framework used in the context of biological ecosystems. For instance, automobiles with internal combustion engines metabolize gasoline with air. Buildings eat the electricity supplied by power plants or rooftop solar panels. Every component in modern industrial society occupies some niche in the landscape.

But importantly, many of the Gibbs landscapes in place today are rife with unoccupied niches. The systems we have designed and built are inefficient and incomplete in the utilization of energy to do the work of civilization's ecosystems. Much of what we have designed excels at producing waste heat with little concern for optimizing work output. From lights that remain on all night to landfills that contain discarded resources, the Gibbs landscapes of today offer much room for technological innovation and evolution. The Gibbs landscape also provides a way for visualizing untapped capacity to do work — wind, solar, hydroelectric, tides and geothermal, these are just a few of the layers. Taken together, all of these layers show us where and how we can work to close the loops and connect the dangling threads of our nascent technological civilization.

When you start to view the world around you with Gibbsian eyes you see the untapped potential in so many of our modern technological and industrial ecosystems. It's disturbing at first because we've done such a poor job, but the marriage between civilization and technology is young. The landscape provides much reason for hope as we continue to innovate and strive to reach the balance and continuity that has served complex biological ecosystems so well for billions of years on Earth.

Director, Institute of Philosophy School of Advanced Study University of London; writer and presenter, BBC World Service series "The Mysteries of the Brain"

The Senses and the Multi-Sensory

For far too long we have laboured under a faulty conception of the senses. Ask anyone you know how many senses we have and they will probably say five; unless they start talking to you about a sixth sense. But why pick five? What of the sense of balance provided by the vestibular system, telling you whether you are going up or down in a lift, forwards or backwards on a train, or side to side on a boat? What about proprioception that gives you a firm sense of where your limbs are when you close your eyes? What about feeling pain, hot and cold? Are these just part of touch, like feeling velvet or silk? And why think of sensory experiences like seeing, hearing, tasting, touching and smelling as being produced by a single sense?

Contemporary neuroscientists have postulated two visual systems — one responsible for how things look to us, the other for controlling action — that operate independently of one another. The eye may fall for visual illusions but the hand does not, reaching smoothly for a shape that looks larger than it is to the observer.

And it doesn't stop here. There is good reason to think that we have two senses of smell: an external sense of smell, orthonasal olfaction, produced by inhaling, that enables us to detect things in the environment such as food, predators or smoke; and an internal sense, retronasal olfaction, produced by exhaling, that enables us to detect the quality of what we have just eaten, allowing us to decide whether we want any more or should expel it.

Associated with each sense of smell is a distinct hedonic response. Orthonasal olfaction gives rise to the pleasure of anticipation. Retronasal olfaction gives rise to the pleasure of reward. Anticipation is not always matched by reward. Have you ever noticed how the enticing aromas of freshly brewed coffee are never quite matched by the taste? There is always a little disappointment. Interestingly, the one food where the intensities of orthonasally and retronasally judged aromas match perfectly is chocolate. We get just what we expected, which may explain why chocolate is such a powerful stimulus.

Besides the proliferation of the senses in contemporary neuroscience, another major change is taking place. We used to study the senses in isolation, with the great majority of researchers focusing on vision. Things are rapidly changing. We now know that the senses do not operate in isolation, but combine at both early and late stages of processing to produce our rich perceptual experiences of our surroundings. It is almost never the case that our experience presents us with just sights or sounds. We are always enjoying conscious experiences made up of sights and sounds, smells, the feel of our body, the taste in our mouths; and yet these are not presented as separate sensory parcels. We simply take in the rich and complex scene without giving much thought to how the different contributors produce the whole experience.

We give little thought to how smell provides a background to every conscious waking moment. People who lose their sense of smell can be plunged into depression, and show less sign of recovery a year later than people who lose their sight. This is because familiar places no longer smell the same, and people no longer have their reassuring olfactory signature. Also, patients who lose their smell believe they have lost their sense of taste. When tested, they acknowledge that they can taste sweet, sour, salty, bitter, savoury, and metallic. But everything else that is missing from the taste of what they are eating is due to retronasal smell.

What we call taste is one of the most fascinating case studies in how inaccurate our view of our senses is: it is not produced by the tongue alone but is always an amalgam of taste, touch and smell. Touch contributes to sauces tasting creamy, and to other foods tasting chewy, crisp, or stale. The only difference between potato chips that "taste" fresh and those that "taste" stale is a difference in texture. The largest part of what we call "taste" is in fact smell in the form of retronasal olfaction, which is why people who lose their ability to smell say they can no longer taste anything. Taste, touch and smell are not merely combined to produce experiences of foods or liquids; rather, the information from the separate sensory channels is fused into a unified experience of what we call taste and food scientists call flavour.

Flavour perception is the result of multi-sensory integration of gustatory, olfactory and oral somatosensory information into a single experience whose components we are unable to distinguish. It is one of the most multi-sensory experiences we have and can be influenced by both sight and sound. The colours of wines and the sounds food make when we bite or chew them can have large impacts on our resulting appreciation and assessment, and irritation of the trigeminal nerve in the face will make chillies feel "hot" and menthol feel "cool" in the mouth without any actual change in temperature.

In sensory perception, multi-sensory integration is the rule, not the exception. In audition, we don't just hear with our ears; we use our eyes to locate the apparent sources of sounds. In the cinema we "hear" the voices coming from the actors' mouths on the screen, although the sounds are coming from the sides of the theatre. This is known as the ventriloquism effect. Similarly, retronasal odours detected by olfactory receptors in the nose are experienced as tastes in the mouth. The sensations get re-located to the mouth because oral sensations of chewing or swallowing capture our attention, making us think these olfactory experiences are occurring in the same place.

Other surprising collaborations among the senses are due to cross-modal effects, where stimulation of one sense boosts activity in another. Looking at someone's lips across a crowded room can improve our ability to hear what they are saying, and the smell of vanilla can make a liquid we sip "taste" sweeter, and less sour. This is why we say vanilla is sweet smelling, although sweet is a taste, and pure vanilla is not sweet at all. Industrial manufacturers know about these effects and exploit them. Certain aromas in shampoos, for example, can make the hair "feel" softer; and red coloured drinks "taste" sweet, while drinks with a light green colour "taste" sour. In many of these interactions vision will dominate, but not in every case.

Anyone unlucky enough to have a disturbance in their vestibular system will feel that the world is spinning, although cues from the eyes and the body should be telling them everything is still. Instead, the brain goes with the combined picture, and vision and proprioception fall in line. Luckily, our senses usually cooperate to get us around the world, and the world we inhabit is not a sensory but a multi-sensory one.

Psychologist; Director of the Berger Institute for Work, Family, and Children at Claremont McKenna College

A Statistically Significant Difference in Understanding the Scientific Process

Statistically significant difference — it is a simple phrase that is essential to science and that has become common parlance among educated adults. These three words convey a basic understanding of the scientific process, random events, and the laws of probability. The term appears almost everywhere that research is discussed — in newspaper articles, advertisements for "miracle" diets, research publications, and student laboratory reports, to name just a few of the many diverse contexts where the term is used. It is a shorthand abstraction for a sequence of events that includes an experiment (or other research design), the specification of a null and alternative hypothesis, (numerical) data collection, statistical analysis, and the probability of an unlikely outcome. That is a lot of science conveyed in a few words.

It would be difficult to understand the outcome from any research without at least a rudimentary understanding of what is meant by the conclusion that the researchers found or did not find evidence of a "statistically significant difference." Unfortunately, the old saying that "a little knowledge is a dangerous thing" applies to the partial understanding of this term. One problem is that "significant" has a different meaning when used in everyday speech than when used to report research findings.

Most of the time, the word "significant" means that something important happened. For example, if a physician told you that you would feel significantly better following surgery, you would correctly infer that your pain would be reduced by a meaningful amount—you would feel less pain. But, when used in "statistically significant difference," the term "significant" means that the results are unlikely to be due to chance (if the null hypothesis were true); the results may or may not be important. In addition, sometimes the conclusion will be wrong, because researchers can only assert their conclusions at some level of probability. "Statistically significant difference" is a core concept in research and statistics, but as anyone who was taught undergraduate statistics or research methods can tell you, it is not an intuitive idea.

Despite the fact that "statistically significant difference" communicates a cluster of ideas that are essential to the scientific process, there are many pundits who would like to see it removed from our vocabulary because it is frequently misunderstood. Its use underscores the marriage of science and probability theory, and despite its popularity, or perhaps because of it, some experts have called for a divorce because the term implies something that it does not, and the public is often misled. In fact, experts are often misled as well. Consider this hypothetical example: In a well-done study that compares the effectiveness of two drugs relative to a placebo, it is possible that Drug X is statistically significantly different from a placebo and Drug Y is not, yet Drugs X and Y might not be statistically significantly different from each other. This could result when Drug X is statistically significantly different from a placebo at a probability level of p < .04, but Drug Y is statistically significantly different from a placebo only at a probability level of p < .06, which is higher than most a priori levels used to test for statistical significance. If just reading about this makes your head hurt, you are among the masses who believe they understand this critical shorthand phrase, which is at the heart of the scientific method, but actually may have a shallow level of understanding.
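The drug example can be reproduced with nothing more than the standard normal distribution. A sketch in Python; the z-scores are chosen by me to land near p < .04 and p < .06, and do not come from any real trial:

```python
from math import sqrt
from statistics import NormalDist

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Each drug compared with placebo, assuming equal standard errors:
z_x, z_y = 2.05, 1.88
print(f"Drug X vs placebo: p = {two_sided_p(z_x):.3f}")  # 0.040 -> "significant"
print(f"Drug Y vs placebo: p = {two_sided_p(z_y):.3f}")  # 0.060 -> "not significant"

# The X-vs-Y comparison pools both standard errors and is nowhere near significant:
z_diff = (z_x - z_y) / sqrt(2)
print(f"Drug X vs Drug Y:  p = {two_sided_p(z_diff):.2f}")  # 0.90
```

Whatever numbers one picks, the moral is the same: "significant versus placebo" for one drug and "not significant" for the other says almost nothing about whether the two drugs differ from each other.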

There are many critically important ways that findings of "statistically significant difference" can be misleading. But even though there are real problems with understanding this term, it is firmly entrenched in everyday discussions of research, and for the general public, using it shows some knowledge of the process of science.

A better understanding of the pitfalls associated with this term would go a long way toward improving our "cognitive toolkits." If common knowledge of what this term means included the ideas that a) the findings may not be important and b) conclusions based on finding, or failing to find, statistically significant differences may be wrong, then we would have significantly advanced general knowledge. When people read or use the term "statistically significant difference," it is an affirmation of the scientific process, which, for all of its limitations and misunderstandings, is a significant advance over alternative ways of knowing about the world. If we could just add these two key concepts to the meaning of that phrase, we could improve how the general public thinks about science.

Professor of Medicine, University of California, San Diego 

The Dece(i)bo Effect

The Dece(i)bo Effect — think portmanteau of Deceive and Placebo — refers to the facile application of constructs, without unpacking the concept and the assumptions on which it relies, in a fashion that, rather than benefiting thinking, leads reasoning astray.

Words and phrases that capture a concept enter common parlance: Occam's razor, placebo, Hawthorne effect. Such phrases and code-words in principle facilitate discourse — and can indeed do so. Deploying the word or catchphrase adds efficiency to the interchange by obviating the need for pesky review of the principles and assumptions encapsulated in the word.

Unfortunately, bypassing the need to articulate the conditions and assumptions on which the validity of the construct rests may lead to bypassing consideration of whether those conditions and assumptions legitimately apply. Use of the term can then, far from fostering sound discourse, serve to undermine it.

Take, for example, "placebo" and "placebo effects." Unpacking the terms: a "placebo" is in principle something that is physiologically "inert" — but believed by the recipient to be active, or possibly so. The term "placebo effect" refers to improvement of a condition when persons have been placed on a placebo, due to the effects of expectation/suggestion.

With these terms well ensconced in the vernacular, Dece(i)bo Effects associated with them are much in evidence.  Key presumptions regarding placebos and placebo effects are more typically wrong than not.

1. When hearing the word "placebo," scientists often presume "inert" - without stopping to ask: what is that allegedly physiologically inert substance? Indeed, even in principle, what could it be?

There isn't anything known to be physiologically inert. There are no regulations governing what may constitute a placebo, and placebo composition — commonly determined by the manufacturer of the drug under study — is typically undisclosed. Among the uncommon cases where placebo composition has been noted, there are documented instances in which that composition apparently produced spurious effects. Two studies used corn oil and olive oil placebos for cholesterol-lowering drugs: one noted that the "unexpectedly" low rate of heart attacks in the control group may have contributed to the failure to see a benefit from the cholesterol drug. Another study noted an "unexpected" benefit of a drug for gastrointestinal symptoms in cancer patients. But cancer patients bear an increased likelihood of lactose intolerance — and the placebo was lactose, a "sugar pill." When the term "placebo" substitutes for actual ingredients, any thinking about how the composition of the control agent may have influenced the study is circumvented.

2. Because there are many settings in which persons with a problem, given placebo, report sizeable improvement on average when they are re-queried (see 3), many scientists have accepted that "placebo effects" — of suggestion — are both large in magnitude and widespread in the scope of what they benefit.

The Danish researcher Asbjørn Hróbjartsson conducted a systematic review of studies that compared a placebo to no treatment. He found that the placebo generally does: nothing. In most instances, there is no placebo effect. Mild "placebo effects" are seen, in the short term, for pain and anxiety. Placebo effects for pain are reported to be blocked by naloxone, an opiate antagonist — specifically implicating endogenous opiates in pain placebo effects, which would not be expected to benefit every possible outcome that might be measured.

3. When hearing that persons with a problem placed on a "placebo" report improvement, scientists commonly presume this must be due to the "placebo effect" - the effect of expectation/suggestion.

However, the effects are usually something else entirely: for instance, the natural history of the disease, and regression to the mean. Consider a distribution, such as a bell curve. Whether the outcome of interest is pain, blood pressure, cholesterol, or something else, persons are classically selected for treatment if they are at one end of the distribution - say, the high end. But these outcomes are quantities that vary (from physiological variation, natural history, measurement error...), and on average the high values will vary back down — a phenomenon termed "regression to the mean" that operates, placebo or no. (Hence, Hróbjartsson's findings.)
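Regression to the mean is easy to demonstrate with a simulation. In the sketch below (all numbers hypothetical), each "patient" has a stable true pain level plus day-to-day measurement noise; only those who score worst on day 1 are enrolled, and no treatment of any kind is given. The enrolled group nonetheless looks markedly better on day 2.

```python
import random

random.seed(42)

# Each patient: a stable true pain level, plus independent noise on
# each day's measurement. No treatment is ever administered.
N = 10_000
true_level = [random.gauss(50, 10) for _ in range(N)]
day1 = [t + random.gauss(0, 10) for t in true_level]
day2 = [t + random.gauss(0, 10) for t in true_level]

# Enroll only those who scored worst (highest pain) on day 1 —
# exactly how patients are classically selected for treatment.
enrolled = [i for i in range(N) if day1[i] > 65]

mean_before = sum(day1[i] for i in enrolled) / len(enrolled)
mean_after = sum(day2[i] for i in enrolled) / len(enrolled)

print(f"enrolled, day 1: {mean_before:.1f}")
print(f"enrolled, day 2: {mean_after:.1f}")  # reliably lower, no placebo needed
```

The day-1 scores of the enrolled group were inflated partly by noise; on day 2 the noise redraws, and the group average slides back toward the population mean. A single-arm "placebo" study measures exactly this artifact.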

A different dece(i)bo problem beset Ted Kaptchuk's recent Harvard study, in which researchers gave a "placebo," or nothing, to people afflicted with irritable bowel syndrome. They administered the placebo in a bottle boldly labeled "Placebo," and advised patients that they were receiving placebos, which, they said, were known to be potent. The thesis was that one might harness the effects of expectation honestly, without deception, by telling subjects how powerful placebos in fact were - and by developing a close relationship with subjects. Researchers met repeatedly with subjects, gained subjects' appreciation for their concern and listening (as the researchers made clear), and repeatedly told subjects that placebos are powerful. Those placed on placebo obliged the researchers by telling them they had gotten better, more so than those on nothing. The scientists attributed this to a placebo effect.

But what's to say patients weren't simply telling the scientists what they thought the scientists wished to hear? Such desire to please (a form, perhaps, of "social approval" reporting bias) had fertile grounds in which to operate and create what was interpreted as a placebo effect — which implies actual subjective benefit to symptoms. One wonders if so great an error of presumption would operate were there not an existing term, "placebo effect," to signify the interpretation the Harvard group chose.

Another explanation consistent with these results is specific physiological benefit. The study used a nonabsorbed fiber — microcrystalline cellulose — as the "Placebo" that subjects were told would be effective. The authors are to be applauded for disclosing its composition. But other nonabsorbed fibers benefit both constipation and diarrhea — symptoms of irritable bowel — and are prescribed for that purpose; psyllium is an example. Thus, specific physiological benefit of the "Placebo" to symptoms cannot be excluded.

Together these points illustrate that the term "placebo" cannot be presumed to imply "inert" (and generally does not); and that when studies see large benefit to symptoms in patients treated with "placebo" (expected from distribution considerations alone), one cannot infer these arose from large benefits of suggestion to symptoms (which evidence indicates may seldom operate).

Thus, rather than facilitating sound reasoning, evidence suggests that in many cases, including high-stakes settings in which inferences may propagate to medical practice, substitution of a term — here, "placebo" and "placebo effect" — for the concepts it is intended to convey may actually thwart or bypass critical thinking about key issues, with implications for fundamental concerns that affect us all.

Stuart Firestein
Neuroscientist, Columbia University

The Name Game

Too often in science we operate under the principle that "to name it is to tame it", or so we think. One of the easiest mistakes, even among working scientists, is to believe that labeling something has somehow or another added to an explanation or understanding of it. Worse than that, we use it all the time when we are teaching, leading students to believe that a phenomenon that is named is a phenomenon that is known, and that to know the name is to know the phenomenon. It's what I, and others, have called the nominal fallacy. In biology especially, we have labels for everything - from molecules to anatomical parts, to physiological functions, to organisms, to ideas or hypotheses. The nominal fallacy is the error of believing that the label carries explanatory information.

An instance of the nominal fallacy is most easily seen when the meaning or importance of a term or concept shrinks with knowledge. One example of this would be the word "instinct". Instinct refers to a set of behaviors whose actual cause we don't know, or simply don't understand or have access to, and that we therefore call instinctual, inborn, innate. Often this is the end of the exploration of these behaviors; they are the nature part of the nature-nurture argument (a term that is itself likely a product of the nominal fallacy) and therefore can't be broken down or reduced any further. But experience has shown that this is rarely the truth. In one of the great examples of this, it was for quite some time thought that when chickens hatched and immediately began pecking the ground for food, this behavior must be instinctive. In the 1920s a Chinese researcher named Zing-Yang Kuo made a remarkable set of observations on the developing chick egg that overturned this idea — and many similar ones. Using a technique of elegant simplicity, he found that rubbing heated Vaseline on a chicken egg caused it to become transparent enough to see the embryo inside without disturbing it. In this way he was able to make detailed observations of the development of the embryo from fertilization to hatching. One of his observations was that, in order for the growing embryo to fit properly in the egg, the neck is bent over the chest of the body in such a way that the head rests upon the chest just where the developing heart is encased. As the heart begins beating, the head of the chicken is moved in an up-and-down manner that precisely mimics the movement that will be used later for pecking the ground. Thus the "innate" pecking behavior that the chicken appears to know miraculously upon hatching has, in fact, been practiced for more than a week within the egg.

In medicine as well, physicians often use technical terms that lead patients to believe that more is known about pathology than may actually be the case. In Parkinson's patients we notice that they have an altered gait and that, in general, their movements are slower. Physicians call this "bradykinesia", but it doesn't really tell you any more than simply saying "they move slower".

Why do they move slower? What is the pathology, and what is the mechanism for this slowed movement? These are the deeper questions hidden by the simple statement that "a cardinal symptom of Parkinson's is bradykinesia", satisfying though it might be to say the word to a patient's family.

In science the one critical issue is to be able to distinguish between what we know and what we don't know. This is often difficult enough, as things that seem known sometimes become unknown, or at least more ambiguous. When is it time to quit doing an experiment because we now know something? When is it time to stop spending money and resources on a particular line of investigation because the facts are known? This line between the known and the unknown is already difficult enough to define, but the nominal fallacy often obscures it needlessly. Even a word like gravity, which seems so well settled, may lend more of an aura to the idea than it deserves. After all, the apparently very well settled ideas of Newtonian gravity were almost completely undone, after more than two centuries, by Einstein's General Relativity. And still today physicists do not have a clear understanding of what gravity is or where it comes from, even though its effects can be described quite accurately.

Another facet of the nominal fallacy is the danger of using common words and giving them a scientific meaning. This has the often disastrous effect of leading an unwary public down a path of misunderstanding. Words like 'theory', 'law', and 'force' do not mean in common discourse what they mean to a scientist. 'Success' in Darwinian evolution is not the same 'success' as taught by Dale Carnegie. Force to a physicist has a meaning quite different from that used in political discourse. The worst of these, though, may be "theory" and "law", which are almost polar opposites — theory being a strong idea in science while vague in common discourse, and law being a much more muscular social concept than a scientific one. These differences lead to sometimes serious misunderstandings between scientists and the public that supports their work.

Of course language is critical and we must have names for things to talk about them. But the power of language to direct thought should never be taken lightly and the dangers of the name game deserve our respect.

Journalist; Environmentalist; Writer, New York Times "Dot Earth" blog; Author, The North Pole was Here

To sustain progress on a finite planet that is increasingly under human sway, but also full of surprises, what is needed is a strong dose of anthropophilia. I propose this word as shorthand for a rigorous and dispassionate kind of self-regard, even self-appreciation, to be employed when individuals or communities face consequential decisions attended by substantial uncertainty and polarizing disagreement.
The term is an intentional echo of Ed Wilson's valuable effort to nurture biophilia, the part of humanness that values and cares for the facets of the non-human world we call nature. What's been missing too long is an effort to fully consider, even embrace, the human role within nature and — perhaps more important still — to consider our own inner nature, as well.
Historically, many efforts to propel a durable human approach to advancement were shaped around two organizing ideas: "woe is me" and "shame on us," with a good dose of "shame on you" thrown in.
The problem?
Woe is paralytic, while blame is both divisive and often misses the real target. (Who's the bad guy, BP or those of us who drive and heat with oil?)
Discourse framed around those concepts too often produces policy debates that someone once described to me, in the context of climate, as "blah, blah, blah bang." The same phenomenon can as easily be seen in the unheeded warnings leading to the most recent financial implosion and the attack on the World Trade Center.
More fully considering our nature — both the "divine and felonious" sides, as Bill Bryson has summed us up — could help identify certain kinds of challenges that we know we'll tend to get wrong.
The simple act of recognizing such tendencies could help refine how choices are made — at least giving slightly better odds of getting things a little less wrong the next time.  At the personal level, I know when I cruise into the kitchen tonight I'll tend to prefer to reach for a cookie instead of an apple. By pre-considering that trait, I might have a slightly better chance of avoiding a couple of hundred unnecessary calories.
Here are a few instances where this concept is relevant on larger scales.
There's a persistent human pattern of not taking broad lessons from localized disasters. When China's Sichuan province was rocked by a severe earthquake, tens of thousands of students (and their teachers) died in collapsed schools. Yet the American state of Oregon, where more than a thousand schools are already known to be similarly vulnerable when the great Cascadia fault off the Northwest Coast next heaves, still lags badly in investing in retrofits.
Sociologists understand with quite a bit of empirical backing why this disconnect exists even though the example was horrifying and the risk in Oregon is about as clear as any scientific assessment can be. But does that knowledge of human biases toward the "near and now" get taken seriously in the realms where policies are shaped and the money to carry them out is authorized? Rarely, it seems.
Social scientists also know, with decent rigor, that the fight over human-driven global warming — both over the science and policy choices — is largely cultural. As in many other disputes (consider health care) the battle is between two quite fundamental subsets of human communities — communitarians (aka, liberals) and individualists (aka, libertarians). In such situations, a compelling body of research has emerged showing that information alone is fairly meaningless. Each group selects information to reinforce its position, and there are scant instances in which information ends up shifting a position.
That's why no one should expect the next review of climate science from the Intergovernmental Panel on Climate Change to suddenly create a harmonious path forward.
The more such realities are recognized, the more likely it is that innovative approaches to negotiation can build from the middle, instead of arguing endlessly from the edge. The same body of research on climate attitudes, for example, shows far less disagreement on the need for advancing the world's limited menu of affordable energy choices.
Murray Gell-Mann has spoken often of the need, when faced with multi-dimensional problems, to take a "crude look at the whole" — a process he has even given an acronym, CLAW. It's imperative, where possible, for that look to include an honest analysis of the species doing the looking, as well.
There will never be a way to invent a replacement for, say, the United Nations or the House of Representatives. But there is a ripe opportunity to try new approaches to constructive discourse and problem solving, with the first step being an acceptance of our humanness, for better and worse.
That's anthropophilia.
