
Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute

Scientific Concept

I think the scientific concept that would improve everybody's cognitive toolkit the most is "scientific concept".

Despite spectacular success in research, I feel that our global scientific community has been nothing short of a spectacular failure when it comes to educating the public. Haitians burned 12 "witches" in 2010. In the US, recent polls show that 39% consider astrology scientific, and 40% believe that our human species is less than 10,000 years old. If everyone understood the concept of "scientific concept", these percentages would be zero.

Moreover, the world would be a better place, since people with a scientific lifestyle, basing their decisions on correct information, maximize their chances of success. By making rational buying and voting decisions, they also strengthen the scientific approach to decision-making in companies, organizations and governments.

Why have we scientists failed so miserably? I think the answers lie mainly in psychology, sociology and economics.

A scientific lifestyle requires a scientific approach to both gathering information and using information, and both have their pitfalls.

You're clearly more likely to make the right choice if you're aware of the full spectrum of arguments before making your mind up, yet there are many reasons why people don't get such complete information. Many lack access to it (3% of Afghans have internet, and in a 2010 poll, 92% didn't know about the 9/11 attacks). 

Many are too swamped with obligations and distractions to seek it. Many seek information only from sources that confirm their preconceptions. The most valuable information can be hard to find even for those who are online and uncensored, buried in an unscientific media avalanche.

Then there's what we do with the information we have. The core of a scientific lifestyle is to change your mind when faced with information that disagrees with your views, avoiding intellectual inertia, yet many laud leaders stubbornly sticking to their views as "strong". The great physicist Richard Feynman hailed "distrust of experts" as a cornerstone of science, yet herd mentality and blind faith in authority figures is widespread. Logic forms the basis of scientific reasoning, yet wishful thinking, irrational fears and other cognitive biases often dominate decisions.

So what can we do to promote a scientific lifestyle?

The obvious answer is improving education. In some countries, having even the most rudimentary education would be a major improvement (less than half of all Pakistanis can read). By undercutting fundamentalism and intolerance, it would curtail violence and war.

By empowering women, it would curb poverty and the population explosion. However, even countries that offer everybody education can make major improvements.

All too often, schools resemble museums, reflecting the past rather than shaping the future. The curriculum should shift from one watered down by consensus and lobbying to skills our century needs, for relationships, health, contraception, time management, critical thinking and recognizing propaganda. For youngsters, learning a global language and typing should trump long division and cursive writing. In the internet age, my own role as a classroom teacher has changed. I'm no longer needed as a conduit of information, which my students can simply download on their own. Rather, my key role is inspiring a scientific lifestyle, curiosity and desire to learn more.

Now let's get to the most interesting question: how can we really make a scientific lifestyle take root and flourish? 

Reasonable people have been making similar arguments for better education since long before I was in diapers, yet rather than improving, education and adherence to a scientific lifestyle are arguably deteriorating further in many countries, including the US. Why? Clearly because there are powerful forces pushing back in the opposite direction, and they are pushing more effectively. Corporations concerned that a better understanding of certain scientific issues would harm their profits have an incentive to muddy the waters, as do fringe religious groups concerned that questioning their pseudoscientific claims would erode their power.

So what can we do? The first thing we scientists need to do is get off our high horses, admit that our persuasive strategies have failed, and develop a better strategy. We have the advantage of having the better arguments, but the anti-scientific coalition has the advantage of better funding.

However, and this is painfully ironic, it is also more scientifically organized! If a company wants to change public opinion to increase its profits, it deploys scientific and highly effective marketing tools. What do people believe today? What do we want them to believe tomorrow? Which of their fears, insecurities, hopes and other emotions can we take advantage of? What's the most cost-effective way of changing their minds? Plan a campaign. Launch. Done.

Is the message oversimplified or misleading? Does it unfairly discredit the competition? That's par for the course when marketing the latest smartphone or cigarette, so it would be naive to think that the code of conduct should be any different when this coalition fights science.

Yet we scientists are often painfully naive, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. Based on what scientific argument will it make a hoot of a difference if we grumble "we won't stoop that low" and "people need to change" in faculty lunch rooms and recite statistics to journalists?

We scientists have basically been saying "tanks are unethical, so let's fight tanks with swords".

To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically:

We need new science advocacy organizations which use all the same scientific marketing and fundraising tools as the anti-scientific coalition.
We'll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.
We won't need to stoop all the way down to intellectual dishonesty, however. Because in this battle, we have the most powerful weapon of all on our side: the facts.

Assistant Professor, Brain, Behavior, and Cognition; Social Psychology; Northwestern University

Diversity is Universal
At every level in the vast and dynamic world of living things lies diversity.  From biomes to biomarkers, the complex array of solutions to the most basic problems regarding survival in a given environment afforded to us by nature is riveting. In the world of humans alone, diversity is apparent in the genome, in the brain and in our behavior. 

The mark of multiple populations lies in the fabric of our DNA. The signature of selfhood in the brain holds dual frames, one for thinking about one's self as absolute, the other in the context of others. From this biological diversity in humans arises cultural diversity, directly observable in nearly every aspect of how people think, feel and behave. From classrooms to conventions across continents, the range and scope of human activities is stunning.

Recent centuries have seen the scientific debate regarding the nature of human nature cast as a dichotomy between diversity on the one hand and universalism on the other. Yet a seemingly paradoxical, but tractable, scientific concept that may enhance our cognitive toolkit over time is the simple notion that diversity is universal.

Neuroscientist, Baylor College of Medicine; Author, Incognito and Sum

The Umwelt

In 1909, the biologist Jakob von Uexküll introduced the concept of the umwelt. He wanted a word to express a simple (but often overlooked) observation: different animals in the same ecosystem pick up on different environmental signals. In the blind and deaf world of the tick, the important signals are temperature and the odor of butyric acid. For the black ghost knifefish, it's electrical fields. For the echolocating bat, it's air-compression waves. The small subset of the world that an animal is able to detect is its umwelt. The bigger reality, whatever that might mean, is called the umgebung.

The interesting part is that each organism presumably assumes its umwelt to be the entire objective reality "out there." Why would any of us stop to think that there is more beyond what we can sense? In the movie The Truman Show, the eponymous Truman lives in a world completely constructed around him by an intrepid television producer. At one point an interviewer asks the producer, "Why do you think Truman has never come close to discovering the true nature of his world?" The producer replies, "We accept the reality of the world with which we're presented." We accept our umwelt and stop there.

To appreciate the amount that goes undetected in our lives, imagine you're a bloodhound dog. Your long nose houses two hundred million scent receptors. On the outside, your wet nostrils attract and trap scent molecules. The slits at the corners of each nostril flare out to allow more air flow as you sniff. Even your floppy ears drag along the ground and kick up scent molecules. Your world is all about olfaction. One afternoon, as you're following your master, you stop in your tracks with a revelation. What is it like to have the pitiful, impoverished nose of a human being? What can humans possibly detect when they take in a feeble little noseful of air? Do they suffer a hole where smell is supposed to be?

Obviously, we suffer no absence of smell because we accept reality as it's presented to us. Without the olfactory capabilities of a bloodhound, it rarely strikes us that things could be different. Similarly, until a child learns in school that honeybees enjoy ultraviolet signals and rattlesnakes employ infrared, it does not strike her that plenty of information is riding on channels to which we have no natural access. From my informal surveys, it is very uncommon knowledge that the part of the electromagnetic spectrum that is visible to us is less than a ten-trillionth of it.

Our unawareness of the limits of our umwelt can be seen with color blind people: until they learn that others can see hues they cannot, the thought of extra colors does not hit their radar screen. And the same goes for the congenitally blind: being sightless is not like experiencing "blackness" or "a dark hole" where vision should be. As a human is to a bloodhound dog, a blind person does not miss vision. They do not conceive of it. Electromagnetic radiation is simply not part of their umwelt.

The more science taps into these hidden channels, the more it becomes clear that our brains are tuned to detect a shockingly small fraction of the surrounding reality. Our sensorium is enough to get by in our ecosystem, but it does not approximate the larger picture.

I think it would be useful if the concept of the umwelt were embedded in the public lexicon. It neatly captures the idea of limited knowledge, of unobtainable information, and of unimagined possibilities. Consider the criticisms of policy, the assertions of dogma, the declarations of fact that you hear every day — and just imagine if all of these could be infused with the proper intellectual humility that comes from appreciating the amount unseen.

Post-doctoral fellow, Mind/Brain/Behavior Interfaculty Initiative, Harvard University

Confabulation


We are shockingly ignorant of the causes of our own behavior. The explanations that we provide are sometimes wholly fabricated, and certainly never complete. Yet, that is not how it feels. Instead it feels like we know exactly what we're doing and why. This is confabulation: Guessing at plausible explanations for our behavior, and then regarding those guesses as introspective certainties. Every year psychologists use dramatic examples to entertain their undergraduate audiences. Confabulation is funny, but there is a serious side, too. Understanding it can help us act better and think better in everyday life.

Some of the most famous examples of confabulation come from "split-brain" patients, whose left and right brain hemispheres have been surgically disconnected for medical treatment. Neuroscientists have devised clever experiments in which information is provided to the right hemisphere (for instance, pictures of naked people), causing a change in behavior (embarrassed giggling). Split-brain individuals are then asked to explain their behavior verbally, which relies on the left hemisphere. Realizing that their body is laughing, but unaware of the nude images, the left hemisphere will confabulate an excuse for the body's behavior ("I keep laughing because you ask such funny questions, Doc!").

Wholesale confabulations in neurological patients can be jaw-dropping, but in part that is because they do not reflect ordinary experience. Most of the behaviors that you or I perform are not induced by crafty neuroscientists planting subliminal suggestions in our right hemisphere. When we are outside the laboratory — and when our brains have all the usual connections — most behaviors that we perform are the product of some combination of deliberate thinking and automatic action.

Ironically, that is exactly what makes confabulation so dangerous. If we routinely got the explanation for our behavior totally wrong — as completely wrong as split-brain patients sometimes do — we would probably be much more aware that there are pervasive, unseen influences on our behavior. The problem is that we get all of our explanations partly right, correctly identifying the conscious and deliberate causes of our behavior. Unfortunately, we mistake "partly right" for "completely right", and thereby fail to recognize the equal influence of the unconscious, or to guard against it.

A choice of job, for instance, depends partly on careful deliberation about career interests, location, income, and hours. At the same time, research reveals that choice to be influenced by a host of factors of which we are unaware. People named Dennis or Denise are more likely to be dentists, while people named Virginia are more likely to locate to (you guessed it) Virginia. Less endearingly, research suggests that on average people will take a job with fewer benefits, a longer commute and a smaller income if it allows them to avoid having a female boss. Surely most people do not want to choose a job based on the sound of their name, nor do they want to sacrifice job quality in order to perpetuate old gender norms. Indeed, most people have no awareness that these factors influence their own choices. When you ask them why they took the job, they are likely to reference their conscious thought processes: "I've always loved making ravioli, the Lira is on the rebound and Rome is for lovers…"  That answer is partly right, but it is also partly wrong, because it misses the deep reach of automatic processes on human behavior.

People make harsher moral judgments in foul-smelling rooms, reflecting the role of disgust as a moral emotion. Women are less likely to call their fathers (but equally likely to call their mothers) during the fertile phase of their menstrual cycle, reflecting a means of incest avoidance. Students indicate greater political conservatism when polled near a hand-sanitizing station during a flu epidemic, reflecting the influence of a threatening environment on ideology. They also indicate a closer bond to their mother when holding hot coffee versus iced coffee, reflecting the metaphor of a "warm" relationship.

Automatic behaviors can be remarkably organized, and even goal-driven. For example, research shows that people tend to cheat just as much as they can without realizing that they are cheating. This is a remarkable phenomenon: Part of you is deciding how much to cheat, calibrated at just the level that keeps another part of you from realizing it.

One of the ways that people pull off this trick is with innocent confabulations: When self-grading an exam, students think, "Oh, I was going to circle e, I really knew that answer!" This isn't a lie, any more than it's a lie to say you have always loved your mother (latte in hand), but don't have time to call your dad during this busy time of the month. These are just incomplete explanations, confabulations that reflect our conscious thoughts while ignoring the unconscious ones.

This brings me to the central point, the part that makes confabulation an important concept in ordinary life and not just a trick pony for college lectures. Perhaps you have noticed that people have an easier time sniffing out unseemly motivations for others' behavior than recognizing the same motivations for their own behavior. Others avoided female bosses (sexist) and inflated their grades (cheaters), while we chose Rome and really meant to say that Anne was the third Brontë. There is a double tragedy in this double standard.

First, we jump to the conclusion that others' behaviors reflect their bad motives and poor judgment, attributing conscious choice to behaviors that may have been influenced unconsciously. Second, we assume that our own choices were guided solely by the conscious explanations that we conjure, and reject or ignore the possibility of our own unconscious biases.

By understanding confabulation we can begin to remedy both faults. We can hold others responsible for their behavior without necessarily impugning their conscious motivations. And, we can hold ourselves more responsible by inspecting our own behavior for its unconscious influences, as unseen as they are unwanted.

Hazel Rose Markus is the Davis-Brack Professor of Behavioral Sciences at Stanford University and co-author of Doing Race: 21 Essays for the 21st Century. Alana Conner is a science writer, cultural psychologist, and museum curator at The Tech Museum, San Jose, Calif.

The Culture Cycle

Pundits now invoke culture to explain all manner of tragedies and triumphs, from why a disturbed young man opens fire on a politician, to why African-American children struggle in school, to why the United States can't establish democracy in Iraq, to why Asian factories build better cars. A quick click through a single morning's media, for example, yields the following catch: gun culture, Twitter culture, ethical culture, Arizona culture, always-on culture, winner-take-all culture, culture of violence, culture of fear, culture of sustainability, culture of corporate greed.

Yet no one explains what, exactly, culture is, how it works, or how to change it for the better.

A cognitive tool that fills this gap is the culture cycle, a tool that not only describes how culture works, but also prescribes how to make lasting change. The culture cycle is the iterative, recursive process by which 1) people create the cultures to which they later adapt, and 2) cultures shape people so that they act in ways that perpetuate their cultures. In other words, cultures and people (and some other primates) make each other up. This process involves four nested planes: individual selves (their thoughts, feelings, and actions); the everyday practices and artifacts that reflect and shape those selves; the institutions (such as education, law, and media) that afford or discourage certain everyday practices and artifacts; and pervasive ideas about what is good, right, and human that both influence and are influenced by all these levels. (See figure below). The culture cycle rolls for all types of social distinctions, from the macro (nation, race, ethnicity, region, religion, gender, social class, generation, etc.) to the micro (occupation, organization, neighborhood, hobby, genre preference, family, etc.).

One consequence of the culture cycle is that no action is caused solely by either individual psychological features or external influences. Both are always at work. Just as there is no such thing as a culture without agents, there are no agents without culture. Humans are culturally-shaped shapers. And so, for example, in the case of a school shooting it is overly simplistic to ask whether the perpetrator shot because of a mental illness, or because of his interactions with a hostile and bullying school climate, or with a particularly deadly cultural artifact (i.e., a gun), or with institutions that encourage that climate and allow access to that artifact, or with pervasive ideas and images that glorify resistance and violence. The better question, and the one that the culture cycle requires, is: how do these four levels of forces interact? Indeed, researchers at the vanguard of public health contend that neither social stressors nor individual vulnerabilities are enough to produce most mental illnesses. Instead, the interplay of biology and culture, of genes and environments, of nature and nurture is responsible for most psychiatric disorders.

Social scientists succumb to another form of this oppositional thinking. For example, in the face of Hurricane Katrina, thousands of poor African-American residents "chose" not to evacuate the Gulf Coast, to quote most news accounts. More charitable social scientists had their explanations ready, and struggled to get their variables into the limelight. Of course they didn't leave, said the psychologists, because poor people have an external locus of control, low intrinsic motivation, or low self-efficacy. Of course they didn't leave, said the sociologists and political scientists, because their cumulative lack of access to adequate income, banking, education, transportation, healthcare, police protection, and basic civil rights makes staying put their only option. Of course they didn't leave, said the anthropologists, because their kin networks, religious faith, and historical ties held them there. Of course they didn't leave, said the economists, because they didn't have the material resources, knowledge, or financial incentives to get out.

The irony in the interdisciplinary bickering is that everyone is mostly right. But they are right in the same way that the blind men touching the elephant in the Indian proverb are right: the failure to integrate each field's contributions makes everyone wrong and, worse, not very useful.

The culture cycle captures how these different levels of analyses relate to each other. Granted, our four-level process explanation is not as zippy as the single-variable accounts that currently dominate most public discourse. But it's far simpler and more accurate than the standard "it's complicated" and "it depends" answers that more thoughtful experts often supply.

Moreover, built into the culture cycle are the instructions for how to reverse engineer it: a sustainable change at one level usually requires change at all four levels. There are no silver bullets. The ongoing U.S. Civil Rights Movement, for example, requires the opening of individual hearts and minds; and the mixing of people as equals in daily life, along with media representations thereof; and the reform of laws and policies; and fundamental revision of our nation's idea of what a good human being is.

Just because people can change their cultures, however, does not mean that they can do so easily. A major obstacle is that most people don't even realize that they have cultures. Instead, they think that they are standard-issue humans—they are normal; it's all those other people who are deviating from the natural, obvious and right way to be.

Yet we are all part of multiple culture cycles. And we should be proud of that fact, for the culture cycle is our smart human trick. Because of it, we don't have to wait for mutation or natural selection to allow us to range farther over the face of the earth, to extract nutrition from a new food source, or to cope with a change in climate. And as modern life becomes more complex, and social and environmental problems become more widespread and entrenched, people will need to understand and use the culture cycle more skillfully.

Editor, WIRED magazine's UK Edition

Personal data mining

From the dawn of civilisation until 2003, Eric Schmidt is fond of saying, humankind generated five exabytes of data. Now we produce five exabytes every two days — and the pace is accelerating. In our post-privacy world of pervasive social-media sharing, GPS tracking, cellphone-tower triangulation, wireless sensor monitoring, browser-cookie targeting, face-recognition detecting, consumer-intention profiling, and endless other means by which our personal presence is logged in databases far beyond our reach, citizens are largely failing to benefit from the power of all this data to help them make smarter decisions. It's time to reclaim the concept of data mining from the marketing industry's microtargeting of consumers, the credit-card companies' anti-fraud profiling, the intrusive surveillance of state-sponsored Total Information Awareness. We need to think more about mining our own output to extract patterns that turn our raw personal datastream into predictive, actionable information. All of us would benefit if the idea of personal data mining were to enter popular discourse.

Microsoft saw the potential back in September 2006, when it filed United States Patent application number 20,080,082,393 for a system of "personal data mining". Having been fed personal data provided by users themselves or gathered by third parties, the technology would then analyse it to "enable identification of opportunities and/or provisioning of recommendations to increase user productivity and/or improve quality of life". You can decide for yourself whether you trust Redmond with your lifelog, but it's hard to fault the premise: the personal data mine, the patent states, would be a way "to identify relevant information that otherwise would likely remain undiscovered".

Both I as a citizen and society as a whole would gain if individuals' personal datastreams could be mined to extract patterns upon which we could act. Such mining would turn my raw data into predictive information that can anticipate my mood and improve my efficiency, make me healthier and more emotionally intuitive, reveal my scholastic weaknesses and my creative strengths. I want to find the hidden meanings, the unexpected correlations that reveal trends and risk factors of which I had been unaware. In an era of oversharing, we need to think more about data-driven self-discovery.

A small but fast-growing self-tracking movement is already showing the potential of such thinking, inspired by Kevin Kelly's quantified self and Gary Wolf's data-driven life. With its mobile sensors and apps and visualisations, this movement is tracking and measuring exercise, sleep, alertness, productivity, pharmaceutical responses, DNA, heartbeat, diet, financial expenditure — and then sharing and displaying its findings for greater collective understanding. It is using its tools for clustering, classifying and discovering rules in raw data, but mostly is simply quantifying that data to extract signals — information — from the noise.
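The kind of signal extraction this movement relies on can be sketched in a few lines. The snippet below is an illustrative toy, not any particular app's method, and the sleep data is invented: it smooths a self-tracked daily series with a trailing moving average and flags days that deviate sharply from the person's own norm.

```python
import statistics

def moving_average(series, window=7):
    """Smooth a daily self-tracked series with a trailing window mean."""
    return [statistics.mean(series[max(0, i - window + 1):i + 1])
            for i in range(len(series))]

def flag_anomalies(series, threshold=2.0):
    """Return indices of days more than `threshold` standard
    deviations from the overall mean."""
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

# A hypothetical month of self-tracked sleep hours (invented data).
sleep = [7.2, 6.8, 7.0, 7.5, 6.9, 7.1, 7.3, 4.1, 7.0, 7.2,
         6.7, 7.4, 7.1, 6.9, 7.0, 7.2, 6.8, 7.3, 7.1, 10.5]
print(flag_anomalies(sleep))  # → [7, 19]: the 4.1- and 10.5-hour nights
```

The same two moves, smoothing to see trends and thresholding to see surprises, are the humble core of turning a raw personal datastream into something actionable.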

The cumulative rewards of such thinking will be altruistic rather than narcissistic, whether in pooling personal data for greater scientific understanding (23andMe) or in propagating user-submitted data to motivate behaviour change in others (Traineo). Indeed, as the work of Daniel Kahneman, Daniel Gilbert, and Christakis and Fowler demonstrates so powerfully, accurate individual-level data-tracking is key to understanding how human happiness can be quantified, how our social networks affect our behaviour, and how diseases spread through groups.

The data is already out there. We just need to encourage people to tap it, share it, and corral it into knowledge.

Computational Legal Scholar; Assistant Professor of Statistics, Columbia University

Phase Transitions And "Scale Transitions:" Conceptualizing Unexpected Changes Due To Scale

Physicists created the term "phase transition" to describe a change of state in a physical system, such as liquid to gas. The concept has since been applied in a variety of academic circles to describe other types of systems, from social transformations (think hunter-gatherer to farmer) to statistics (think abrupt changes in algorithm performance as parameters change), but has not yet emerged as part of the common lexicon.

One interesting aspect of the concept of the phase transition is that it describes a shift to a state seemingly unrelated to the previous one, and hence provides a model for phenomena that challenge our intuition. With only knowledge of water as a liquid, who would have imagined a conversion to gas with the application of heat? The mathematical definition of a phase transition in the physical context is well-defined, but even without this precision I argue this idea can be usefully extrapolated to describe a much broader class of phenomena today, particularly those that change abruptly and unexpectedly with an increase in scale.

Imagine points in two dimensions — a spray of dots on a sheet of paper. Now imagine a point cloud in three dimensions, say, dots hovering in the interior of a cube. Even if we could imagine points in four dimensions, would we have guessed that essentially all these points lie on the convex hull of the point cloud? In high enough dimensions they do. There hasn't been a phase transition in the mathematical sense, but as dimension is scaled up the system shifts in a way we don't intuitively expect.
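This kind of dimensional surprise is easy to experience directly. The sketch below simulates a related counterintuitive effect of scaling up dimension (not the convex-hull result itself, which would require a hull library): for random points in a high-dimensional cube, the nearest and farthest pairwise distances become nearly equal, collapsing the low-dimensional intuition of "near" and "far".

```python
import math
import random

def distance_spread(dim, n_points=100, seed=0):
    """Ratio of the farthest to the nearest pairwise distance among
    random points drawn uniformly from the unit cube in `dim` dimensions."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(p, q)
             for i, p in enumerate(pts) for q in pts[i + 1:]]
    return max(dists) / min(dists)

# As dimension scales up, the spread of distances collapses toward 1:
# in 2 dimensions some pairs are hundreds of times closer than others,
# while in 1000 dimensions every pair is roughly equally far apart.
for d in (2, 10, 100, 1000):
    print(d, round(distance_spread(d), 2))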

I call these types of changes "scale transitions": unexpected outcomes resulting from increases in scale. For example, increases in the number of people interacting in a system can produce unforeseen outcomes. The operation of markets at large scale is often counterintuitive: think of the restrictive effect rent control laws can have on the supply of affordable rental housing, or how minimum wage laws can reduce the availability of low wage jobs. (James Flynn gives "markets" as an example of a "shorthand abstraction"; here I am interested in the often counterintuitive operation of a market system at large scale.) There are also the serendipitous effects of enhanced communication, for example collaboration and interpersonal connection generating unexpected new ideas and innovation, and the counterintuitive effect of massive computation in science reducing experimental reproducibility, as data and code have proved harder to share than their descriptions. The concept of the scale transition is purposefully loose, designed as a framework for understanding when our natural intuition leads us astray in large scale situations.

This contrasts with Merton's concept of "unanticipated consequences" in that a scale transition refers to a system, rather than to individual purposeful behavior, and is directly tied to the notion of changes due to increases in scale. Our intuition regularly seems to break down with scale, and we need a way of conceptualizing the resulting counterintuitive shifts in the world around us. Perhaps the most salient feature of the digital age is its facilitation of massive increases in scale, in data storage, processing power, and connectivity, thus permitting us to address an unparalleled number of problems on an unparalleled scale. As technology becomes increasingly pervasive, I believe scale transitions will become commonplace.

Serial Entrepreneur; Co-founder, eGroups, Inc; Investor

The Power of 10

Any citizen who wants to vote responsibly needs a sense of proportion and the ability to weigh, quickly and easily, the choices our democratic government is making.

You can practice thinking on your feet with large numbers, a different skill from what we were taught in primary school, so that you can form informed, fact-based opinions about which policies are working and which may be bankrupt.

You need the ability to make approximate estimates involving large numbers, quickly in your head. The best news I ever heard was: you can multiply numbers by adding their exponents. Or divide them by subtracting their exponents. And the exponent is nothing more than the length of the number in digits. If the first digit is over 3, you can add a half. A painless way to get approximate answers to large-number problems in your head allows you to be more inventive and creative in considering all kinds of business and policy questions.
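As a sketch, the digit-counting trick can be written as a tiny function. The rule below is my interpretation: take the length in digits minus one, then add a half when the leading digit is 3 or more (reading "over 3" loosely, so the essay's examples come out as stated):

```python
def digits(n):
    """Exponent-of-10 estimate from the digit-counting trick:
    length in digits minus one, plus a half when the leading
    digit is 3 or more."""
    s = str(n)
    exp = len(s) - 1
    if int(s[0]) >= 3:
        exp += 0.5
    return exp

# Multiply by adding exponents, divide by subtracting them:
print(digits(45_000_000_000))                       # 10.5, i.e. ~10^10.5
print(digits(45_000_000_000) - digits(37_000_000))  # 3.0, i.e. ~$1,000
```

The half-digit correction works because the square root of 10 is about 3.16, so numbers starting with 3 or more sit in the upper half of their decade.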

How can we reason with powers of 10 in real life? Let's use the California High Speed Rail proposal as an example. Most folks either support it because "I like trains" or oppose it because "I hate socialism." But a smart person should make a decision using a sense of proportion.

I always start by calculating the cost per person.

The total cost of CA high speed rail is projected to be $45 billion. Using exponents to estimate: a billion is 10^9, and 45 billion adds another digit, taking us to 10^10. Since 45 starts with a digit larger than 3, add another half a digit. So that is about 10^10.6.

The California population is 37 million, so that is about 10^7.5.

To get the cost per Californian, simply subtract the exponents: 10.6 – 7.5 = 3.1.

Now 10^3.1 is a bit more than $1,000, so we say $1,200 to be in the ballpark of the cost of the High Speed Rail project per Californian. Now you have grounds for an informed decision in terms of cost. Some people would save money and carbon emissions if they invested $1,200 in a train. Many Californians will never travel between SF and LA and would be forced to make the same investment, instead of putting it into something that would help them with their daily commute. If you could save 10 million Californians 30 minutes each work day, and their free time is worth $8/hour, then you have saved each of them $1,000 per year in commuting costs, not counting fuel. That's worth (10^3 × 10^7 = 10^10) 10 billion dollars per year. So modest improvements in traffic and gridlock alleviation through commute improvements can provide benefits that pay for themselves quickly.
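To check the exponent estimate, here is the same arithmetic done exactly, using the figures above; note that 250 workdays per year is my assumption, not stated in the text:

```python
# Exact arithmetic behind the exponent estimate (figures from the text).
cost = 45e9          # projected total cost: $45 billion
population = 37e6    # Californians
per_person = cost / population
print(round(per_person))   # 1216, i.e. about $1,200 each, as estimated

# The commute-savings comparison, on the essay's assumptions
# (250 workdays/year is an assumption; the text doesn't state it).
commuters = 10e6     # Californians saved 30 minutes per workday
hourly = 8           # value of free time, $/hour
workdays = 250
savings = commuters * 0.5 * hourly * workdays
print(f"${savings:,.0f} per year")   # $10,000,000,000
```

The exact answer, $1,216, lands within a few percent of the mental estimate, which is the whole point of the trick.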

For your next exercise, you may want to calculate the cost of the Iraq war in dollars per Iraqi. (Maybe $3 trillion and 30 million Iraqis: 10^12.5 / 10^7.5 = 10^5, or $100,000 per Iraqi.)

Or the cost per American of the $3 million investment the DOE made in the very promising area of Airborne Wind Turbines (10^6.5 / 10^8.5 = 10^(-2)). So we each spent about a penny on one of the most promising forms of renewable energy. These numbers are thought-provoking, and now they are comparable: do we want to spend:

• $1,200 per Californian on High Speed Rail?
• $100,000 per Iraqi on the Iraq war?
• $.01 per American on renewable energy?
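The three comparisons above reduce to one-line divisions. A quick script using the essay's round figures (300 million Americans is my assumption for the last item, as the exponent calculation implies):

```python
# Per-capita costs of the three bullet items, using the essay's
# round figures (300 million Americans is an assumption).
items = {
    "High Speed Rail, per Californian":     (45e9, 37e6),
    "Iraq war, per Iraqi":                  (3e12, 30e6),
    "Airborne Wind Turbines, per American": (3e6, 300e6),
}
per_capita = {name: cost / people for name, (cost, people) in items.items()}
for name, dollars in per_capita.items():
    print(f"{name}: ${dollars:,.2f}")
```

Running it confirms the mental estimates: roughly $1,200, $100,000, and one cent respectively.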

Practice one of these back-of-the-envelope calculations every day and you will develop the sense of proportion you need to judge which policies to support and to hold politicians accountable. It will serve you very well in business and personal finance. It is so easy once you get started!

Associate Professor of Psychology and Neuroscience; Stanford University


Since different visiting teachers had promoted contradictory philosophies, the villagers asked the Buddha whom they should believe. The Buddha advised: “When you know for yourselves ... these things, when performed and undertaken, conduce to well-being and happiness — then live and act accordingly.” Such empirical advice might sound surprising coming from a religious leader, but not from a scientist.

“See for yourself” is an unspoken credo of science. It is not enough to run an experiment and report the findings. Others who repeat that experiment must find the same thing. Repeatable experiments are called “replicable.” Although scientists implicitly respect replicability, they do not typically explicitly reward it.

To some extent, ignoring replicability comes naturally. Human nervous systems are designed to respond to rapid changes, ranging from subtle visual flickers to pounding rushes of ecstasy. Fixating on fast change makes adaptive sense — why spend limited energy on opportunities or threats that have already passed? But in the face of slowly growing problems, “change fixation” can prove disastrous (think of lobsters in the cooking pot or people under greenhouse gases).

Cultures can also promote change fixation. In science, some high profile journals and even entire fields emphasize novelty, consigning replications to the dustbin of the unremarkable and unpublishable. More formally, scientists are often judged on their work's novelty rather than its replicability. The increasingly popular "h-index" quantifies impact by assigning a number (h) indicating that an investigator has published h papers that have each been cited h or more times (so Joe Blow has an h-index of 5 if he has published 5 papers, each of which others have cited 5 or more times). While such citation indices correlate with eminence in some fields (e.g., physics), problems can arise. For instance, Doctor Blow might boost his index by publishing controversial (and thus cited) but unreplicable findings.

Why not construct a replicability (or “r”) index to complement impact factors? As with h, r could indicate that a scientist has originally documented r separate effects that independently replicate r or more times (so, Susie Sharp has an r-index of 5 if she has published 5 independent effects, each of which others have replicated 5 or more times). Replication indices would necessarily be lower than citation indices, since effects have to first be published before they can be replicated, but might provide distinct information about research quality. As with citation indices, replication indices might even apply to journals and fields, providing a measure that can combat biases against publishing and publicizing replications.
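Computationally, the proposed r-index has exactly the same shape as the h-index, just applied to replication counts per published effect instead of citation counts per paper. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that at least h of the counts are >= h."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
    return h

# The r-index is the same computation over replication counts
# (independent replications per published effect).
r_index = h_index

print(h_index([5, 5, 5, 5, 5]))   # 5: Joe Blow's example from the text
print(r_index([6, 5, 5, 2, 1]))   # 3: 3 effects replicated 3+ times each
```

Because an effect must be published before anyone can replicate it, r-values lag h-values, as the text notes, but the two are directly comparable in form.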

A replicability index might prove even more useful to nonscientists. Most investigators who have spent significant time in the salt mines of the laboratory already intuit that most ideas don’t pan out, and those that do sometimes result from chance or charitable interpretations. Conversely, they also recognize that replicability means they’re really on to something. Not so for the general public, who instead encounter scientific advances one cataclysmic media-filtered study at a time. As a result, laypeople and journalists are repeatedly surprised to find the latest counterintuitive finding overturned by new results. Measures of replicability could help channel attention towards cumulative contributions. Along these lines, it is interesting to consider applying replicability criteria to public policy interventions designed to improve health, enhance education, or curb violence. Individuals might even benefit from using replicability criteria to optimize their personal habits (e.g., more effectively dieting, exercising, working, etc.).

Replication should be celebrated rather than denigrated. Often taken for granted, replicability may be the exception rather than the rule. As running water resolves rock from mud, so can replicability highlight the most reliable findings, investigators, journals, and even fields. More broadly, replicability may provide an indispensable tool for evaluating both personal and public policies. As suggested in the Kalama Sutta, replicability might even help us decide whom to believe.

Quantum Mechanical Engineer, MIT; Author, Programming the Universe

Living is fatal

The ability to reason clearly in the face of uncertainty.  

If everybody could learn to deal better with the unknown, then it would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps), but the chances for humanity as a whole.

A well-developed scientific method for dealing with the unknown has existed for many years: the mathematical theory of probability. Probabilities are numbers whose values reflect how likely different events are to take place. People are bad at assessing probabilities, and not just because they are bad at addition and multiplication. Rather, people are bad at probability at a deep, intuitive level: they overestimate the probability of rare but shocking events (a burglar breaking into your bedroom while you're asleep, say), and underestimate the probability of common but quiet and insidious events, such as the slow accretion of globules of fat on the walls of an artery, or another ton of carbon dioxide pumped into the atmosphere.

I can't say that I'm very optimistic about the odds that people will learn to understand the science of odds. When it comes to understanding probability, people basically suck. Consider the following example, based on a true story reported by Joel Cohen of Rockefeller University. A group of graduate students notice that women have a significantly lower chance of admission than men to the graduate programs at a major university. The data are unambiguous: women applicants are only two-thirds as likely as male applicants to be admitted. The graduate students file suit against the university, alleging discrimination on the basis of gender. When admissions data are examined on a department-by-department basis, however, a strange fact emerges: within each department, women are MORE likely to be admitted than men. How can this possibly be?

The answer turns out to be simple, if counterintuitive. More women are applying to departments that have few positions. These departments admit only a small percentage of applicants, men or women. Men, by contrast, are applying to departments that have more positions and that admit a higher percentage of applicants. Within each department, women have a better chance of admission than men — it's just that few women apply to the departments that are easy to get into.  
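This reversal is known as Simpson's paradox, and it is easy to reproduce with toy numbers (invented purely for illustration; not the actual admissions data):

```python
# Toy admissions data: department A is easy to get into and mostly men
# apply; department B is competitive and mostly women apply.
depts = {
    "A": {"men": (80, 100), "women": (18, 20)},   # (admitted, applied)
    "B": {"men": (2, 20),   "women": (24, 200)},
}

# Within each department, women are admitted at a HIGHER rate...
within = {d: {g: adm / app for g, (adm, app) in groups.items()}
          for d, groups in depts.items()}
print(within)

# ...yet pooled across departments, the direction reverses.
overall = {}
for g in ("men", "women"):
    admitted = sum(depts[d][g][0] for d in depts)
    applied = sum(depts[d][g][1] for d in depts)
    overall[g] = admitted / applied
print(overall)  # men admitted at a far higher overall rate
```

With these numbers, women beat men in both departments (90% vs. 80% in A, 12% vs. 10% in B) yet trail badly overall, because most women applied to the hard department.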

This counterintuitive result indicates that the admissions committees in the different departments are not discriminating against women. That doesn't mean that bias is absent. The number of graduate fellowships available in a particular field is determined largely by the federal government, which chooses how to allocate research funds to different fields. It is not the university that is guilty of sexual discrimination, but society as a whole, which chose to devote more resources — and so more graduate fellowships — to the fields preferred by men.

Of course, some people are good at probability. A car insurance company that can't accurately determine the probabilities of accidents will go broke. In effect, when we pay premiums to insure ourselves against a rare event, we are buying into the insurance company's estimate of just how likely that event is. Driving a car, however, is one of those common but dangerous processes where human beings habitually underestimate the odds of something bad happening. Accordingly, some are disinclined to obtain car insurance (perhaps not surprising when the considerable majority of people rate themselves as better-than-average drivers). When a state government requires its citizens to buy car insurance, it does so because it figures, rightly, that people are underestimating the odds of an accident.
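The insurer's side of this bargain is plain expected-value arithmetic. A sketch with made-up numbers (the claim probability, payout, and loading below are illustrative assumptions, not industry figures):

```python
# An insurer prices a premium as expected payout plus a loading.
# All numbers here are illustrative assumptions.
p_accident = 0.03      # assumed annual probability of filing a claim
avg_payout = 15_000    # assumed average claim size, in dollars
loading = 1.3          # assumed overhead-and-profit multiplier

fair_premium = p_accident * avg_payout   # expected annual payout
charged_premium = fair_premium * loading
print(f"fair ${fair_premium:.0f}, charged ${charged_premium:.0f}")
```

If your own estimate of the accident probability is well below 3%, the premium looks like a rip-off; the company is betting, usually correctly, that your estimate is too low.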

Let's consider the debate over whether health insurance should be required by law. Living, like driving, is a common but dangerous process where people habitually underestimate risk, despite the fact that, with probability equal to one, living is fatal.
