2017 : WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?

W. Daniel Hillis
Physicist, Computer Scientist, Co-Founder, Applied Invention; Author, The Pattern on the Stone

“Impedance” is a measure of how a system resists the inflow of energy. Often, a system can be changed to accept energy more efficiently, by adding an element called an “impedance matcher.” Most people have never heard of an impedance matcher, but once you know about them, you begin to see them everywhere: in the shape of a trumpet, the anti-reflective coating on a lens, and the foam spikes on the inside of a recording booth.

A familiar example of an impedance matcher is the transmission of an automobile, which couples the energy from the engine into the wheels by converting the relatively fast rotation of the engine into the slower, stronger rotation required to propel the car. The transformer on an electric pole solves an analogous problem, converting the high-voltage electricity on the transmission lines to the high-current electricity required to power your home. Lots of machines, from jet engines to radios, depend on impedance matchers to move energy from one part of the system to another.

Some of the most interesting impedance matching occurs when energy comes in the form of a wave. You have probably noticed in a swimming pool that waves from a splash reflect off the walls of the pool. Because there is an impedance mismatch between the water and the wall, the wave energy is unable to couple into the wall, and so it reflects back. If you watch a vertical seawall, you will notice that the reflected wave adds to the height of the incoming wave, creating a splash that is almost twice the height of the original. This does not happen when a wave rolls up a gently sloped beach, because the slope acts as an impedance matcher, allowing the wave’s energy to flow into the sand. The foam spikes on the inside of an anechoic recording booth serve much the same function. The sound waves are much larger than the spikes, so to the wave the tapered spikes make the foam appear gradually denser as it approaches the wall. This couples the sound energy into the absorbing wall without reflection, so there is no echo.

The flare at the end of a trumpet works on the same principle, in reverse. Without the flare, most of the sound energy would just be reflected back into the trumpet instead of coupling to the outside air. A speaking cone, or megaphone, is another example. The cone acts as an impedance matcher, connecting the sound energy of the voice into the air. This kind of impedance matcher is so effective that it allows the tiny motion of an old Victrola record needle to fill a dance hall with music.

Impedance matching works with light waves as well as sound. The anti-reflective coating on a camera lens is very similar in function to the foam in the sound booth. A mirror, on the other hand, works like the seawall, creating a reflection with an impedance mismatch. By putting a very thin layer of aluminum on the glass we can tune the optical properties of the surface to create a partial impedance mismatch, a “half-silvered” mirror that reflects only part of the light.
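
For readers who want to see the arithmetic, here is a minimal sketch (my illustration, not Hillis’s) of the standard reflection formulas: the fraction of wave power bounced back at a boundary between two impedances, and how a quarter-wave matching layer whose impedance is the geometric mean of the two sides removes that reflection at its design wavelength. The impedance values are placeholders.

```python
# Sketch: power reflected when a wave hits a boundary between two media, and the
# effect of a quarter-wave matching layer. Standard textbook formulas at normal
# incidence; the impedance values below are illustrative placeholders.

def reflected_power(z1, z2):
    """Fraction of incident power reflected at a single boundary."""
    r = (z2 - z1) / (z2 + z1)   # amplitude reflection coefficient
    return r ** 2

z_outside, z_inside = 1.0, 1.5            # mismatched media (e.g. air and glass)
print(reflected_power(z_outside, z_inside))   # some energy bounces back

z_layer = (z_outside * z_inside) ** 0.5   # matching layer: geometric-mean impedance
print(reflected_power(z_outside, z_layer))    # each interface now reflects less,
print(reflected_power(z_layer, z_inside))     # and at the design wavelength the two
                                              # smaller reflections cancel by interference
```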

On a larger scale, we can think of atmospheric carbon dioxide as an undesired impedance matcher, coupling the infrared light waves of the sun into the planet. Someday, we may decide to cool our earth by adding tiny particles of dust to our stratosphere, tuning the optical surface to reflect away a tiny portion of the infrared waves coming from the sun. The impedance mismatch between the atmosphere and sunlight would create a kind of half-silvered mirror to keep us cooler by reflecting away the unwanted energy flowing into our planet.

Richard Dawkins
Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, Books Do Furnish a Life

Natural Selection equips every living creature with the genes that enabled its ancestors—a literally unbroken line of them—to survive in their environments. To the extent that present environments resemble those of the ancestors, to that extent is a modern animal well equipped to survive and pass on the same genes. The ‘adaptations’ of an animal, its anatomical details, instincts and internal biochemistry, are a series of keys that exquisitely fit the locks that constituted its ancestral environments.

Given a key, you can reconstruct the lock that it fits. Given an animal, you should be able to reconstruct the environments in which its ancestors survived. A knowledgeable zoologist, handed a previously unknown animal, can reconstruct some of the locks that its keys are equipped to open. Many of these are obvious. Webbed feet indicate an aquatic way of life. Camouflaged animals literally carry on their backs a picture of the environments in which their ancestors evaded predation.

But most of the keys that an animal brandishes are not obvious on the surface. Many are buried in cellular chemistry. All of them are, in a sense which is harder to decipher, also buried in the genome. If only we could read the genome in the appropriate way, it would be a kind of negative imprint of ancient worlds, a description of the ancestral environments of the species: the Genetic Book of the Dead.

Naturally the book’s contents will be weighted in favour of recent ancestral environments. The book of a camel’s genome describes recent millennia in deserts. But in there too must be descriptions of Devonian seas from before the mammals’ remote ancestors crawled out on the land. The genetic book of a giant tortoise most vividly portrays the Galapagos island habitat of its recent ancestors; before that the South American mainland where its smaller ancestors thrived. But we know that all modern land tortoises descend earlier from marine turtles, so our Galapagos tortoise’s genetic book will describe somewhat older marine scenes. But those marine ancestral turtles were themselves descended from much older, Triassic, land tortoises.  And, like all tetrapods, those Triassic tortoises themselves were descended from fish. So the genetic book of our Galapagos giant is a bewildering palimpsest of water, overlain by land, overlain by water, overlain by land.

How shall we read the Genetic Book of the Dead? I don’t know, and that is one reason for coining the phrase: to stimulate others to come up with a methodology. I have a sort of dim inkling of a plan. For simplicity of illustration, I’ll stick to mammals. Gather together a list of mammals who live in water and make them as taxonomically diverse as possible: whales, dugongs, seals, water shrews, otters, yapoks. Now make a similar list of mammals that live in deserts: camels, desert foxes, jerboas etc. Another list of taxonomically diverse mammals who live up trees: monkeys, squirrels, koalas, sugar gliders. Another list of mammals that live underground: moles, marsupial moles, golden moles, mole rats. Now borrow from the statistical techniques of the numerical taxonomists, but use them in a kind of upside-down way. Take specimens of all those lists of mammals and measure as many features as possible, morphological, biochemical and genetic. Now feed all the measurements into the computer and ask it (here’s where I get really vague and ask mathematicians for help) to find features that all the aquatic animals have in common, features that all the desert animals have in common, and so on. Some of these will be obvious, like webbed feet. Others will be non-obvious, and that is why the exercise is worth doing. The most interesting of the non-obvious features will be in the genes. And they will enable us to read the Genetic Book of the Dead.
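
One hedged way to picture the computation Dawkins is asking for is sketched below; the feature matrix, habitat labels, and scoring rule are all hypothetical stand-ins, not his method. The idea is simply to score each measured feature by how strongly it sorts animals by habitat rather than by taxonomy, for instance with a between-group versus within-group variance ratio.

```python
import numpy as np

# Hypothetical data: rows = mammal species, columns = measured features
# (morphological, biochemical, genetic); labels = habitat of each species.
# Random numbers stand in for real measurements here.
rng = np.random.default_rng(0)
n_species, n_features = 24, 50
X = rng.normal(size=(n_species, n_features))
habitat = np.repeat(["aquatic", "desert", "arboreal", "fossorial"], 6)

def habitat_signal(feature, labels):
    """F-like ratio: variance between habitat means / variance within habitats.
    High values mark features shared across a habitat group regardless of taxonomy."""
    groups = [feature[labels == h] for h in np.unique(labels)]
    between = np.var([g.mean() for g in groups])
    within = np.mean([g.var() for g in groups])
    return between / (within + 1e-12)

scores = np.array([habitat_signal(X[:, j], habitat) for j in range(n_features)])
top_features = np.argsort(scores)[::-1][:5]   # candidate "keys" worth reading
print(top_features, scores[top_features])
```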

In addition to telling us about ancestral environments, the Genetic Book of the Dead can reveal other aspects of history. Demography, for instance. Coalescence analysis performed on my personal genome by my co-author (and ex-student) Yan Wong has revealed that the population from which I spring suffered a major bottleneck, probably corresponding to an out of Africa migration event, some 60,000 years ago. Yan’s analysis may be the only occasion when one co-author of a book has made detailed historical inferences by reading the Genetic Book of the other co-author.

Nigel Goldenfeld
Physicist, University of Illinois at Urbana-Champaign

There's a saying that there are no cultural relativists at thirty thousand feet. The laws of aerodynamics work regardless of political or social prejudices, and they are indisputably true. Yes, you can discuss to what extent they are an approximation, what their limits of validity are, whether they take into account such niceties as quantum entanglement or unified field theory (of course they don't). But the most basic scientific concept that is clearly and disturbingly missing from today's social and political discourse is the concept that some questions have correct and clear answers. Such questions can be called "scientific" and their answers represent truth. Scientific questions are not easy to ask. Their answers can be verified by experiment or observation, and they can be used to improve your life, create jobs and technologies, and save the planet. You don't need pollsters or randomized trials to determine whether a parachute works. You need an understanding of the facts of aerodynamics and the methodology to do experiments.

Science's main goal is to find the answers to questions. And the rate of advance of science is determined by how well we can ask sharp, scientific questions, not by the rate at which we answer them. In the field of science with which I identify, condensed matter physics, important new discoveries and new questions arise on the scale of about once every five years. They are mostly answered and worked through on a time scale much less than that. Science is also driven by luck and serendipitous discovery. That can also be amplified by asking good questions. For example, the evolutionary biologist Carl Woese discovered a third domain of life by asking how to map out the history of life using molecular sequences of RNA rather than fossils and superficial appearances of organisms. The widely publicized ennui of fundamental physics is a result of the failure to find a sharp scientific question.

It ought to be more widely known that the truth is indeed out there, but only if one knows how to ask sharp and good questions. This is the unifying aspect of the scientific method and perhaps its most enduring contribution.


Eduardo Salcedo-Albarán
Philosopher; Director, Scientific Vortex, Inc.

Today neurobiology provides increasing evidence that our reason, although powerful, isn’t as constant as most social scientists assume. We now know that the inner brain areas related to homeostatic and basic physiological functions are more relevant and permanent for surviving and evolving than the external areas of the cortex related to cognitive faculties.

Complex cognitive skills do enrich our mental life, but they aren’t critical for regulating the physiological functions that are basic for existing. In a sense, we can exist with a poorer mental life, for instance following damage to the external cortex, but we can’t exist without the basic regulation of our heartbeat or respiratory rhythm, which is lost, for instance, following damage to the brainstem.

In fact, as Antonio Damasio has already explained, the brainstem potentially houses the origin of consciousness: the complex mental representation of the “self” that we continuously experience in the first person. Even slight damage to subareas of the brainstem can lead to comatose and vegetative states, and to the permanent lack of consciousness.

Since the brainstem’s activity is related to basic physiological and homeostatic functions, its activity is permanent, allowing the continuous construction of the self. Although our external appearance changes during our lifetime, our internal organs and biological functions remain mostly unchanged; if our consciousness is grounded in those constant biological functions, it too will remain mostly constant.

While regulating basic physiological functions, the brainstem sends conscious signals to our self through automatic emotions and, specifically, through the conscious feelings of those emotions. Emotions are therefore the initial process linking our physiological needs, such as eating or breathing, and the conscious self; the last part of the process consists of the feelings of those emotions, in the form of enriched mental experiences of joy, beauty or sorrow, for instance.

Emotions, as a reflection of our physiological needs and homeostatic functions, and feelings, as conscious experiences of those emotions, are permanent and preponderant in our lives because they link our physiology with our conscious self through the brainstem.

Acknowledging the functions and dynamics of the brainstem and its subareas is useful for understanding human nature, which is mostly driven by spontaneous, automatic and capricious emotions and feelings. Daniel Kahneman and other theorists, mainly a small fraction of psychologists and neuro-economists, have already called attention to biases and to capricious and irrational behaviors, but in most social sciences such knowledge is still uncommon.

Acknowledging the origin and influence of emotions in human behavior doesn’t mean surrendering to irrationality; it means using our momentary but powerful rationality to understand our consciousness, a critical part of which is mostly driven by dynamics in the brainstem.

Rory Sutherland
Vice-Chairman, Ogilvy London; Columnist, the Spectator; Author, Alchemy

Having been born in the tiny Welsh village of Llanbadoc 141 years after Alfred Russel Wallace, I have always had a sneaking sympathy with people eager to give Wallace joint billing with Darwin for “The Best Idea Anyone Ever Had.”

Having said which, I don’t think Evolution by Natural Selection was Darwin’s best or most valuable idea. 

Earlier thinkers, from Lucretius to Patrick Matthew, had grasped that there was something inevitably true in the idea of Natural Selection. Had neither Darwin nor Wallace existed, someone else would have come up with a similar theory; many practical people, whether pigeon fanciers or dog breeders, had already grasped the practical principles quite well.

But, for its time, sexual selection was a truly extraordinary, out-of-the-box idea. It still is. Once you understand sexual selection, along with costly signaling, Fisherian runaway, proxies, heuristics, satisficing and so forth, a whole host of behaviors which were previously baffling or seemingly irrational suddenly make perfect sense.

The body of ideas which fall out of sexual selection theory explains not only natural anomalies such as the peacock’s tail; it also explains the extraordinary popularity of many seemingly insane human behaviors and tastes, from the existence of Veblen goods such as caviar to more mundane absurdities such as the typewriter.

(For almost a century during which few men knew how to type, the typewriter must have damaged business productivity to an astounding degree; it meant that every single business and government communication had to be written twice: once by the originator in longhand and then once again by the typist or typing pool. A series of simple amendments could delay a letter or memo by a week. But the ownership and use of a typewriter was a necessary expense to signal that you were a serious business. Any provincial solicitor who persisted in writing letters by hand became a tailless peacock.)

But, take note, I have committed the same offense which everyone else does when writing about sexual selection. I have confined my examples of sexual selection to those occasions where it runs out of control and leads to costly inefficiencies. Typewriters, Ferraris, peacock’s tails. Elks will make an appearance any moment now, you expect. But this is unfair.  

You may have noticed that there are very few famous Belgians. This is because when you are a famous Belgian (Magritte, Simenon, Brel) everyone assumes you are French. 

In the same way, there are few commonly cited examples of successful sexual selection because, when sexual selection succeeds, people casually attribute the success to natural selection.  

But the tension between sexual and natural selection, and the interplay between the two divergent forces, may be the really big story here. Many human innovations would not have got off the ground without the human status-signaling instinct. (For a good decade or so, cars were inferior to horses as a mode of transport; it was human neophilia and status-seeking (car races), not the pursuit of “utility,” which gave birth to the Ford Motor Company.) So might it be the same in nature? That, in the words of Geoffrey Miller, sexual selection provides the “early stage funding” for nature’s best experiments? So the sexual fitness advantages of displaying ever more plumage on the sides of a bird (rather than, like the peacock, senselessly overinvesting in the rear spoiler) may have made it possible for birds to fly. The human brain’s capacity to handle a vast vocabulary probably arose more for the purposes of seduction than anything else. But most people will avoid giving credit to sexual selection where they possibly can. When it works, sexual selection is called natural selection.

Why is this? Why the reluctance to accept that life is not just a narrow pursuit of greater efficiency, and that there is room for opulence and display as well? Yes, costly signaling can lead to economic inefficiency, but such inefficiencies are also necessary to establish valuable social qualities such as trustworthiness and commitment, and perhaps altruism. Politeness and good manners are, after all, simply costly signaling in face-to-face form.

Why are people happy with the idea that nature has an accounting function, but much less comfortable with the idea that nature necessarily has a marketing function as well? Should we despise flowers because they are less efficient than grasses? 

If you are looking for underrated, under-promulgated ideas, a good place to start is always with those ideas which, for whatever reason, somehow discomfort people both on the political left and the political right. 

Sexual selection is one such idea. Marxists hate the idea. Neo-liberals don’t like it. Yet, I would argue, when the concepts that underlie it, and the effects it has wrought on our evolved psychology, are better understood, it could be the basis for a new and better form of economics and politics. A society in which our signaling instincts were channeled towards positive-sum behavior could be far happier while consuming less.

But even Russel Wallace hated the idea of sexual selection. For some reason it sits in that important category of ideas which most people, and intellectuals especially, simply do not want to believe. Which is why it is my candidate today for the idea most in need of wider transmission.

June Gruber
Assistant Professor of Psychology, University of Colorado, Boulder

Emotions are contagious. They are rapidly, frequently, and at times even automatically transmitted from one person to the next. Whether it be mind-boggling awe watching the supermoon display its lunar prowess or pangs of anger observing palpable racial injustice, one feature remains salient: we can and often do “catch” the emotions of others. The notion that emotions are contagious dates back as early as the 1750s, when Adam Smith documented the seamless way people tend to mimic the emotional expressions, postures, and even vocalizations of those they interact with. In the late 1800s, Charles Darwin further emphasized that this highly contagious nature of emotions was fundamental to the survival of humans and nonhumans alike in transmitting vital information among group members. These prescient observations underscore the fact that emotion contagion is pervasive and universal and, hence, why it ought to be more widely known.

More recent scientific models of emotion contagion expound the features of, and mechanisms by which, we are affected by, and in turn affect, the emotions of others. Emotional contagion has been robustly supported in laboratory studies eliciting transient positive and negative emotion states among individual participants, as well as in efforts outside the laboratory focused on longer-lasting mood states, such as happiness, among large social networks. Importantly, emotion contagion matters: it is in the service of critical processes such as empathy, social connection, and relationship maintenance between close partners.

When disrupted, faulty emotion contagion processes have been linked to affective disturbances. With the rapid proliferation of online social networks as a main forum for emotion expression, we know too that emotion contagion can occur without direct interaction between people, or when nonverbal emotional cues in the face and body are altogether absent. Importantly, too, this type of contagion spreads across a variety of other psychological phenomena that directly or indirectly involve emotions, ranging from kindness and health-related eating behaviors to the darker side of human behavior, including violence and racism. Emotion contagion matters, for better and for worse.

Although well documented, emotion contagion warrants room to widen its own scientific scope. One domain my colleagues Nicholas Christakis, Jamil Zaki, Michael Norton, Ehsan Hoque and Anny Dow have been trailblazing is the uncharted realm of positive emotion contagion. Surprisingly, we know little about the temporal dynamics of our positive experiences, including those that appear to connect us with others and should thereby propagate rapidly (such as joy or compassion) versus those that might socially isolate us from others (such as hubristic pride). Given the vital role positive emotions play in our well-being and physical health, it is critical to better understand how we transmit these pleasant states within and across social groups. Like waves, emotions cascade across time and geographical space. Yet their ability to cascade across psychological minds is unique and warrants wider recognition.

Stewart Brand
Founder, the Whole Earth Catalog; Co-founder, The Well; Co-Founder, The Long Now Foundation, and Revive & Restore; Author, Whole Earth Discipline

Wildlife populations are most threatened when their numbers become reduced to the point that their genetic diversity is lost. Their narrowing gene pool can accelerate into what is called an “extinction vortex.” With ever fewer gene variants (alleles), the ability to adapt and evolve declines. As inbreeding increases, deleterious genes accumulate, and fitness plummets. The creatures typically have fewer offspring, many of them physically or behaviorally impaired, susceptible to disease, increasingly incapable of thriving. Most people assume such populations are doomed, but that no longer has to be what happens.
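
As a rough numerical aside (mine, not Brand’s), the standard drift formula for expected heterozygosity, H_t = H_0 (1 − 1/(2Ne))^t, shows why small isolated populations lose variation so fast, and what genetic rescue interrupts. The population sizes below are arbitrary.

```python
# Sketch: expected loss of heterozygosity (a proxy for genetic diversity) in an
# isolated population of effective size Ne, using the standard drift formula
# H_t = H_0 * (1 - 1/(2*Ne))**t. Numbers are illustrative, not from the essay.

def heterozygosity(h0, ne, generations):
    return h0 * (1 - 1 / (2 * ne)) ** generations

for ne in (25, 250, 2500):
    print(ne, round(heterozygosity(0.5, ne, generations=100), 3))
# The smallest population loses most of its variation within ~100 generations;
# bringing in unrelated breeders ("genetic rescue") resets this decline.
```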

“Genetic rescue” restores genetic diversity. Conservation biologists are warming to its use with growing proof of its effectiveness. One study of 156 cases of genetic rescue showed that 93% had remarkable success. The most famous case was a dramatic turnaround for the nearly extinct Florida panther. By the mid-1990s only 26 were left, and they were in bad shape. In desperation, conservationists brought in 8 female Texas cougars (which are closely related to the Florida cats). Five of the females reproduced. The result of the outcrossing was a rapid increase in litter success—424 panther kittens born in the next 12 years. The previous population decline of 5% a year reversed to population growth of 4% a year. Signs of inbreeding went away, and signs of increasing fitness grew. Scientists noticed, among other things, that the genetically enriched panthers were becoming harder to capture.

Often genetic diversity can be restored by means as straightforward as connecting isolated populations with wildlife corridors or larger protected areas, but new technological capabilities are broadening the options for genetic rescue. Advanced reproductive technology offers an alternative to transporting whole genetically-distinct parents—artificial insemination has brought genetic refreshment to cheetahs, pandas, elephants, whooping cranes, and black-footed ferrets. With the cost of genome sequencing and analysis coming down, it is becoming possible to examine each stage of genetic rescue at the gene level instead of having to wait for external traits to show improvement. This has already been done with Rocky Mountain bighorn sheep.

Another strategy being considered is “facilitated adaptation.” Different populations of a species face different local challenges. When a particular population can’t adapt fast enough to keep up with climate change, for example, it may be desirable to import the alleles from a population that has already adapted. With gene editing becoming so efficient (CRISPR etc.), the desired genes could be introduced to the gene pool directly. If necessary, the needed genes could even come from a different species entirely. That is exactly what has been done to save the American chestnut from the fungus blight that killed four billion trees early in the 20th century and reduced the species to functional extinction. Two fungus-resistant genes were added from wheat, and the trees were made blight-proof. They are now gradually in the process of being returned to their keystone role in America’s great eastern forest.

One further reservoir of genetic variability has yet to be employed. In museums throughout the world there are vast collections of specimens of species that have been reduced to genetically impoverished remnant populations in the wild or in captive breeding programs. Those museum specimens are replete with “extinct alleles” in their preserved (though fragmented) DNA. Ancient-DNA sequencing and analysis is becoming so precise that the needed alleles can be identified, reproduced, and reintroduced to the gene pool of the current population, restoring its original genetic diversity. The long-dead can help rescue the needful living.


Tor Nørretranders
Writer; Speaker; Thinker, Copenhagen, Denmark

How to respond to change? Try to keep your inner state constant or instead adjust your inner state according to the external change?        

Staying constant inside is the classic idea of homeostasis, pioneered by the physiologist Claude Bernard in 1865, with the name coined by the physiologist Walter Cannon in 1926. Homeostasis describes an essential feature of all living things: they define an inside and keep it stable in an unstable environment. Body temperature is a classic example.

Homeostasis is, however, not very dynamic or Darwinian: the business of living creatures is not to optimize their interior state. It is to survive, whether or not the internal state is stable.

Therefore, the concept of allostasis was created in the 1980s by the neuroscientist Peter Sterling and coworkers. The word allostasis means a changing state, whereas homeostasis means staying in about the same state. The idea of allostasis is that the organism will change its inner milieu to meet the challenge from the outside. Blood pressure is not constant, but will be higher if the organism has to be very active and lower if it does not have to be.

Constancy is not the ideal. The ideal is to have the relevant inner state for the particular outer state.        

The stress reaction is an example of allostasis: When there is a tiger in the room it is highly relevant to mobilize all the resources available. Blood pressure and many other parameters go up very quickly. All depots are emptied.        

The emergency stress reaction is a plus for survival, but only when there is a stressor to meet. If the reaction is permanent, it is not relevant, but dangerous.        

Allostasis also brings out another important physiological feature: looking ahead in time. Where homeostasis is about conserving a state and therefore looking back in time, allostasis looks forward. What will be the most relevant inner state in the next moment?        

The role of the brain is essential in allostasis because it predicts the environment and allows for adjustment, so that blood pressure or blood glucose level can become relevant to what is up.       

Although born in physiology, it is likely that the idea of allostasis in the coming years can become important as an umbrella for trends currently fermenting in the understanding of the mind.        

States of mind exemplify the role of relevance: It is not always relevant to be in a good mood. When the organism is challenged, negative emotions are highly relevant. But if they are there all the time, negative emotions become a problem. When there is no challenge it is more relevant to have positive emotions, which will broaden your perspective and build new relationships, as described by the psychologist Barbara Fredrickson.

Reward prediction has over the past decades become a key notion in understanding perception and behavior in both robots and biological creatures. Navigation is based on predictions and prediction errors rather than a full mapping of the entire environment. The world is described inside-out by throwing predictions at it and seeing how they work out. “Controlled hallucination” has become a common phrase to describe this process of generously imagining or predicting a spectrum of perceptual sceneries, made subject to selection by experience. It is very much like the scientific process of making hypotheses and testing them.
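
A minimal sketch of the prediction-error idea (my illustration, not the author’s): an internal estimate is nudged toward each new observation in proportion to how wrong the prediction was, rather than being rebuilt from a complete map of the world. The learning rate and observations are arbitrary.

```python
# Delta-rule / prediction-error update: the internal estimate is corrected by a
# fraction of the error between prediction and observation. Illustrative only.

def update(estimate, observation, learning_rate=0.2):
    error = observation - estimate        # prediction error
    return estimate + learning_rate * error

estimate = 0.0
for observation in [1.0, 1.0, 1.0, 0.0, 0.0]:   # a changing environment
    estimate = update(estimate, observation)
    print(round(estimate, 3))
# The estimate tracks the environment by minimizing prediction error,
# looking forward to what is likely next rather than storing a complete map.
```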

Prospection, originally described by Daniel Gilbert in 2005, allows a person to imagine several possible futures and observe the internal emotional reaction to them. Anticipating allostasis.       

Allostasis is an important concept for science because it roots the future-oriented aspects of the mind in bodily physiology.

It is an important concept in everyday life because it points to the importance of embracing change.

Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School; Author, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

There's a concept from computer security known as a class break. It's a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might be a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs that system's software, or a vulnerability in Internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet.

It's a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn't have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone—or to lots of people—without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn't guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker, but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely—they could open every door lock remotely at the same time. That's a class break.

It's how computer systems fail, but it's not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don't think of hackers remotely taking control of cars over the Internet. Or, remotely disabling every car over the Internet. We think about voting fraud as unauthorized individuals trying to vote. We don't think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It's the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or to no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn't happen at all.

But there's a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter are human-directed. Floods don't change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they'll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle—where the potential of harm is so great that we err on the side of not deploying a new technology without proofs of security—will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It's not an inherently less secure world, but it's a differently secure world. It's a world where driverless cars are much safer than people-driven cars, until suddenly they're not. We need to build systems that assume the possibility of class breaks—and maintain security despite them.

Yuri Milner
Physicist; Entrepreneur & Venture Capitalist; Science Philanthropist

In 1977 the Voyager probes were launched toward the outer Solar System, each carrying a Golden Record containing hundreds of sounds and images, from the cry of a newborn baby to the music of Beethoven. In the emptiness of space, they could last for millions of years. By that time, will they be the sole representatives of human culture in the cosmos? Or primitive relics of a civilization that has since bloomed to the galactic scale?

The Drake equation estimates the number of currently communicative civilizations in the Milky Way, by multiplying a series of terms, such as the fraction of stars with planets and the fraction of inhabited planets on which intelligence evolves. The final term in the equation does not get much attention. Yet it is crucial not just for the question of intelligent life, but for the question of how to live intelligently. This is L, the “Longevity Factor,” and it represents the average lifespan of a technological civilization.
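
For readers unfamiliar with its form, the equation multiplies seven factors, the last of which is L. The sketch below spells this out; the numerical values plugged in are arbitrary placeholders for illustration, not estimates from the essay.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# N = number of currently communicative civilizations in the galaxy
# L = average lifetime of such a civilization (the "Longevity Factor")
# The numerical values below are placeholders for illustration only.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, longevity):
    return r_star * f_p * n_e * f_l * f_i * f_c * longevity

N = drake(r_star=1.0,       # star formation rate per year
          f_p=0.5,          # fraction of stars with planets
          n_e=1.0,          # habitable planets per such system
          f_l=0.1,          # fraction on which life appears
          f_i=0.01,         # fraction where intelligence evolves
          f_c=0.1,          # fraction that become communicative
          longevity=10_000) # years a civilization remains detectable
print(N)  # with everything else fixed, N scales linearly with L
```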

What determines this average? Surely the intelligence of the civilizations. The list of existential threats to humanity includes climate change, nuclear war, pandemics, asteroid collisions and perhaps AI. And all of these can be avoided. Some can be addressed here on Earth. Others require activity in space, but with the ultimate aim of protecting the planet.

In 1974, Princeton physicist Gerard K. O’Neill published a paper "The Colonization of Space," which led to the first conference on space manufacturing, sponsored by Stewart Brand’s Point Foundation, and to O’Neill’s highly influential 1976 book, The High Frontier. That has been an inspiration for the current generation of visionaries who are advocating steps such as the transfer of heavy industry into orbit, where it can run on solar energy and direct its heat and waste away from Earth, and the colonization of Mars.

However powerful our local solutions, betting everything on one planet would be imprudent. Stephen Hawking has estimated that “although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years. By that time we should have spread out into space.”

In the long term, Mars must be a stepping-stone to more distant destinations, because two adjacent planets could be simultaneously affected by the Universe’s more violent events, such as a nearby supernova. That means we need to start thinking at the galactic level. The first target might be the Earth-size planet Proxima b, recently discovered around the nearest star to the Sun, 4.2 light years away. Sooner rather than later, we will have to master propulsion fast enough to make interstellar journeys practical. Perhaps by that time we will have developed beyond our organic origins. It has been estimated that von Neumann probes—robots that could land on a planet, mine local materials and replicate themselves—could colonize the entire galaxy within ten million years.

But even a galactic civilization might face existential threats. According to our current understanding of the laws of physics, in any region of space there is a chance that a “death bubble” forms, and then expands at speeds approaching the speed of light. Because the physics inside the bubble would differ from that of ordinary space, as it expanded it would destroy all matter, including life. The chances of this happening in a given year may seem very low—perhaps less than one in ten billion. But as Hawking reminded us, if you wait long enough the improbable is inevitable.

Yet the renowned physicist Ashoke Sen has recently suggested that even in the face of death bubbles, there might be an escape route. The loophole is the accelerating expansion of the Universe. In 1998, astronomers discovered that all galaxies that are not strongly bound together by gravity are moving apart ever faster. This accelerating expansion will eventually carry them beyond each other’s “cosmic horizon”—so far apart that even light from one can never reach the other. That means they can no longer communicate; but on the bright side, they can also never be swallowed by the same death bubble.

So by splitting into daughter civilizations and putting as much distance between them as possible, a civilization could “ride” the expansion of the Universe to relative safety. Of course, another death bubble will eventually pop up within any cosmic horizon, so the remaining civilizations need to keep replicating and parting ways. Their chances of survival depend on how far they can travel: if they were able to move at a substantial fraction of the speed of light, they could raise their chances of survival dramatically. But even those that dispersed only far enough that their galaxies were not bound together by gravity—about 5 million light years apart—might significantly improve their odds.

Such problems may seem remote. But they will be real to our descendants—if we can survive long enough to allow those descendants to exist. Our responsibility as a civilization is to keep going for as long as the laws of physics allow.

The longevity factor is a measure of intelligence: the ability to predict potential problems and solve them in advance. The question is, how intelligent are we?

Sam Harris
Neuroscientist; Philosopher; Author, Making Sense

Wherever we look, we find otherwise sane men and women making extraordinary efforts to avoid changing their minds.

Of course, many people are reluctant to be seen changing their minds, even though they might be willing to change them in private, seemingly on their own terms—perhaps while reading a book. This fear of losing face is a sign of fundamental confusion. Here it is useful to take the audience’s perspective: Tenaciously clinging to your beliefs past the point where their falsity has been clearly demonstrated does not make you look good. We have all witnessed men and women of great reputation embarrass themselves in this way. I know at least one eminent scholar who wouldn’t admit to any trouble on his side of a debate stage were he to be suddenly engulfed in flames.

If the facts are not on your side, or your argument is flawed, any attempt to save face is to lose it twice over. And yet many of us find this lesson hard to learn. To the extent that we can learn it, we acquire a superpower of sorts. In fact, a person who surrenders immediately when shown to be in error will appear not to have lost the argument at all. Rather, he will merely afford others the pleasure of having educated him on certain points.

Intellectual honesty allows us to stand outside ourselves and to think in ways that others can (and should) find compelling. It rests on the understanding that wanting something to be true isn’t a reason to believe that it is true—rather, it is further cause to worry that we might be out of touch with reality in the first place. In this sense, intellectual honesty makes real knowledge possible.

Our scientific, cultural, and moral progress is almost entirely the product of successful acts of persuasion. Therefore, an inability (or refusal) to reason honestly is a social problem. Indeed, to defy the logical expectations of others—to disregard the very standards of reasonableness that you demand of them—is a form of hostility. And when the stakes are high, it becomes an invitation to violence.

In fact, we live in a perpetual choice between conversation and violence. Consequently, few things are more important than a willingness to follow evidence and argument wherever they lead. The ability to change our minds, even on important points—especially on important points—is the only basis for hope that the human causes of human misery can be finally overcome. 

Susan Blackmore
Psychologist; Visiting Professor, University of Plymouth; Author, Consciousness: An Introduction

Where does all the design in the universe come from? Around me now, and indeed almost everywhere I go, I see a mixture of undesigned and designed things: rocks, stars and puddles of rain; tables, books, grass, rabbits and my own hands.  

The distinction between designed and undesigned is not commonly made this way. Typically people are happy to divide rocks from books on the grounds that rocks were not designed for a function while books were. Books have an author, a publisher, a printer, and a cover designer, and that means real top-down design. As for grass, rabbits and hands, they do serve functions, but they evolved through a mindless bottom-up process, and that is not real design.

This other distinction between real design and evolved design is sometimes explicitly stated, but even when it’s not, the fear of attributing design to a mindless process is revealed in the scare quotes that evolutionary biologists sometimes put around the word “design.” Eyes are brilliantly designed for seeing and wings are designed for flying. But they are only “as if” designed. They were produced not top-down by a mind with a plan but bottom-up by an utterly mindless process. In other words, our human real design is different.

If the concept of replicator power were better known, this false distinction might be dropped, because we would see that all design depends on the same underlying process. A replicator is information that affects its environment so as to make new copies of itself. Its power derives from its role as information undergoing the evolutionary algorithm of copying with variation and selection, the process that endlessly increases the available information. Genes are the most obvious example, with the varying creatures they give rise to being known as their vehicles or interactors. It is these that are acted on by natural selection to determine the success or failure of the replicator.
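
A toy illustration of copying with variation and selection, in the spirit of Dawkins’s well-known “weasel” program (a sketch of the general algorithm, not anything from Blackmore’s essay): a string is copied with occasional errors, and the copy that best fits its selective environment becomes the parent of the next generation.

```python
import random

# Toy replicator: copying with variation (mutation) and selection. The "target"
# stands in for whatever the environment happens to favor; all parameters are
# illustrative.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(text, rate=0.05):
    """Copy a string, occasionally substituting a random character."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in text)

def fitness(text):
    """How well the copy fits the selective environment."""
    return sum(a == b for a, b in zip(text, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]   # copying with variation
    parent = max(offspring + [parent], key=fitness)    # selection
    generation += 1
print(f"reached the target in {generation} generations")
```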

The value of the term replicator lies in its generality. This is what Dawkins emphasised in writing about universal Darwinism—applying Darwin’s basic insight to all self-replicating information. Whenever there is a replicator in an appropriate environment, design will ensue. This is why Dawkins invented the term meme, to show that there is more than one replicator evolving on this planet. He also made the claim, which follows from this way of thinking about evolution, that all life everywhere evolves by the differential survival of replicators. I would add that all intelligence everywhere evolves by the differential survival of replicators. 

This makes clear that human design is essentially no different from biological design. Both depend on a replicator being copied, whether what is copied is the order of bases in a molecule of DNA or the order of words in a book. In the molecular case new sequences arise from copying errors, mutations and recombination. In the writing of a book new sequences arise from a person recombining familiar words into new phrases, sentences and paragraphs. In both examples multiple different sequences are created and very few survive to be copied again. In both examples creatively designed products emerge through replicator power. To see human design this way is to drop the assumption that top-down control, intelligence and planning are essential to creativity, and to see that these capacities and the designs they create are consequences, not causes, of evolution.

Accepting this may be uncomfortable as it means seeing that everything we think we designed all by ourselves was really designed by a clueless bottom-up process using us as copying machinery. The unease may be similar to that reputed to have been expressed by the Bishop of Birmingham’s wife—that such knowledge belittles us and diminishes our humanity and power. Yet we have (more or less and in only some parts of the world) learned to embrace rather than fear the knowledge that our bodies evolved by mindless bottom-up processes. This is another step in the same direction. 

The significance of recognising replicator power is that there are other replicators out there and may soon be more. The mindless processes that turned us from an ordinary ape into a speaking, meme-copying ape allowed us to produce tables, books, cars and planes, and—in a crucial step—copying machines. These include writing leading to printing presses, potters’ wheels and woodworking tools leading to factories, and now computers leading to the information explosion.  

Artificial intelligences, whether confined to desktop boxes and robots or distributed in cyberspace, have been created by replicator power just as our own intelligence was created by replicator power. They are evolving far faster than we did and may yet give rise to further, even faster replicators. That power will not stop because we want it to. And its products will certainly not be impressed by our claims to be in control or to have designed the machinery on which they thrive. That intelligence will continue to evolve, and the sooner we accept the idea of replicator power the more realistic we are likely to be about the future of our life with intelligent machines.

Dustin Yellin
Artist; Founder, Pioneer Works

Time is a scientific concept that deserves greater thought and study, though, despite advances in describing the mathematical behavior of time over long horizons and under what I would call extreme conditions, it is a concept we will never be positioned to properly understand.

Time is not simply a matter of duration; time is movement, motion, transition and change. It’s not static, to be ticked off, eliminated. If we consider one of the best-known implications of e=mc², time is relative; it depends upon perspective. Black holes, as hungry hippos of matter, could be considered a waste of time. Some say time flies when you’re having fun, lending colloquial truth to the notion that time is relative.

There are boundaries to time in the temporal/spatial sphere just as there are in the sphere of neuro-chemical subjectivity. Time varies according to experience, perception of pain, focus, thought or its absence. Meditation can make time meaningless. Mindlessness, in a sense, does away with time, just as enjoyment, like a TV-movie, is often mindless, and over before we know it.  

Self-help books talk confidently about making time work for you; so do investment advisors. Physicists and gods laugh when we make plans. Some say all matter is basically fixed; others say bodies reconstitute molecularly, and most agree that neither energy nor matter is created or destroyed. Since 1662, we’ve invoked the immortal in the graveside prayer, “ashes to ashes, dust to dust,” and so shall we all cycle in this eternal recurrence. Time plays the trump in every hand.

How does consciousness change with time? The serendipitous morphology of your world and your body is a brief flicker. We grapple with time, we fight gravity always seeking to bury us, we lean into the wind of prevailing wisdom; we rage against the uncaring forces of the cosmos, the amoral tyrant that is time itself. 

Yet time is also the giver of life. It allows cell division, growth, love—complex states are achieved through the patterned development of eons of adaptive change; new states are reproduced through genetic programming and chromosomal mutations. Time gives our lives genetic high fidelity and brings the record player to the party too.

What is matter but a cousin of time? Or a relative, at least. Matter is inextricably linked to time in a relationship of cause and effect, effect and cause; now and again the quantum world behaves like a good boy should. 

Lifespan is time-dependent, geology is time-dependent, volcanic belly-dancing and sculpted canyons are nothing if not the gorgeous supple movements of time, matter, and energy dancing to a particular slo-mo rhythm. The canyons of Zion, the craters of our Moon, the birth of the baby Krakatoa, the giant born from the belly of the sea—all miracles of time no less marvelous for being partially understood. 

Time is judge and jury, but perhaps not the best conversationalist. Time’s a little slow on the uptake. Each of us is married to time; divorce is impossible. Thought, repeated over time, is part of the dowry the universe has given us. We have the ability to direct the actual arrangement and neurological formation of our brains by grappling regularly with the Sunday New York Times crossword puzzle; such mind-bending activities keep the mind elastic and supple.

Zen meditation, cyclical breathing, yoga, Tantra: these are ways of taking time for a walk, though in the end, perhaps time is holding the leash and we are being led inexorably to an end most of us struggle mightily to frustrate. Time is the totalitarian ticket-seller of existence, the wizard behind the curtain, the puppeteer.

The plastic arts are ruled by time too. If you can’t keep time behind the drum kit, you’re out of the band. The cathartic transcendence of music is rooted in meter, rhythm, repetition, leitmotif, climax—time is the key to what makes music particularly moving. Consider for a moment electronic music. Drum beats are easily quantized, made to fit a mathematical rhythm perfectly, but the effect is inhuman, robotic; the beat seems to slam you over the head like a hangover. It is the intersection between time and human frailty that makes everyone want to smile and dance, to feel the pocket of the beat.

Whatever may be the proper domain of time, clearly it is beyond human reckoning. Time is godlike; it is both completely outside of our ken and inextricable from the best parts of life. Eternal life, arguably, would be less precious than a finite one, and it is because we lay our heads at the feet of time that we are able to find joy in life, to the extent we do. All good things come to those who wait, goes another saying, and I argue that time—though we live by it every day and die by it in the great goodnight—time is a concept both otherworldly and mundane, fresh and worn. Our understanding of it will never be complete. 

Beatrice Golomb
Professor of Medicine at UCSD

Enshrined in the way much medical research is done is the tacit assumption that an exposure has an effect on an outcome. To quote Wikipedia: “Effect modification occurs when the magnitude of the effect of the first exposure on the outcome—the association—differs depending on the level of a third variable. In this situation, computing the overall effect of the association is misleading.”  

The reach of Effect Modification (EM) is wildly underappreciated. Its implications challenge the core of how medicine, and indeed science, are practiced.

The results of randomized, double-blinded, placebo-controlled trials (RCTs) are the foundation on which modern medicine rests. RCTs are exalted as the gold standard of study designs. These define treatment approaches, which are propelled into use through “clinical practice guidelines,” with teeth in their implementation imposed through “performance pay” to doctors. But there is a problem. Often, a single estimate of effect is generated for a given outcome and presumed to apply generally (and perhaps, if favorable, to impel treatment for those outside the study).

But results of studies—including but not exclusively RCTs—may not apply to those outside the study. They may not apply to some who are enrolled in the study. They may not, in fact, apply to anyone in the study.

The chief recognized problem inherent to RCTs is generalizability (sometimes called “external validity”). Results need not apply to types of people outside of the study (e.g., studies of men may not apply to women) because of the potential for EM. Less appreciated is that EM also means that the results of a trial need not apply to all those within the study. To ascribe what is true for a whole to its parts is to succumb to the “fallacy of division.”

It is seldom appreciated that EM can engender differences not only in the magnitude but in the sign of an effect. When subsets experience opposite effects, a neutral finding for the overall study can apply to none of its participants.
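
A toy simulation (mine, not Golomb’s) makes the point concrete: when two subgroups respond to a treatment in opposite directions, the overall estimate comes out near zero and describes neither subgroup. All numbers are invented for illustration.

```python
import numpy as np

# Toy RCT with effect modification: treatment raises the outcome by +5 in
# subgroup A and lowers it by -5 in subgroup B. Values are illustrative.
rng = np.random.default_rng(1)
n = 1000
subgroup = rng.choice(["A", "B"], size=n)
treated = rng.choice([0, 1], size=n)
effect = np.where(subgroup == "A", 5.0, -5.0)          # opposite true effects
outcome = 100 + treated * effect + rng.normal(0, 3, n)

def avg_effect(mask):
    """Difference in mean outcome, treated vs. untreated, within a mask."""
    return outcome[mask & (treated == 1)].mean() - outcome[mask & (treated == 0)].mean()

print("overall:", round(avg_effect(np.ones(n, dtype=bool)), 2))   # ~0: looks like "no effect"
print("subgroup A:", round(avg_effect(subgroup == "A"), 2))       # ~ +5
print("subgroup B:", round(avg_effect(subgroup == "B"), 2))       # ~ -5
```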

Bidirectional EM effects (let’s call these “Janus effects”) are not rare. Penicillin can save lives, but can cost life in the highly allergic. A surgery may save lives, but take the lives of poor surgical candidates. Fluoroquinolone antibiotics can both raise and lower blood sugar. Benzodiazepine anxiolytics can “paradoxically” increase anxiety. Bisphosphonates, given to prevent fractures, can cause “pathological fractures.” Statins prevented new diabetes (WOSCOPS trial) and promoted it (JUPITER trial); they reduced cancer deaths (JUPITER trial), but significantly increased new cancer (PROSPER trial, the sole trial in those over age 70).

How is this possible? One factor is that agents that can yield antioxidant effects (like statins) are almost always prooxidant in some patients and settings, including at high doses, where co-antioxidants are depleted. Conversely, agents that can have prooxidant effects can, not uncommonly, have antioxidant effects at sufficiently low doses, for some people, via “oxidative preconditioning,” by which a bit of oxidative stress ramps up endogenous antioxidant defenses. For agents meant to alter an aspect of physiology, counterregulatory mechanisms imposed by evolution may partly offset the intended effects, and in some people overshoot. So drugs and salt restriction that are meant to lower blood pressure paradoxically raise blood pressure in some.

Also, many exposures activate multiple mechanisms, which may act in opposition on an outcome. Imbibing alcohol can prevent stroke, via antioxidant polyphenols and thinning the blood, and can promote stroke, via mitochondrial dysfunction, arrhythmia, and hypertension (or by thinning the blood too much). Swilling coffee is linked to reduced heart attacks in genetically fast caffeine metabolizers (likely through antioxidant effects), but increased heart attacks in slow caffeine metabolizers (likely via caffeine-induced adrenergic effects).

The implications are rife. When studies of the same intervention produce different, or even opposite, results, this apparent “nonreproducibility” need not mean there were study flaws, as is often presumed: the “contradictory” results may all be true.

For Janus effects, whether an exposure’s effects on an outcome are favorable, adverse, or neutral may depend on the composition of the study group. Evidence supports such bidirectional effects for statins with outcomes including diabetes, cancer, and aggression. Selection of a study group that yields sizeable “benefit” (or, for environmental exposures, no harm) may drive a product to be recommended, or an exposure mandated, for vast swaths of the populace, with potential for harm to many.

Individual experiences that controvert RCT results should not be scorned, even if a source of EM is not (yet) known. Those who observed their blood sugar rise on statins were dismissed—given neutral average statin effects on glucose in RCTs, then disparaged more contemptuously after the WOSCOPS trial reported that statins reduced diabetes risk. Later, multiple other trials, and meta-analyses, showed that statins can increase diabetes incidence. That was equally true, of course, before these later trials were published. (Now that it is accepted that statins can increase glucose, recognition that statins can reduce glucose has faded.)

So, conventional thinking about studies’ implications must be jettisoned. It would be convenient if the observed association in a good-quality study could be taken as the final word. But EM may be more the rule than the exception, at least in complex domains like biology and medicine, whence “computing the overall effect” can be misleading. An effect cannot be presumed to reliably hew to what any study “shows” in magnitude, or even direction.

The pesky play of effect modification must be borne in mind.

seth_lloyd's picture
Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

The word "meme" denotes a rapidly spreading idea, behavior, or concept. Richard Dawkins originally coined the term to explain the action of natural selection on cultural information. For example, good parenting practices lead to children who survive to pass on those good parenting practices to their own children. Ask someone under twenty-five what a meme is, however, and chances are you will get a different definition: typing "meme" into Google Images yields page after page of photos of celebrities, babies, and kittens, overlaid with somewhat humorous text. Memes spread only as rapidly as they can reproduce. Parenting is a long-term and arduous task that takes decades to reproduce itself. A kitten photo reproduces in the few seconds it takes to resend. Consequently, the vast majority of memes are now digital, and the digital meaning of meme has crowded out its social and evolutionary meaning. 

Even in their digital context, memes are still usually taken to be a social phenomenon, selected and re-posted by human beings. Human beings are increasingly out of the loop in the production of viral information, however. Net bots who propagate fake news need not read it. Internet viruses that infect unprotected computers reproduce on their own, without human intervention. An accelerating wave of sell orders issued by high-frequency stock trading programs can crash the market in seconds. Any interaction between systems that store and process information will cause that information to spread; and some bits spread faster than other bits. By definition, viral information propagates at an accelerating rate, driving stable systems unstable. 

Accelerating flows of information are not confined to humans, computers, and viruses. In the 19th century, physicists such as Ludwig Boltzmann, James Clerk Maxwell, and Josiah Willard Gibbs recognized that the physical quantity called entropy is in fact just a form of information: the number of bits required to describe the microscopic motion of atoms and molecules. At bottom, all physical systems register and process information. The second law of thermodynamics states that entropy tends to increase: this increase of entropy is nothing more or less than the natural tendency of bits of information to reproduce and spread. The spread of information is not just a human affair, it is as old as the universe.
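
For readers who want the dictionary between the two languages spelled out, the standard textbook identity (a reminder of well-known statistical mechanics, not something derived in this essay) is:

$$ S \;=\; k_B \ln W \;=\; (k_B \ln 2)\,\log_2 W $$

so, up to the conversion factor $k_B \ln 2$, the thermodynamic entropy $S$ is just $\log_2 W$: the number of bits needed to single out one microstate among the $W$ possibilities.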

In systems governed by the laws of gravitation, such as the universe, information tends to spread at an accelerating rate. This accelerating spread of information stems from a centuries-old observation in classical mechanics called the virial theorem. The virial theorem (from the Latin "vis" or "strength," as opposed to the Latin "virus" or "slimy poison") implies that when gravitating systems lose energy and information, they heat up. A massive cloud of cool dust in the early universe loses energy and entropy and clumps together to form a hot star. As the star loses energy and entropy, radiating light and heat into the cold surrounding space, the star grows hotter, not colder. In our own star, the sun, orderly flows of energy and information between the sun's core, where nuclear reactions take place, and its outer layers result in stable and relatively constant radiation for billions of years. A supermassive star, by contrast, radiates energy and information faster and faster, becoming hotter and hotter in the process. Over the course of a few hundred thousand years, the star burns through its nuclear fuel, its core collapses to form a black hole (an event called the "gravothermal catastrophe"), and the outer layers of the star explode as a supernova, catapulting light, energy, and information across the universe.
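
The step from "loses energy" to "heats up" can be written in one line. As a sketch, using the textbook virial relation for a bound, self-gravitating system in equilibrium (standard mechanics, not quoted from this essay):

$$ 2\langle K\rangle + \langle U\rangle = 0 \quad\Longrightarrow\quad E \;=\; \langle K\rangle + \langle U\rangle \;=\; -\langle K\rangle $$

Since the average kinetic energy $\langle K\rangle$ sets the temperature, radiating energy away (making $E$ more negative) forces $\langle K\rangle$, and with it the temperature, to rise: in effect a negative heat capacity, which is why the collapsing cloud and the aging star grow hotter as they lose energy.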

Accelerating flows of information are a fundamental part of the universe: we can't escape them. For human beings, the gravitational instability implied by the virial theorem is a blessing: we would not exist if the stars had not begun to shine. The viral nature of digital information is less blessed. Information that reproduces itself twice in a second wins out over information that only reproduces once a second. In the digital memes ranked as most popular by Google Images, this competition leads to a race to the bottom. Subtlety, intricacy, and nuance take longer to appreciate, and so add crucial seconds to the digital meme reproduction process, leading to a dominance of dumb and dumber. Any constraint that puts information at a disadvantage in reproducing causes that information to lose out in the meme-race. Truth is such a constraint. Fake news can propagate more rapidly than real news exactly because it is unconstrained by reality, and so can be constructed with reproduction as its only goal. The faulty genetic information contained in cancerous cells can propagate faster than correct genetic information because cancer cells need not respond to the regulatory signals sent to them by the body.

Human society, living organisms, and the planets, stars, and galaxies that make up the universe all function by the orderly exchange of information. Social cues, metabolic signals, and bits of information carried by the force of gravity give rise to societies, organisms, and to the structure of the universe. Chaos, by contrast, is defined by the explosive growth and spread of random information. Memes used to be cultural practices that propagated because they benefited humanity. Accelerating flows of digital information have reduced memes to kitten photos on the Internet. When memes propagate so rapidly they lose their meaning, watch out!

joichi_ito's picture
Director, MIT Media Lab; Coauthor (with Jeff Howe), Whiplash: How to Survive Our Faster Future

Humans have diversity in neurological conditions. While some, such as autism, are considered disabilities, many argue that they are the result of normal variations in the human genome. The neurodiversity movement is an international civil rights movement that argues that autism shouldn’t be “cured” and that it is an authentic form of human diversity that should be protected.

In the early 1900s eugenics and the sterilization of people considered genetically inferior were scientifically sanctioned ideas, with outspoken advocates like Theodore Roosevelt, Margaret Sanger, Winston Churchill and US Supreme Court Justice Oliver Wendell Holmes Jr. The horror of the Holocaust, inspired by the eugenics movement, demonstrated the danger and devastation these programs can exact when put into practice.  

Temple Grandin, an outspoken spokesperson for autism and neurodiversity, argues that Albert Einstein, Wolfgang Mozart and Nikola Tesla would have been diagnosed on the “autistic spectrum” if they had been alive today. She also believes that autism has long contributed to human development and that “without autism traits we might still be living in caves.” Today, non-neurotypical children often suffer through remedial programs in the traditional educational system only to be discovered to be geniuses later. Many of these kids end up at MIT and other research institutes.

With the invention of CRISPR, editing the human genome at scale has suddenly become feasible. The initial applications that are being developed involve the “fixing” of genetic mutations that cause debilitating diseases, but they are also taking us down a path with the potential to eliminate not only autism but much of the diversity that makes human society flourish. Our understanding of the human genome is rudimentary enough that it will be some time before we are able to enact complex changes that involve things like intelligence or personality, but it’s a slippery slope. I saw a business plan a few years ago that argued that autism was just “errors” in the genome that could be identified and “corrected” in the manner of “de-noising” a grainy photograph or audio recording.

Clearly some children born with autism are in states that require intervention and have debilitating issues. However, our attempts to “cure” autism, either through remediation or eventually through genetic engineering, could result in the eradication of a neurological diversity that drives scholarship, innovation, arts and many of the essential elements of a healthy society. 

We know that diversity is essential for healthy ecosystems. We see how agricultural monocultures have created fragile and unsustainable systems. 

My concern is that even if we figure out and understand that neurological diversity is essential for our society, we will still develop the tools for designing away any risky traits that deviate from the norm, and that, given a choice, people will tend to opt for a neurotypical child.

As we march down the path of genetic engineering to eliminate disabilities and disease, it’s important to be aware that this path, while more scientifically sophisticated, has been followed before with unintended and possibly irreversible consequences and side-effects.

w_tecumseh_fitch's picture
Professor of Cognitive Biology, University of Vienna; Author, The Evolution of Language

Some memes are fortunate at birth: they represent clear new concepts, are blessed with a memorable name, and have prominent intellectual "parents" who ably shepherd them through the crucial initial process of dissemination, clarification and acceptance. "Meme" itself is one of those lucky memes. Many other memes, however, are less fortunate in one or more of these respects, and through no fault of their own languish, for decades or even centuries, in the shadows of their highborn competitors. 

The core evolutionary concept of "change of function" is one of these unfortunate memes. It was one of Darwin's key intellectual offspring in The Origin—"the highly important fact that an organ originally constructed for one purpose… may be converted into one for a widely different purpose." It played a central role in Darwin's thinking about the evolution of novelty, particularly when a new function requires an already complex organ (e.g. lungs for breathing or wings for flying).  

But unlike its more successful sibling memes, "natural selection" and "adaptation," this idea never even received a name from Darwin himself. It was left to later writers to coin the term "pre-adaptation," with its unfortunate implicit connotations of evolutionary foresight and pre-planning. And as "pre-adaptation" the meme languished until 1982, when it was adopted, spruced up, and re-baptized as "exaptation" by Stephen Jay Gould and Elisabeth Vrba. The new word's etymology explicitly disowns any teleological implications and focuses attention on the conceptually key evolutionary moment: the change in function.

To illustrate exaptation, consider the many useful organs that are embryologically derived from the branchial arches, which originated as stiffeners for the water-pumping and filtering pharynx of our invertebrate ancestors, and then developed into the gill bars of early fish (and still serve that function alone in a few surviving jawless fish, like lampreys, today). Each arch is complex, containing cartilage, muscles, nerves and blood vessels, and there are typically six pairs of them running serially down the neck. 

In the first exaptation, the front-most gill bars were converted into biting jaws in the first jawed fish, ancestral to all living terrestrial vertebrates, while the pairs of arches behind them kept supporting gills. But when these fishy forebears emerged fully onto land, and water-breathing gills became superfluous, there was suddenly a lot of prime physiological real estate up for grabs. And like cheap loft space, subsequent evolution has creatively come up with diverse new functions for these tissues.  

Numerous novelties stem, today, from the branchial arches. In humans, both our external ears and middle ear bones (themselves derived exaptively from early tetrapod jaw bones) are branchial arch derivatives, as is our tongue-supporting hyoid skeleton, and our sound-producing larynx. Thus virtually all the hardware used for speech and singing was derived, in multiple exaptive steps, and via multiple different physiological functions, from the gill bars of ancestral fish. Such innovative changes in function, shaped and sculpted to their new use by subsequent natural selection, play a central role in the evolution of many novel traits. 

As this example illustrates, and Darwin emphasized, change of function is everywhere in biology and thus deserves to be a core part of our conceptual toolkit for evolutionary thinking. But unfortunately our poor but deserving meme's bad luck was not to end in 1982, because Gould and Vrba were somewhat overzealous in their championing of the concept, and implied that any trait which had undergone a change in function deserves the name "exaptation." But, given how widespread change of function is, this move would rename many or even most adaptations as exaptations in one imperious terminological stroke. By pitting exaptation against adaptation, our poor meme was disadvantaged again, since no one is likely to give up on that term.

For exaptation to be a useful term, it should be interpreted (much as Darwin originally suggested) as one important phase in evolution: the initial stage in which old organs are put to new use, for which they will typically be only barely functional. Subsequent "normal" natural selection of small variants will then gradually shape and perfect exaptations to their new function, at which point they become ordinary adaptations again. We can thus envision an exaptive cycle as being at the heart of many novel evolutionary traits: first adaptation for some function, then exaptation for a new function, and finally further adaptive tuning to this new function. A trait's tenure as an exaptation should thus typically be brief in evolutionary terms: a few thousand generations should suffice for new mutations to appear and shape it to its new function. 

I believe exaptation to be a concept of central importance not only for bodily organs, but also for the evolution of mind and brain (e.g. for the evolution of language). Much of what we use our brains for in modern times represents a change in function (e.g. piloting airplanes from basic visually-guided motor control, or mathematical thinking from some basic precursor concepts of number and geometry). These very new cognitive abilities (and many others, like reading) are clearly exaptations, with no further shaping by natural selection (yet). But debate rages about whether older but still-recent human capacities like linguistic syntax have yet been tailored by natural selection to their current role (proposed cognitive precursors for linguistic syntax include hierarchical social cognition as seen in primates, or hierarchical motor control as seen in many vertebrates).  

But before these issues can be clearly discussed and productively debated, the long-suffering meme of exaptation must be clearly defined, fully understood, and more widely appreciated. Contemporary theorists' interpretations should fuse the best components of its chequered past: Darwin's concept and Gould and Vrba's term. Only then can exaptation finally take its rightful place at the high table of evolutionary thought.

diana_reiss's picture
Professor, Department of Psychology Hunter College; Author, The Dolphin in the Mirror

Anthropomorphism is the attribution of human characteristics (qualities, motivations, thoughts, emotions, and intentions) to non-human beings and even non-living objects. Writers and poets have freely used anthropomorphism in fictional and non-fictional narratives and, although this attribution has been a powerful and effective artistic device, in science it has been largely abandoned.

The successes of the reductionist approach in physics and chemistry motivated similar methodologies in biology and psychology. Anthropomorphism was considered an error in the context of scientific reductionism. However, it has remained curiously effective in certain areas of biology and psychology despite being controversial. For example, the very inspiration for this year’s Edge Question, Richard Dawkins’ introduction of the selfish gene meme, was a brilliant use of anthropomorphism (selfishness) to introduce a crucial concept in evolutionary theory.

Thus the view that the tendency to anthropomorphize is a source of error needs to be reconsidered. In his 1872 book The Expression of the Emotions in Man and Animals, Charles Darwin proposed an evolutionary continuity in the animal world that extended beyond morphology into the realms of behavior and the expression of emotion, and argued that emotions evolved via natural selection in other animals as well and may be a substrate of their behavioral experiences.

Darwin’s colleague George Romanes, a Canadian-English evolutionary biologist and physiologist who is considered the father of the field of comparative psychology, expressed similar views. Darwin’s and Romanes’ views about animal behavior and emotions were criticized as being anthropomorphic and anecdotal, and resulted in a scientific backlash that paved the way for the rise of behaviorism. Although not denying the existence of cognitive processes in humans and other animals, behaviorist epistemology denied the ability to study them, focusing instead on studying observable behavior. But behaviorism ultimately failed to account for the complexity and richness observed in both human and animal behavior, and by the mid-1950s the birth of the cognitive revolution was underway, leading to increased research on animal cognitive and emotional processes and their underpinnings.

In 1976, a small book entitled The Question of Animal Awareness by Rockefeller University zoologist Donald Griffin was published. In it, Griffin compares human brain processes with those of other animals: “Other vertebrate animals also have very complicated brains, and in some cases brains which appear to be physically very much like our own; this suggests that what goes on in animal brains has a good deal in common with what goes on in human brains; laboratory experiments on animal behavior provide some measure of support for this suggestion.”

Griffin’s small book seeded a new field of cognitive ethology, the marriage of cognitive science and ethology in which scientists asked questions about the mental states of animals based on their interactions with their environment.  

It is refreshing to see the reawakening of the use of anthropomorphic language as a tool towards understanding the cognitive life of other animals in the context of systematic studies of social behavior and our knowledge of the structure and complexities of the brains of other species.  And we shouldn’t fear or be fooled by the “ism” at the end of the term “anthropomorphism” as it is not a school of thought or an ideology.  

Rather, anthropomorphism provides an alternative “model” to help us interpret behavior. In the spirit of George Box’s famous dictum that all models are wrong but some are useful, anthropomorphism remains surprisingly useful in animal cognition studies. And it is useful because it allows us to understand and widen our appreciation of the similarities between other animals and ourselves.

Anthropomorphism provides a view of continuity between the mental life of humans and other species, in contrast with the often-touted discontinuities: those traits said to divide us from the rest of the animal world.

Frans de Waal has suggested: “To endow animals with human emotions has long been a scientific taboo. But if we do not, we risk missing something fundamental, about both animals and us.” An epistemology that allows scientists to use anthropomorphism as a tool to investigate and interpret behavior can enable us to see what the English anthropologist, systems thinker, and linguist Gregory Bateson described as the patterns that connect us. He suggests an anthropomorphic approach to understanding other species by posing the question, “What is the pattern which connects all the living creatures?” This question can be asked at the morphological, behavioral, and emotional level. By anthropomorphizing we may see evolutionary patterns that connect us to the rest of the animal world. And of course, the opposite of anthropomorphism is dehumanization, and we all know where that can lead us.

david_pizarro's picture
Associate Professor of Psychology, Cornell University

Why, in an age in which we have the world’s information easily accessible at our fingertips, is there still so much widespread disagreement between people about basic facts? Why is it so hard to change people’s minds about truth even in the face of overwhelming evidence?  

Perhaps some of these inaccurate beliefs are the result of an increase in the intentional spreading of false information, a problem exacerbated by the efficiency of the Internet. But false information has been spread pretty much since we’ve had the ability to spread information. More importantly, the same technologies that allow for the efficient spreading of false information also provide us with the ability to fact-check our information more efficiently. For most questions we can find a reliable, authoritative answer more easily than anyone has ever been able to in all of human history. In short, we have more access to truth than ever. So why do false beliefs persist?

Social psychologists have offered a compelling answer to this question: The failure of people to alter their beliefs in response to evidence is the result of a deep problem with our psychology. In a nutshell, psychologists have shown that the way we process information that conflicts with our existing beliefs is fundamentally different from the way we process information that is consistent with these beliefs, a phenomenon that has been labeled "motivated reasoning." Specifically, when we are exposed to information that meshes well with what we already believe (or with what we want to believe), we are quick to accept it as factual and true. We readily categorize this information as another piece of confirmatory evidence and move along. On the other hand, when we are exposed to information that contradicts a cherished belief, we tend to pay more attention, scrutinize the source of information, and process the information carefully and deeply. Unsurprisingly, this allows us to find flaws in the information, dismiss it, and maintain our (potentially erroneous) beliefs. The psychologist Tom Gilovich captures this process elegantly, describing our minds as being guided by two different questions, depending on whether the information is consistent or inconsistent with our beliefs: “Can I believe this?” or “Must I believe this?”  

This goes not just for political beliefs, but for beliefs about science, health, superstitions, sports, celebrities, and anything else you might be inclined (or disinclined) to believe. And there is plenty of evidence that this bias is fairly universal—it is not just a quirk of highly political individuals on the right or left, a symptom of the very opinionated, or a flaw of narcissistic personalities. In fact, I can easily spot the bias in myself with minimal reflection—when presented with medical evidence on the health benefits of caffeine, for instance, I eagerly congratulate myself about my coffee-drinking habits. When shown a study concluding that caffeine has negative health effects, I scrutinize the methods (“participants weren’t randomly assigned to condition!”), the sample size (“40 college-aged males? Please!”), the journal (“who’s even heard of this publication?”), and anything else I can.

A bit more reflection on this bias, however, and I admit that I am distressed. It is very possible that because of motivated reasoning, I have acquired beliefs that are distorted, biased, or just plain false. I could have acquired these beliefs all while maintaining a sincere desire to find out the real truth of the matter, exposing myself to the best information I could find on a topic, and making a real effort to think critically and rationally about the information I found. Another person with a different set of pre-existing beliefs may come to the opposite conclusion following all of these same steps, with the same sincere desire to know truth. In short, even when we reason about things carefully, we may be deploying this reasoning selectively without ever realizing it. Hopefully, just knowing about motivated reasoning can help us defeat it. But I do not know of any evidence indicating that it will.  

richard_muller's picture
Physicist, UC Berkeley; Author, Now: The Physics of Time

It is said, by those who believe in the devil, that his greatest achievement was convincing the rest of the world that he did not exist. There are two biases that play a similar role in our search for objective knowledge and our goal of making better decisions. These are optimism bias and skepticism bias. Their true threat comes from the fact that many people are unaware of their existence. And yet once recognized, you’ll see these biases everywhere.

Optimism is not only infectious but effective. Enthusiasm is often a requirement for success. During World War II, the U.S. Army Corps of Engineers adopted as a motto an aphorism from a French novelist: “The difficult we do quickly. The impossible takes a little longer.” 

Consider optimism and skepticism bias in the field of energy. Some say solar power is too expensive or too intermittent. Rather than address these directly, we can substitute optimism: let’s let loose American can-do and solve those challenges. (In the world today there are no longer any problems, only challenges.) Electric cars charge too slowly? Look at the computer revolution and lose your confining pessimism! Moore’s law will eventually apply to batteries for energy storage, just as it worked for electronics. Remember the Manhattan Project! Remember the Apollo mission! We can do anything if we put our minds to it. Yes we can!

While renewables are often treated with optimism, nuclear power is attacked with skepticism and pessimism. The problems are too tricky, too technical, too unknown. We can’t trust either industry or the U.S. government with accident safety. Nuclear energy is dangerous and intractable. And we’ll never solve the nuclear waste issue. But is nuclear power truly more intractable than solar, or is there a hidden bias choosing which arguments we bring forth?   

Optimism bias describes the optimism that derives, not from objective assessment, but from a strong liking or from a deeply felt hope. The opposite of optimism bias, pessimism bias, derives from dislike or fear. Closely related to pessimism bias, but sounding more thoughtful and positive and harder to counter, is skepticism bias. It is remarkably easy to be skeptical, about anything. Try it. The words will come trippingly to your tongue. The skeptic typically sounds more intelligent, more knowing, more experienced than does the pessimist. If you aren’t skeptical, then I have a bridge in Brooklyn I would like to sell you.

Closely related to optimism and skepticism bias is confirmation bias. This usually refers to the cherry picking of confirmatory facts, and the discarding of those inconvenient to the desired conclusion. Optimism bias is different. It is not the facts that are important, but the attitude, the positivism, the enthusiasm. If the technology is appealing, then of course we can do it! But if we are suspicious, then bring in all the arguments that were ignored for the appealing technology: the cost, the difficulty, the lack of trust in industry and authority. Be skeptical. 

Skepticism bias is currently affecting discussions of fracking, in particular the danger of leaked “fugitive” methane with its potent greenhouse warming potential. The bias takes the form of skepticism that we can fix the leaky pipes and machines; that task is portrayed as too difficult, and would require trust. And trust is a tricky and elusive concept, and can also be applied with bias. If you don’t like a solution, state that you don’t trust that it will be implemented properly.

Optimism bias is strong on the issue of electric cars. We hope they will work, so we support them. Yet skepticism/pessimism is used against their competitor: 100-mpg conventional autos. Are such vehicles truly as difficult as some argue? We learn in physics that, in principle, horizontal transport need not take any energy. But the same people who are optimistic about batteries are often pessimistic about high-mileage gasoline autos. Is that justified?

Both optimism and skepticism bias can be hidden under conviction, an easy substitute for objective analysis. “I just don’t believe that ____” (fill in the blank). Biases often invoke trust, or the lack of it. Conviction can be psychologically compelling. Optimism bias drove the original flight to the Moon, but also the largely useless and, in my mind, failed space station. Optimism bias gave rise to the U.S. government’s wars on both cancer and poverty.

In science, skepticism bias infects referee reports for proposed experimental work. Luis Alvarez’s major projects were all begun before he had solutions to all the technical issues. He was optimistic, but his optimism was based on his own evaluation of what lay within his capabilities. It could be said that he had earned the right to be optimistic by dint of his past successes. His optimism was not a bias; it was based on his capability to address new issues as they arose. Optimism can be realistic. And so can pessimism. And skepticism.

The heart of science is in overcoming bias. The difference between a scientist and a layman can be summarized as follows: a layman is easily fooled and is particularly susceptible to self-deception. In contrast, a scientist is easily fooled and is particularly susceptible to self-deception, and knows it. The “scientific method” consists almost exclusively in techniques used to overcome self-deception. The first step in accomplishing this is to recognize that biases exist. The danger of optimism and skepticism bias (like the danger of the devil, for people who believe in such things) is that so many people are unaware of their existence.

kai_krause's picture
Software Pioneer; Philosopher; Author, A Realtime Literature Explorer

Humans have fewer than two legs... 
...on average. 

It takes a moment to realize the logic of that sentence... 

Just a single "one-legged pirate" moves down the average for all of mankind, to just a fraction under two. Simple and true—but also counter-intuitive. 

A variation moving the average upwards instead: 

Billionaire walks into a bar. 
And everyone is a millionaire... 
...on average.
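
A few lines of code make the pirate-and-billionaire point concrete (the net-worth figures below are invented purely for illustration):

```python
from statistics import mean, median

# Hypothetical net worths of five bar patrons, in dollars (illustrative values only)
patrons = [20_000, 35_000, 50_000, 60_000, 80_000]
print(mean(patrons), median(patrons))   # mean 49000, median 50000

patrons.append(1_000_000_000)           # a billionaire walks into the bar
print(mean(patrons), median(patrons))   # mean 166707500, median 55000.0
```

Everyone in the room is now a millionaire "on average," while the median barely moves, which is why a single outlier can make a summary statistic say something true of almost nobody.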

That's all rather basic statistics, but as obvious as it might seem, amid the abundance of highly complex concepts and terms in this essay collection, many scientific truths are not easily grasped in everyday life, and the basic tools of understanding are woefully underutilized by the general public. They are badly taught, if indeed they are part of the curriculum at all.

Everyone today is totally surrounded by practical math. Leaving high school, it should be standard-issue knowledge to understand how credit cards work, compounding interest, or mortgage rates. Or percentage discounts, goods on sale and, even more basic, a grasp of numbers in general: millions, billions, trillions. How far the moon is from earth, the speed of light, the age of the universe.
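
As one concrete case, here is a short sketch of compounding; the balance and interest rate are hypothetical, chosen only to show the shape of the arithmetic:

```python
# Hypothetical: $1,000 carried for a year on a card charging 24% APR, compounded monthly
balance, monthly_rate = 1_000.0, 0.24 / 12

for month in range(12):
    balance *= 1 + monthly_rate   # each month, interest is charged on the new, larger balance

print(round(balance, 2))          # 1268.24, more than the "obvious" 1240.00 of simple interest
```

The gap between compound and simple interest only widens with time, which is exactly the kind of everyday arithmetic being argued for here.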

Following the news, do people really get what all those zeros mean in the National Debt? Or the difference between "Gross National Product" versus "Gross Domestic Income"? 

Do they know the population of the US versus, say, Nigeria? (Hint: Nigeria is projected to overtake the US as the third-largest nation, reaching 400 million by 2050.)

If you ask around among your family and friends, who could give you a rational description of "Quantum Computing," the "Higgs Boson," or how and why "Bitcoin mining" works?

Millions of people are out there playing lotteries. They may have heard that the odds are infinitesimal, but often the inability to deal with such large numbers or tiny fractions turns into an intuitive reaction. I have had someone tell me in all honesty:  

"Yes it is a small chance, but I figure it's like '50-50'. Either 'I win'.... or 'I don't.'" 

Hard to argue with that logic. 
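
For comparison with that "50-50" intuition, here is the arithmetic for a hypothetical 6-out-of-49 lottery (the format is assumed purely for illustration; real lotteries differ in detail):

```python
from math import comb

tickets = comb(49, 6)      # number of distinct 6-number combinations in a 6-of-49 draw
print(tickets)             # 13983816
print(1 / tickets)         # about 7e-08 per ticket, a very long way from 0.5
```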

And yet—the cumulative cost of such seemingly small items is tremendous. Smoking a pack a day can add up to more than leasing a small car. Conversely, the historically low interest rates do allow leveraging possibilities. Buying a house, doing your taxes: it all revolves around a certain comfort level with numbers and percentages, which a surprisingly large portion of us shrug off or handle haphazardly.

Not quite getting the probabilities of throwing dice and drawing cards is what built Vegas, but what about the odds of rare diseases or the chances of accidents or crime?

The general innumeracy (nod to Hofstadter and Paulos) has far-reaching effects. 

There is a constant and real danger of being manipulated, be it by graphs with truncated Y axes, pie charts that go beyond 100%, news of suddenly greater chances of dying from some disease, claims about the effectiveness of a medication, or so many of the tiny fine-print notes in advertising. It requires a minimum level of awareness to sort through these things. Acquiring common sense needs to include math.

So which scientific term or concept ought to be more widely known? 

I would plead to start with a solid foundation for the very basics of science and math and raise the awareness, improve the schooling, better the lives of our kids.... on average. 

janna_levin's picture
Professor of Physics and Astronomy, Barnard College of Columbia University; Author, Black Hole Survival Guide; Director of Sciences, Pioneer Works

Complexity makes life interesting. A universe of just Hydrogen is quite bland, but the helpful production of Carbon in stellar cores allows for all kinds of chemical connections.  A universe of just two dimensions is pretty limited, but live in at least three and enjoy the greater range of motion and possible spatial permutations. Sitting on a bench in my friend’s garden in California, there’s a lot to look at. The visual information filling my field of view is incredibly complicated. The dry winter leaves trace vortices in the air’s motion. Plants respire and we breathe and the neural connections fire and it’s all complex and interesting. The physicist’s job is to see through the overwhelming intricacy and find the rallying, organizing principle.  

Everything in this garden, from the insects under the rocks to the blue dome overhead to the distant stars washed out by the sunlight, can be traced to a remarkably lean origin in a big bang. Not to overstate the case. There’s much we don’t understand about the first trillionth of a trillionth of a trillionth of a trillionth of a second after the inception of our universe. But we can detail the initial 3 minutes with decent confidence and impressive precision.

Our ability to comprehend the early universe that took 13.8 billion years to make my friend’s yard is the direct consequence of the well-known successes of unification. Beginning with Maxwell’s stunning fusion of electricity and magnetism into one electromagnetic force, physicists have reduced the list of fundamental laws to two. All the matter forces—weak, electromagnetic, and strong—can be unified in principle (though there are some hitches). Gravity stands apart and defiant so that we have not yet realized the greatest ambition of theoretical physics: the theory of everything, the one physical law that unifies all forces, that pushes and prods the universe to our current complexity. But that’s not the point. 

The point is that a fundamental law is expressible as one mathematical sentence. We move from that single sentence to the glorious Rube Goldberg machine of our cosmos by exploiting my favorite principle, that of least action. To find the curves in spacetime due to matter and energy, you must find the shortest path in the space of possibilities. To find the orbit of a comet around a black hole, you must find the shortest path in the curved, black-hole spacetime.

More simply, the principle of least action can be stated as a principle of least resistance. If you drop a ball in mid air, it falls along the shortest path to the ground, the path of least resistance under the force of gravity. If the ball does anything but fall along the shortest path, if it spirals around in widening loops and goes back up in the air, you would know that there are other forces at work—a hidden string or gusts of air. And those additional forces would drive the ball along the path of least resistance in their mathematical description. The principle of least action is an old one. It allows physicists to share the most profound concepts in human history in a single line. Take that one mathematical sentence and calculate the shortest paths allowed in the space of possibilities and you will find the story of the origin of the universe and the evolution of our cosmological ecosystem.
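
For readers who want the "one mathematical sentence" written out, the textbook form of the principle (standard classical mechanics, sketched here under the usual assumptions rather than quoted from the essay) is that a physical path $q(t)$ extremizes the action:

$$ S[q] = \int_{t_1}^{t_2} L(q,\dot q)\,dt, \qquad \delta S = 0 \;\Longrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0 $$

For the dropped ball, taking $L = \tfrac{1}{2}m\dot q^{2} - mgq$ gives $\ddot q = -g$, the familiar free fall; any looping detour would only satisfy the equations if an extra term (a string, a gust of air) were added to $L$.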

jimena_canales's picture
Writer and faculty at the Graduate College, University of Illinois-Urbana Champaign; Author, The Physicist and the Philosopher

Carl Sagan spoke too soon when he spoke about demons. Modern science, he told his numerous followers, banished witches, demons and other such creatures from this world. Simply spread flour on the floor and check for suspicious footprints—this kind of reasoning, he claimed in The Demon-Haunted World, characterizes sound, scientific thinking.

So why is Maxwell’s demon still on the frontlines of science? Since he was first conjured in 1874, modern day inquisitors have valiantly chased after him with math and physics instead of holy water. But rather than going the way of the phlogiston and the ether, he has emerged unscathed and is now a fixture in standard physics textbooks. Lest you think he is real, let me tell you he is not. But a demon he is in spades. He sorts, and sorting is at the origin of sorcery.

Neat-fingered and vigilant, he can reverse time, momentarily violate the second law of thermodynamics, power a perpetual motion machine, and generate pockets of hellish heat in substances that should otherwise reach temperature equilibrium. His smarts are debatable, matching those of a virtuoso piano player or those of a humble switchman on railway tracks. No offense is taken if he is compared to a simple valve. On the contrary, it means he can do a lot of work with minimum effort. Frequently portrayed as holding a cricket bat (to send molecules to-and-fro), manning a trapdoor (to let them pass or keep them out) and holding a torch, a flashlight or a photocell (to be able to see in the dark), this minuscule leviathan can wreak havoc.

He was named after the Scottish scientist James Clerk Maxwell, known for his theory of electromagnetism, by his colleague William Thomson (aka Lord Kelvin). Almost immediately after his public debut, he was exorcized. Sightings of his demonic activity were brushed away as statistical anomalies or insignificantly tiny—no need to worry, we were told.

But science can, and often does, turn imaginary beings into real things. Maxwell’s demon is a case in point. Since he was first conjured, scientists have tried to bring him to life by repurposing twisted metal, ratchets and gears, molecules, enzymes and cells, and even electronics and software. In 1929 the physicist Leo Szilard published a paper about him in the prestigious Annalen der Physik (which reached fame as “Szilard’s exorcism”) before pairing up with Einstein to patent a refrigerator that would momentarily and locally reverse entropy. Another milestone took place in 2007, when an article in Nature titled “A Demon of a Device” described some of the first successful molecular nanomotors. At that time, Sir Fraser Stoddart saw it as auguring an entirely new approach to chemistry; he went on to win the Nobel Prize in Chemistry in 2016. Artificial intelligence is another technology with a connection to Maxwell’s being.

Maxwell’s demon is neither the first nor the last, but he is certainly the best known of all science’s demons. Descartes’s demon preceded him by more than two centuries. Like a master illusionist who can take over your sense of reality by throwing a cloak over your head, this genie can intercept your sense-impressions and take over from there. He remains the patron saint of virtual reality. Laplace’s demon comes next: the master calculator who can relate every particle to the laws of motion and who can know the past and the future has inspired advances in supercomputing and Big Data. The so-called “colleague of Maxwell’s demon” appeared a few decades after the original one. His arriviste career included travelling faster than light and helping to explain quantum entanglement. By sharing with biblical shedim the power of instantaneous locomotion, he can intercept messages, mess with causality, and fiddle with time. These shape-shifting masters of disguise are getting smarter and more powerful. By taking on the name of the scientists who conjured them, they dutifully fulfill their patronymic destiny, reappearing regularly as spick-and-span newborns with the sagacity of old men.

At the turn of the twentieth century the French mathematician Henri Poincaré claimed he could almost see Maxwell’s demon through his microscope, calculating that with a little patience (of millions of millions of centuries) his mischief would be evident to everyone. While some deemed him too small to really matter for us, others noted that small causes were known to produce great effects, like the spark or the pebble starting an avalanche. Other scientists argued that the universe as a whole was so large that he could confidently reign like a master in a confined territory. For Norbert Wiener, founder of cybernetics, it was clear that “there is no reason to suppose that metastable demons do not in fact exist.” The physicist Richard Feynman wrote an entire article explaining why, when and how he would eventually tire. According to Isaac Asimov, everything that appeared to us as if arriving by chance was really because “it is a drunken Maxwell’s demon we are dealing with.”

It is perhaps most ironic that the philosopher Sir Karl Popper, known for describing the process of scientific progress as one based on hypothesis-creation and falsification, admired his ability to survive every assassination attempt against him: “Although innumerable attempts have been made on his life, almost from the day he was born, and although his non-existence has frequently been proved, he will no doubt soon celebrate his hundredth birthday in perfect health and vigor.”

Demons are here to stay. The reassuring predictions of scientists such as Carl Sagan have proved premature. In our hurry to make our world modern and base it on purely secular reason, we have failed to see the demons in our midst.

koo_jong-a's picture
Artist

The development of meaning in the cultural environment, as in natural phenomena, has been translated, regenerated, and repeated under the creative process of our lives, by compaction and expansion.

The world of ousss is one of them.

These have been developed throughout the public sector, the private sector, and the scientific, literary, and musical sectors, even though they exist only in one's poetic consequences.

A society, or a part of nature, repeats itself by shrinking and collecting the power to grow, and therefore expansion occurs, just as a plant needs to make a gnarl in order to grow and needs to collect the complexity of its elements to make itself, followed by the further growth of the branch.

The Middle Ages were a period of compaction in which the depth of the pre-Renaissance was made; and the twentieth century’s historic immigration from European countries to the United States flooded America with an uncountable quantity of intellectual people, a movement that preconditioned the creation of most of the inventions we use every day: the Internet, electricity, the light bulb, the telephone, Google, the car, the airplane, and so on.

The astronomer Vesto Slipher was one of the first to suggest that the universe is bigger than our galaxy.

In the universe, a star begins as gases; different quantities of the elements crash into each other under their own internal pressure and, billions of years later, become certain heavier chemical elements, gaining gravity once a sufficient quantity has accumulated. When a star completes its formation by gravitational collapse, it enters the main sequence and starts to burn hydrogen in its core, from which its mass luminosity derives.

Mass luminosity, in the universe or in a society, occurs only when a certain formation has been made. In a period like ours, we are willing to burn, to shine, for the further developments that are still far off.

nina_jablonski's picture
Biological Anthropologist and Paleobiologist; Evan Pugh University Professor of Anthropology at Pennsylvania State University

The term “media richness” was first described in the context of media richness theory (MRT) by Richard Daft and Robert Lengel in 1986. Media richness describes the density of learning that can be conveyed through a specified communications medium. Face-to-face communication is the richest medium according to MRT because it allows for the simultaneous interpersonal exchange of cues from linguistic content, tone of voice, facial expressions, direction of gaze, gestures, and postures. MRT was developed prior to the rise of electronic communication media in order to help managers in business contexts decide which medium was most effective for communicating a message. Rich media like conversations and phone calls were deemed best for non-routine messages, while lean media like unaddressed memoranda were considered acceptable for routine messages. In the last two decades, media richness has been extended to describe the strengths and weaknesses of new media from email to websites, video conferencing, voicemail, and instant messaging. Media richness deserves to be more widely known because people make choices throughout a day about communications media often without considering the consequences of the choice of medium, and the goodness of fit between the content of a message and the medium through which it is being communicated.

Humans evolved in media-rich contexts. For hundreds of thousands of years, humans lived in stable, tightly knit social groups in which face-to-face communication was the only mode of communication. Until about 5,000 years ago, the concept of media choice didn’t exist because—apart from smoke signals—it was face-to-face or nothing. Articulate speech and language complemented the rich repertoire of vocalizations, facial expressions, glances, stares, gestures, and postures upon which our ancestors relied, thus creating a rich and potentially highly nuanced communications repertoire. Within small groups, people attended closely to what was said, who said it, and how it was said. Pleasantries were exchanged, advice was given, loving whispers traded, and admonitions delivered with a full sensory armada of verbal content, tone of voice, measured eye contact, gesture, and posture. People were mostly bathed in conversation, reassured by touch, verbally upbraided for unreasonableness, publicly shamed by calculated stares, and physically reprimanded for antisocial behavior. Although communication between people has probably never been without the potential for guile and social manipulation, deception was hard to pull off because information flowed through multiple visual, auditory, and even tactile and olfactory channels. Communication had immediate effects and consequences. One-to-many communication reached only as far as the human voice could carry.

The consequences of media richness and the concept of media choice became relevant for people only with the introduction of writing in early agricultural societies. Initially developed to facilitate clerical and payroll functions, writing was soon marshaled in support of military, political and religious causes, and much later for the exchange of personal information and the composition of poetry and philosophical treatises. Communication through writing was augmented early in the 20th century by modes of remote voice communication (telephone, microphone, and radio), and later by combined visual and auditory modes such as movies, television, and websites which provided unprecedented scope for unidirectional communication. Historians and scholars of communication theory talk about tradeoffs between the richest face-to-face channels and the leaner modalities of email, voicemail, and text messaging that provide fewer cues, slower feedback, and limited scope for retribution. Media richness has been criticized in recent years because it fails to predict why people choose lean over rich media, especially in situations where a richer medium clearly would be more effective. Many will have experienced the shock of an email informing about the death of a loved one or have been anguished by a misplaced comma or inappropriate emoticon in a text message rejecting an overture of friendship or love. This is exactly why media richness is important and interesting from an evolutionary perspective.

In a world of unconstrained media choice, people often choose leaner and functionally unidirectional modalities because they want to make a point, or at least think that they want to make a point. A need for incessant and immediate connection (or just the need to save money) can provoke the blurting of something through a cheap, low-grade channel rather than waiting for the chance to use a richer one. Leaner media also carry lower risk of rejection or immediate retribution. Regardless of the reason, we now live in a world where people are opting for leaner modes of communication because they have been socialized inadequately in richer ones and are functionally ignorant of the concept of media richness. The scope for misunderstanding has never been greater, while the opportunities for providing physical comfort and solace or for exacting meaningful and appropriate retribution have never been more limited. We still yearn to see one another, but contacts often consist more of broadcast performances of static faces and less of breathing exchanges of shared wonder, love, tribulation, and loss.

Like all primates, humans have nurtured harmonious relationships and maintained social cohesion by being intensely good, high-bandwidth communicators. Media richness is a concept worthy of wider propagation because it will help ensure the future of individuals and societies in times of increasing individual social isolation, electronic bullying, touch aversion, personal anxiety, and social estrangement.

linda_wilbrecht_1's picture
Associate Professor, UC Berkeley Department of Psychology and Helen Wills Neuroscience Institute

If you moved from the United States to France as a child you would likely become fluent in French in a short period of time, but if you moved to France as an adult you might never become fluent. This difference in the capacity to learn language exists because there are sensitive periods in development when the brain is particularly plastic and able to receive and retain information with greater efficacy.

There is a well-established field of sensitive period biology that seeks to explain how people learn to speak, how birds learn to sing, and how our sensory systems wire up among other things. The field has been particularly successful in explaining how the brain coordinates the information streaming in from the two eyes to allow binocular vision useful for depth perception. In the last century, it was discovered that when a person was born with a “lazy” eye or had their vision clouded in one eye by a cataract then their binocular vision would be impaired for a lifetime. However, if a correction was made in early life, then the brain and binocular vision could recover to develop normally. This human phenomenon can be modeled in rodents by closing one eye in early life. Extensive study of this model now provides the basis for our understanding of the cellular mechanisms regulating sensitive periods across the cortical regions of the brain.

You might conclude from the brief descriptions above that early experiences are simply the most powerful. The juvenile brain is, in general, more plastic than the adult brain. However, the often glossed over details show that the younger brain is not always more sensitive to experience than the older brain. When the biology can be studied in carefully controlled laboratory experiments, we find that periods of greater sensitivity are often delayed, perhaps even timed, until the incoming experience is appropriate to sculpt the brain. For example, the peak of sensitive period plasticity for the development of binocular vision occurs about one month after birth in rodent brains, which is more than a week after eye opening. Scientists are still working on the why and how. Nonetheless, it is clear that the brain can and does hold highly sensitive plasticity under wraps and then unveils it when appropriate. It is thought that years of evolution have sculpted brain development not only to be experience-dependent, but also carefully timed such that it is experience-expectant. That is, dormant until needed.

What this means for the big picture is that it is likely that human development involves a staggered sequence of undiscovered sensitive periods stretching late into the second or even third decade of life. Hence, we should be on the lookout for “sleeper” sensitive periods. For example, there may be teenage social sensitive periods where we learn to interact with peers or cognitive sensitive periods where we sculpt our decision making style. These sensitive periods might be timed to overlap with important transitions like when we leave our parent’s protection to explore the world, when we go through puberty, when we become a parent. The boundaries may be sharp triggered by events like puberty onset, or gradual slopes that rise and fall with age and experience. We do not yet know when, where and how these more subtle cognitive and emotional sensitive periods may work.

It may be easier to see evidence of complex sensitive periods in development in other species. Life history ecologists have identified a wide array of non-human species that adapt their phenotype according to the sampled statistics of their particular environment. For example, if developing crickets are exposed to spiders in the environment, then the adult crickets are better at surviving where there are spiders. If food is scarce during development for a species of mite, then an alternate body type and foraging strategy may be used in adulthood. However, less is known about the neurobiology of these phenomena in non-mammalian species.

Sensitive period biology may, in future, provide important insights into understanding and preventing mental illness. Sensitive period plasticity provides “adaptation” to experience, but this adaptation does not ensure the outcome is optimal or even favorable. For example, negative experience during a sensitive period could generate a persistent negative bias in the processing of events, potentially leading toward mental illness. It is currently known that negative experiences do have different impacts at different ages in humans and animal models, but we do not know with great clarity when it is better or worse to endure negative experience and why.

Sensitive period biology may also influence behaviors commonly thought to make up a person’s personality. Experience at different times might alter someone’s appetite for risk, their tolerance for delayed gratification, or kindle their interest in music. It is possible that the experience of poverty, even during a brief window of development, could alter the brain and behavior for a lifetime. When money is available for educational or public health intervention, knowledge of sensitive period biology should become a central aspect of strategy. If sleeper sensitive periods exist in late childhood or the teenage years, then those years may be especially efficient targets for intervention.

eldar_shafir's picture
William Stewart Tod Professor of Psychology and Public Affairs, Princeton University; Co-author, Scarcity

Here is a trivial fact about our mental lives. So trivial, it is rarely even noticed. And it is hard to talk about without sounding sophisticated. The post-modernists have explored versions of it, but the notion I mean to promote—the notion of “construal”—is painfully obvious. It refers to the fact that our attitudes and opinions and choices pertain to things not as they are in the world, but as they are represented in our minds.

Economic theorizing presumes that people choose between options in the world: Job A versus Job B, or Car A versus Car B. From the point of view of a psychologist, however, that presumption is really quite radical: When a person is presented with a choice between options A and B, she chooses not between A and B as they are in the world, but rather as they are represented by the 3-pound machine she carries behind the eyes and between the ears. And that representation is not a complete and neutral summary, but rather a selective and constructed rendering—a construal.

There is, of course, no way around it. The behaviorists tried to avoid it by positing that behaviors were direct responses to stimuli, that mental life didn’t interfere in relevant ways. But, clearly, that’s not the case. We now know a lot about our rich mental lives, which shape and mold what we experience, making construal far from neutral. A food described as 10% fat is less appetizing than the same food described as 90% fat free. A risky venture that entails some lives saved and others lost is a lot more appealing when our attention is directed towards the lives saved than towards the lives lost. Our attempts to elicit empathy for global catastrophes are ineffective—a phenomenon referred to as “psychic numbing”—partly because our construal processes are unable to trigger differential indignation for outcomes as a function of their gravity.

Visual illusions provide a compelling illustration, where our experience of the object simply does not conform to the actual object in the world. Susan Sontag famously observed that “to photograph is to frame, and to frame is to exclude.” In fact, the mind is a lot messier than a camera. We don’t merely choose where to look—our minds influence what we see. And they influence what we see both when we think fast and when we think slow; both when we respond impulsively, without conscious thought, and when we deliberately choose what to take seriously and what to ignore.

Construal lies at the core of behavioral economics. Violations of standard rationality assumptions arise not from stupidity, computational limitations or inattention, but from the simple fact that things in the world, depending on how they are described or interpreted, get construed differently, yielding inconsistent judgments and preferences.

Real-world options, like automobiles, houses, job offers, potential spouses, all come in multiple attributes. How much weight we give each attribute is largely a function of where our attention is directed, our pet theories, what we expect or wish to see, the associations that come to mind. One rule of construal is that things are judged in comparative rather than absolute terms. How water feels to the hand depends on whether the hand had previously been in colder or warmer water. In the delivery room, a doctor’s decision whether to perform a caesarean section depends on the gravity of immediately preceding cases.

Knowledge in the form of scripts, schemas, and heuristics serves to make sense of stimuli in ways that transcend what is given. What we experience is determined not simply by the objective building blocks of the situation, but by what we know, care about, attend to, understand, and remember. And what we care about, attend to, and remember is malleable. In one study, participants were invited to play a Prisoner’s Dilemma game, referred to as either the Wall Street game or the Community game. While the payoffs and set-up were identical, the mere label altered participants’ construal, changing their tendency to cooperate or to defect.

Psychological costs and subsidies also enter people’s construal, and are quite different from the financial costs and subsidies policy makers are typically concerned with. In one well-known study, when fines were introduced for picking up children late from daycare, parents were more likely to pick up their children late. Parents who had previously felt bad—had incurred a psychic cost—for showing up late now construed the fine as a contract—paying a fee entitled them to late pick-up.

Psychologists see construal as an integral feature of human cognition, but if your aim is to influence behavior, it presents a difficult challenge. The difference between success and failure often boils down to how things are construed. Although similar from an accounting point of view, the Earned Income Tax Credit (EITC)—in contrast with TANF and other forms of welfare—has been an effective form of government assistance. This is attributed to the construal of the EITC as a just reward for labor, delivered in the form of a tax refund check, rather than a separate assistance payment. It is seen as an entitlement rather than welfare, designating beneficiaries as taxpaying workers, rather than “on the dole.”

Construal needs to be more widely appreciated because so much thinking and intuition, in policy and in the social sciences, tends to focus on actual circumstances as opposed to how they are construed. The words that make up this essay are just words. It’s partly their construal that will make some readers think they’re useful and others think they’re of little use. 

maximilian_schich's picture
Associate Professor in Arts and Technology, The University of Texas at Dallas

Commonly, confusion denotes bewildering uncertainty, often associated with delirium or even dementia. From the confusion of languages in the biblical book of Genesis to Genesis the band, broader audiences mostly encounter negative aspects of confusion. This short text aims to shed a different light on the concept: confusion that can be both positive and negative, sometimes both at the same time; confusion as a subject of scientific interest; confusion as a phenomenon that can't be ignored, that requires scientific understanding, and that needs to be designed and moderated.

A convenient tool to measure confusion in a system is the so-called confusion matrix. It is used in linguistics and computer science, in particular machine learning. In principle, the confusion matrix is a table where all criteria in the dimension of rows are compared to all criteria in the dimension of columns. A simple example is to compare all letters of the alphabet spoken by a native English speaker with the letters actually perceived by a German speaker. An English e will often be confused with the German i, resulting in a higher value in the matrix where the e row crosses the i column. Ideally, of course, letters are only confused with themselves, resulting in high values exclusively along the matrix diagonal. Actual confusion, in other words, is characterized by patterns of higher values off the matrix diagonal.
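To make the tallying concrete, here is a minimal Python sketch; the spoken and perceived letter pairs are invented for illustration, not taken from any real study.

```python
from collections import Counter

# Invented letter pairs: what was spoken versus what a listener wrote down.
spoken    = ["e", "e", "e", "a", "a", "i", "i", "o"]
perceived = ["i", "e", "i", "a", "e", "i", "i", "o"]

labels = sorted(set(spoken) | set(perceived))
counts = Counter(zip(spoken, perceived))   # (row, column) -> tally

# Rows are the spoken letters, columns the perceived letters.
print("   " + "  ".join(labels))
for row in labels:
    print(row + "  " + "  ".join(str(counts[(row, col)]) for col in labels))

# Everything off the diagonal is confusion; here the (e, i) cell dominates it.
off_diagonal = sum(n for (r, c), n in counts.items() if r != c)
print("total confusions:", off_diagonal)
```

Perfect perception would leave every off-diagonal cell at zero; the interesting structure lives in the cells away from the diagonal.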

Unfortunately, one may say, the use of the confusion matrix is still mostly governed by what Richard Dawkins calls "the tyranny of the discontinuous mind." Processing the confusion matrix, scholars mostly derive secondary measures to quantify type-I and type-II errors, i.e. false positives and false negatives, as well as a number of similarly aggregate measures. In short, the confusion matrix is used to make classification by humans and artificial intelligence less confusing. A typical, and of course very useful, example is to compare a machine classification of images with the known ground truth. No doubt, quantifying the confusion of ducks and alligators, or of pedestrians and street signs, is a crucial application that can save lives. Similarly, it is often useful to optimize classification systems in order to minimize the confusion of human curators. A good example would be the effort of the semantic web community to simplify global classification systems, such as the UMBEL ontology or the category system in Wikipedia, to allow for easy data collection and classification with minimal ambiguity. Nevertheless, the almost exclusive focus on optimization by minimizing confusion is unfortunate, as perfect discreteness of categories is not desirable in many real systems, from the function of genes and proteins to individual roles in society. Too little confusion between categories or groups and the system is in essence dead. Too much confusion and the system is overwhelmed by chaos. In a social network, a total lack of confusion annihilates any basis for communication between groups, while complete confusion would be equivalent to a meaningless cacophony of everything meaning everything.

Network science is increasingly curious about this situation, dealing with confusion through the concept of overlap in community finding. Multi-functional molecules, genes and proteins, for example, act as drugs and drug targets, where confusion needs to be moderated in order to hit the target while minimizing unwanted side effects. Similar situations arise in social life. Only recently has it become possible in network science to deal with such phenomena in an efficient way. Network science initially focused mostly on identifying discrete communities, as finding them is computationally much simpler. In such a perfect world where all communities are discrete, there is no confusion, or, one should better say, confusion is ignored. In such a perfect world the confusion or co-occurrence matrix can be sorted so that all communities form squares or rectangles along the matrix diagonal. In a more complicated case, neighboring communities overlap, forming sub-communities in between two almost discrete communities, say all people belonging to the same company while also belonging to the same family. It is easy to imagine more complicated cases. At the other end of the spectrum we find all-out complex overlap, which is hard to imagine or visualize in terms of sorting the matrix. It may well be true, however, that complex overlap is crucial for the survival of the system in question.

There is a known case where confusion by design is desirable. A highly cited concept in materials science, introduced 23 years ago in a news item, Greer's so-called principle of confusion applies to the formation of metallic glass. In short, the principle states that a glass is easier to form from a greater variety of metal atoms, because the resulting impurities give the material less chance to crystallize. This allows for larger objects of glass with interesting material properties, such as being stronger than steel. The benefit of greater confusion is counter-intuitive, as it becomes increasingly harder to determine the material properties of a glass the larger the variety of metals involved. It would not be surprising to see something like Greer's principle of confusion applied to other systems as well.

While such questions await solution, as a take-home message we should expect critical amounts of confusion in many real-life systems, with the optimum lying somewhere between perfect discreteness and perfect homogeneity, but identical with neither. Further identifying, understanding, and successfully moderating patterns of confusion in real systems is an ongoing challenge. Solving this challenge is likely essential in a great variety of fields, from materials and medicine to social justice and the ethics of artificial intelligence. Science will help us to clarify, if possible to embrace, and if necessary to avoid confusion. Of course, we should use caution, as the moderation of confusion can be used for peace and war, much like the rods in a nuclear reactor—with the difference that switching off confusion in a social system may be just as deadly.

brian_knutson's picture
Professor of Psychology and Neuroscience; Stanford University

Five years from now, what will your future self think of your current self? Will you even be the same person?

Over a century before the birth of Christ, the Bactrian King Milinda challenged the Buddhist sage Nagasena to define identity. The Buddhist responded by inquiring about the identity of the king’s chariot: “…is it the axle? Or the wheels, or the chassis, or reins, or yoke that is in the chariot? Is it all of these combined, or is it apart from them?” The king was forced to concede that his chariot’s identity could not be reduced to its pieces. Later, the Greek scholar Plutarch noted that the passage of time further complicates definitions of identity. For example, if a ship is restored piece by piece over time, does it retain its original identity? As with Theseus’ reconstructed ship, the paradox of identity applies to our constantly regenerating bodies (and their resident brains). Flipping in time from the past to the future, to what extent can we expect our identity to change over the next five years?

Considering the future self to be an entirely different person could have serious consequences. Philosopher Derek Parfit worries that people who regard the future self as distinct should logically have no more reason to care about that future self than a stranger. By implication, they should have no reason to save money, maintain their health, or cultivate relationships. But perhaps there is a middle ground between self and stranger. To the extent that someone imagines their future self to be similar to their present self, this sense of “future self-continuity” might predict their willingness to at least consider the interests of the future self.

Here, I argue that beyond rekindling philosophical debates, future self-continuity is a critical (and timely) scientific concept, for a number of reasons. First, future self-continuity can be measured. Remarkably, neuroimaging research suggests that a medial part of the frontal cortex shows greater activity when we think about ourselves versus strangers. When we think about the future self, the activity falls somewhere in between. The closer that activity is to the current self, the more willing individuals are to wait for future rewards (or to show less “temporal discounting” of future rewards). More conveniently, researchers can also simply ask people to rate how similar or connected they feel to their future selves (e.g., in five years). As with neural measures, people who endorse future self-continuity show less temporal discounting, and have more money stashed in their savings accounts.

Second, future self-continuity matters. As noted, individuals with greater future self-continuity are more willing to wait for future rewards—not just in the laboratory, but also in the real world. Applied research by Hal Hershfield and others suggests that adolescents with greater future self-continuity show less delinquent behavior, and that adults with greater future self-continuity act more ethically in business transactions. Future self-continuity may even operate at the group level, since cultures that value respect for elders tend to save more, while nations with longer histories tend to have cleaner environments.

Third, and most importantly, future self-continuity can be manipulated. Simple manipulations include writing a letter to one’s future self, whereas more sophisticated interventions involve interacting with digital renderings of future selves in virtual reality. These interventions can change behavior. For example, adolescents who write a letter to their future selves make fewer subsequent delinquent choices, and adults who interact with an age-progressed avatar later allocate more available cash towards retirement plans. While the active ingredients of these manipulations remain to be isolated, enhancing the similarity and vividness of future self representations seems to help. Scalable future self-continuity interventions may open up new channels for enhancing health, education, and wealth.

The need for future self-continuity continues to grow. On the resource front, people are living longer while job stability is decreasing. In the face of increasing automation, institutional social safety nets are shrinking, forcing individuals to bear the full burden of saving for their futures. And yet, in the United States, saving has decreased to the point where nearly half of the population would have difficulty finding $400.00 to cover an emergency expense. On the environmental front, global temperatures continue to rise to unprecedented levels—along with attendant droughts, increases in sea levels, and damage to vulnerable ecologies. Human choice has a hand in these problems. Perhaps increased future self-continuity—in individuals as well as policymakers—could generate solutions.

The dawn of a new year is as good a time as any to take the perspective of your future self. Imagine yourself in five years. Did you do everything you could today to make the world a better place—both for your present and future selves? If not, what can you change?

athena_vouloumanos's picture
Associate Professor of Psychology, Director, NYU Infant Cognition and Communication Lab, New York University

Clear instruction is essential for learning. But even the clearest instruction can be of limited use, if the learner is not at the right place to receive it. Psychologist Lev Vygotsky had a remarkable insight about how we learn. He coined the term zone of proximal development to describe a sweet spot for learning in the gap between what a learner could do alone, and what that learner could do with help from someone providing knowledge or training just beyond the learner’s current level. With such guidance, learners can succeed on tasks that were too difficult for them to master on their own. Crucially, guidance can then be taken away, like scaffolding, and learners can succeed at the task on their own.

The zone of proximal development introduces three interesting twists to cognitive scientists’ notions of learning. First, it might lead us to reconsider notions of what a person “knows” and “knows how to do.” Instead, conceptualizing peak knowledge or abilities as a learner’s current maximal accomplishments under guidance directs our attention to people’s potential for learning and growth, and helps us avoid reifying test scores and grades. Second, it introduces the idea of socially constructed knowledge, created in the interstitial space between the learner and the person providing guidance. Thinking about knowledge as an act of dynamic creation empowers teachers and learners alike. Third, it provides a nuanced caveat to findings showing that explicit instruction can actually make learning worse in some situations. Recent studies show that novices given instruction generated less creative solutions than novices engaged in unguided discovery-based exploration, but the zone of proximal development reminds us that the nature of the instruction relative to the learners’ state of readiness matters.

The zone of proximal development needs to be more widely known to parents, teachers, and anyone learning anything new (which hopefully includes all of us).

Teachers who understand students’ current knowledge state can present new information that takes students just beyond it, to a new level of understanding. Subtraction could then be introduced using simpler terms like “taking away” to some students, and in terms of a number line to students with a more developed number sense.

Parents who understand their children’s current abilities can give specific guidance by, say, verbally instructing a child to look at the straight edges of puzzle pieces to understand which pieces belong on the outside, or physically demonstrating how two puzzle pieces can interlock. Whereas encouraging children with generic praise can help them persevere, giving specific verbal or physical guidance in the child’s zone of proximal development can help children learn to solve puzzles on their own. 

david_rowan's picture
Editor, WIRED UK

For all its fiber-enabled, live-video-streaming, 24/7-connected promise, our information network encapsulates a fundamental flaw: it’s proving a suboptimal system for keeping the world informed. While the network embraces nodes dedicated to propagating a rich seam of information, its governing algorithms are optimized to connect us to what they believe we are already looking for, so we tend to retreat into familiar and comfortably self-reinforcing silos: idea chambers whose feeds, tweets and updates inevitably echo our pre-existing prejudices and limitations. The wider conversation, a precondition for a healthy intellectual culture, isn’t getting through. The signals are being blocked. The algorithmic filter is building ever-higher walls. Facts are being invalidated by something called “post-truth.” And that’s just not healthy for the quality of informed public debate that Edge has always celebrated.

Thankfully a solution is suggested by neural networks of a biological kind. Inside our brains, no neuron ever makes direct contact with another neuron; these billions of disconnected cells pursue their own individual agendas without directly communicating with their neighbors. But the reason we are able to form memories, or sustain reasoned debate, is that the very gaps between these neurons are programmed to build connections between them. These gaps—called synapses—connect individual neurons using chemical or electrical signals, and thus unite isolated brain cells into a healthy central and peripheral nervous system. The synapses transfer instructions between neurons, link our sense receptors to other parts of our nervous system, and carry messages destined for our muscles and glands. Truth be told, without these unheralded gap-bridging entities called synapses, our disconnected brain cells would be pretty irrelevant.

We need to celebrate the synapse for its vital role in making connections, and indeed to extend the metaphor to the wider worlds of business, media and politics. In an ever-more atomized culture, it’s the connectors of silos, the bridgers of worlds, that accrue the greatest value. And so we need to promote the intellectual synapses, the journalistic synapses, the political synapses—the rare individuals who pull down walls, who connect divergent ideas, who dare to link two mutually incompatible fixed ideas in order to promote understanding.

Synaptic transfer in its scientific sense can be excitatory (encouraging the receiving neuron to forward the signal), or inhibitory (blocking the receiving neuron from further communicating the message). Combined, these approaches ensure a coherent and healthy brain-body ecosystem.  But as we promote the metaphorical sense of synaptic transfer, we can afford to be looser in our definition. Today we need synapse-builders who break down filter bubbles and constrained world-views by making connections wherever possible. These are the people who further healthy signalling by making unsolicited introductions between those who might mutually benefit; who convene dinner salons and conferences where the divergent may unexpectedly converge; who, in the Bay Area habit, “pay it forward” by performing favors that transform a business ecosystem from one of hostile competitiveness to one based on hope, optimism and mutual respect and understanding.  

So let’s re-cast the synapse, coined a century ago from the Greek words for “fasten together,” and promote the term to celebrate the gap-bridgers. Be the neurotransmitter in your world. Diffuse ideas and human connections. And help move us all beyond constrained thinking.

ursula_martin's picture
Professor of Computer Science, University of Oxford

Open up Ada Lovelace’s 1843 paper about Charles Babbage’s unbuilt Analytical Engine, and, if you are geek enough, and can cope with long 19th-century sentences, it is astonishingly readable today.

The Analytical Engine was entirely mechanical. Setting a heavy metal disc with ten teeth stored a digit, a stack of fifty such discs stored a fifty-digit number, and the store, or memory, would have contained 100 such stacks. A basic instruction to add two numbers moved them from the store to the mill, or CPU, where they would be added together, and moved back to a new place in the store to await further use: all mechanically. It was to be programmed with punched cards, representing variables and operations, with further elaborate mechanisms to move the cards around, and reuse groups of them when loops were needed. Babbage estimated that his gigantic machine would take three minutes to multiply two twenty-digit numbers.
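As a rough sketch only, the store-and-mill arrangement described here can be mimicked in a few lines of Python; the class and method names below are modern inventions for illustration, not Babbage's notation, and nothing mechanical is modeled.

```python
class Store:
    """The memory: numbered columns, each holding one number."""
    def __init__(self, columns=100):
        self.columns = [0] * columns

    def read(self, i):
        return self.columns[i]

    def write(self, i, value):
        self.columns[i] = value


class Mill:
    """The CPU: fetches operands from the store, operates, writes the result back."""
    def __init__(self, store):
        self.store = store

    def add(self, src_a, src_b, dest):
        # Move two numbers from the store into the mill, add them,
        # and return the sum to a new place in the store.
        self.store.write(dest, self.store.read(src_a) + self.store.read(src_b))


store = Store()
mill = Mill(store)
store.write(0, 12345678901234567890)   # a twenty-digit number
store.write(1, 98765432109876543210)
mill.add(0, 1, 2)                       # one "punched-card" instruction
print(store.read(2))                    # 111111111011111111100
```

Nothing mechanical survives in the sketch, which is rather the point of the abstractions discussed next.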

The paper is so readable because Lovelace describes the machine, not in terms of elaborate ironmongery, but using abstractions—store, mill, variables, operations and so on. These abstractions, and the relations between them, capture the essence of the machine, in identifying the major components and the data that passes between them. They capture, in the language of the day, one of the core problems in computing then and now, that of exactly what can and cannot be computed with different machines. The paper identifies the elements needed “to reproduce all the operations which intellect performs in order to attain a determinate result, if these operations are themselves capable of being precisely defined,” and these—arithmetic, conditional branching and so on—are exactly the elements that Turing needed one hundred years later to prove his results about the power of computation.

You can’t point to a variable or an addition instruction in Babbage’s machine—only to the mechanical activities that represent them. What Lovelace can only tackle with informal explanation was made more precise in the 1960s when computer scientists such as Oxford’s Dana Scott and Christopher Strachey used separate abstractions to model both the machine and the program running on it, so that precise mathematical reasoning could predict its behavior. These concepts have become further refined as computer scientists like Samson Abramsky seek out more subtle abstractions using advanced logic and mathematics to capture not only classical computers, but quantum computation as well.

Identifying a good abstraction for a practical problem is an art as well as a science: capturing the building blocks of a problem, and the elements connecting them, with just the right amount of detail, not too little and not too much. The abstraction hides the intricacies of each block’s internals, so that the designer only needs to focus on the elements required to interact with other components. Jeannette Wing characterizes these kinds of skills as computational thinking, a concept that can be applied in many situations, not just programming.

Lovelace herself identified the wider power of abstraction and wrote of her ambition to understand the nervous system through developing “a law, or laws, for the mutual actions of the molecules of the brain.” And computer scientists today are indeed extending their techniques to develop suitable abstractions for this purpose. 

amanda_gefter's picture
Science writer; Author, Trespassing on Einstein's Lawn

What exactly do brains do? The usual answer is that they form mental representations of the world on the far side of the skull. Brains, that is, create internal virtual worlds—their best or most useful simulations of the real external world, one which exists independently of any of them but within which they all reside.

The problem, however, is that fundamental physics denies the existence of this observer-independent world. From quantum physics in the early 20th century to the black hole firewall debate that rages today, physicists have found that we tangle ourselves in paradox and violate laws of physics when we attempt to compile multiple viewpoints into a single spacetime. The state of a physical system, we’ve learned, can only be defined relative to a given observer. (Here “observer” does not mean consciousness, but a physical system capable of acting as a measuring device—yet one that itself must enjoy only a relational existence.) Slices of spacetime accessed by different observers cannot be considered broken shards of a single, shared world, but rather as self-contained and incommensurable versions of reality, each a universe unto itself.

In other words, there’s no third-person view of the world. There is one world per observer, and no more than one at a time.

What happens, then, to the concept of representation? What is it that brains are doing if there is no observer-independent world out there for them to represent?

One possibility is that instead of representing the world, brains enact one.

The term “enactivism” was introduced by Francisco Varela and colleagues in the 1990s, but it’s taken on fresh significance in light of an emerging set of ideas at the forefront of cognitive science today, including embodied cognition, the Bayesian brain, active inference and the free energy principle—ideas that emphasize top-down, generative perception wherein observers actively shape the worlds they perceive. The old passive view of perception is giving way to an active one, just as the Newtonian observer was replaced by the participator of modern physics. Suddenly the words of physicist John Archibald Wheeler apply to cognitive science: “We used to think that the world exists out there, independent of us, we the observer safely hidden behind a one-foot thick slab of plate glass, not getting involved, only observing. However, we’ve concluded that that isn’t the way the world works. We have to smash the glass, reach in.”

According to enactivism, observer and world co-evolve, hoisting one another up by their bootstraps through reciprocal interaction. Perception and action are inextricably and cyclically linked: our perceptions guide our actions and our actions determine what we perceive. Their symmetry ensures an even more essential symmetry, that between observer and world. The observer’s actions form the world’s perceptions, and the world’s actions form the observer’s perceptions. Labels such as “observer” and “observed,” “inside” and “outside,” thus become profoundly interchangeable, removing any need to resort to mystical, magical or hopelessly vague talk about consciousness as something over and above the physical world.

What the enactivist perspective leaves us with, ontologically speaking, is an observer and world that exist relative to one another and not in any absolute way. The enacted world is not one that can be described from a third person perspective. It is observer-dependent, rendered in first person, no more than one at a time. Which, of course, is exactly the ontology prescribed by fundamental physics.

Does it really matter if cognitive science aligns with fundamental physics? For the everyday purposes of research and practice in neuroscience, representation works well enough, just as our belief in a single, shared physical world mostly suffices, whether we’re driving to work or launching rockets to the moon. It’s only when we push to the outermost edges of either discipline—when dealing with tiny distances or intense gravity in physics, or when asking about the fundamental nature of consciousness in cognitive science—that the cracks in the third person perspective begin to show and a more fundamental theory is needed.

Fundamental physics and cognitive science have long been embroiled in a perpetual game of chicken or egg: the brain arises from the physical world and yet everything we know of the physical world we know only through the brain. Each serves as foundation for the other. So long as we restrict our inquiry to one at a time, something critical is left unexplained. If we want to understand reality as a whole, we need to understand both sides of the coin and how they are fused together. Physics finds that the world is observer-dependent but remains silent on the nature of the observer. Cognitive science finds that the observer is an active participant but never questions the nature of the physical world. Enactivism might just be the concept we need to begin to piece the two sides together. 

nancy_etcoff's picture
Psychologist and faculty member of the Harvard Medical School and of Harvard University

Humans and other animals fall for hyperbole. Exaggeration is persuasive; subtlety exists in its shadows. In a famous set of studies done in the 1950s, the biologist and ornithologist Niko Tinbergen created “supernormal stimuli,” simulacra of beaks and eggs and other biologically salient objects that were painted, primped, and blown up in size. In these studies, herring gull chicks pecked more at big red knitting needles than at adult herring gull beaks, presumably because they were redder and longer than the actual beaks. Plovers responded more to eggs with striking visual contrast (black spots on a white surround) than to natural but drabber eggs with dark brown spots on a light brown surround. Oystercatchers were willing to roll huge eggs into their nests to incubate them. Later studies, as well as recordings in the wild, show supernormal stimuli hijacking a range of biologically driven responses. For example, female stickleback fish get swollen bellies when they are ripe with eggs. When Tinbergen’s student Richard Dawkins made the dummy rounder and more pear-shaped, greater lust was inspired. He called these dummies “sex bombs.” Outside of the lab, male Australian jewel beetles have been recorded trying to mate with beer bottles made of shiny brown glass whose light reflections resemble the shape and color of female beetles.

Research on the evolution of signaling shows that animals frequently alter or exaggerate features to attract, mimic, intimidate, or protect themselves from conspecifics, sometimes setting off an arms race between deception and the detection of such deception. But it is only humans who engage in conscious manipulation of signals using cultural tools in real time rather than relying on slow genetic changes over evolutionary time. We live in Tinbergen’s world now, surrounded by supernormal signals produced by increasingly sophisticated cultural tools. We need only compare photoshopped images to the un-retouched originals, or compare, as my own studies have done, the perceptions of the same face with and without cosmetics to see that relatively simple artificially created exaggerations can be quite effective in eliciting heightened positive responses that may be consequential. In my studies the makeup merely exaggerated the contrast between the woman’s features and the surrounding skin.

How do such signals get the brain’s attention? Studies of the brain's reward pathways suggest that dopamine plays a fundamental role in encouraging basic biological behaviors that evolved in the service of natural rewards. Dopamine is involved in learning, and responds to cues in the environment that suggest potential gains and losses. In the early studies of the 1950s, before the role of dopamine was known, scientists likened the effects of supernormal stimuli to addiction, a process we now know is mediated by dopamine.

Are superstimuli leading to behavioral addictions? At the least, we can say that they often waste time and resources with false promises. We fall down rabbit holes where we pursue information we don’t need, or buy more products that seem exciting but offer little of real value or gain. Less obviously, they can have negative effects on our responses to natural stimuli, to nutritious foods rather than fast foods, to ordinary looking people rather than photoshopped models, to the slow pleasures of novel and nonfiction reading rather than games and entertainment, to the examined life rather than the unexamined and frenetic one.

Perhaps we can move away from the pursuit of “supernormal” to at least sometimes considering the “subtle” and the “fine,” to close examination and deeper appreciation of the beauties and benefits that lie hidden in the ordinary.

sarah_demers's picture
Horace D. Taft Associate Professor of Physics, Yale University

When we measure a value that is consistent with the prediction, our tendency is to trust the result. When we get an unexpected answer, we apply greater scrutiny, trying to determine whether we’ve made an error before believing the surprise. This tilts us toward revealing errors in one category of measurements and leaving them unexposed in another. It is particularly dangerous when our assumptions are flawed—and if there is anything we should bet on, given the history of progress within physics, it’s that our underlying assumptions are in some way flawed. Bias can creep into the scientific process in predictable and unpredictable ways.

Blind analyses are employed as a protection against bias. The idea is to fully establish procedures for a measurement before we look at the data so we can’t be swayed by intermediate results. They require rigorous tests along the way to convince ourselves that the procedures we develop are robust and that we understand our equipment and techniques. We can’t “unsee” the data once we’ve taken a look.

There are options when it comes to performing a blind analysis.  If you are measuring a particular number, you can apply a random offset to the number that is stored but not revealed to the analyzers. You complete the full analysis and reveal the offset and true result only when the work is done. Another method is to designate a sensitive segment of the data, the “signal,” as off limits. You don’t look at the signal until you’re convinced that you understand the remaining data, the “background.”  You can fully develop your analysis using the background and a simulated fake signal. Only when the analysis is fully developed do you look at the signal and obtain the result, a process known as “opening the box.” Another flavor of blind analysis was employed by the LIGO experiment in the discovery of gravitational waves. Fake signals were periodically inserted into the data so that full analyses were undertaken without analyzers knowing if the signals they were seeing were real. They carried the analyses all the way to the point of preparing the corresponding publication before they learned if they were analyzing real or fake data. 
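As an illustration of the first flavor described above, here is a minimal Python sketch of hidden-offset blinding; the numbers and the pipeline are invented stand-ins, not any experiment's actual code.

```python
import random

TRUE_MEASUREMENT = 3.1742                      # pretend this came off the apparatus
_blinding_offset = random.uniform(-1.0, 1.0)   # drawn once, stored, never shown

def blinded_value():
    """All the analyzers are allowed to see while the procedures are developed."""
    return TRUE_MEASUREMENT + _blinding_offset

# ... the full analysis is designed, tested, and frozen against blinded_value() ...

def open_the_box():
    """Run exactly once, after everything is frozen, to reveal the true result."""
    return blinded_value() - _blinding_offset

print("blinded value :", blinded_value())
print("true value    :", open_the_box())
```

The point is purely procedural: because every intermediate result carries the unknown offset, no one can steer the analysis toward, or away from, an expected answer.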

Blind analyses force scientists to approach their work with humility, acknowledging the potential for bias to influence the process. They require creativity and rigor as we establish an understanding of the data without direct access to it. They enforce good stewardship of the data, which can represent significant investments into experiments that are not easily repeatable. They highlight the mystery and anticipation inherent in the discovery process where opening the box has the potential to reveal a surprise. Humility, rigor, stewardship, and mystery are the essential ingredients of blind analyses and represent the best that science has to offer.    

aubrey_de_grey's picture
Gerontologist; Chief Science Officer, SENS Foundation; Author, Ending Aging

Many years ago, Francis Crick promoted (attributing it to his long-time collaborator Leslie Orgel) an aphorism that dominates the thinking of most biologists: "Evolution is cleverer than you are." This is often viewed as a more succinct version of Theodosius Dobzhansky's famous dictum: "Nothing in biology makes sense except in the context of evolution." But these two observations, at least in the terms in which they are usually interpreted, are not so synonymous as they first appear.

Most of the difference between them comes down to the concept of maladaptation. A maladaptive trait is one that persists in a population in spite of inflicting a negative influence on the ability of individuals to pass on their genes. Orgel's rule, extrapolated to its logical conclusion that evolution is pretty much infinitely clever, would seem to imply that this can never occur: evolution will always find a way to maximize the evolutionary fitness of a population. It may take time to respond to changed circumstances, yes, but it will not stabilize in an imperfect state. And yet, there are many examples where that is what seems to have occurred. In human health, arguably the most conspicuous case is that the capacity to regenerate wounded tissues is lost in adulthood (sometimes even earlier), even though more primitive vertebrates (and, to a lesser extent, even some other mammals) retain it throughout life.

This defiance of Orgel's rule is not, however, in conflict with Dobzhansky's. That's because of the phenomenon of pleiotropy, or trade-offs. Sometimes, the advantage gained by optimizing one aspect of fitness is outweighed by some downside that results from the same genetic machinery. The stable state to which the species thus gravitates is then a happy—but not perfectly happy—medium between the two extremes that would optimize the corresponding aspects.

Why is this so important to keep in mind? Many reasons, but in particular it's because when we get this wrong, we can end up making very bad evaluations of the most promising way to improve our health with new medicines. Today, the overwhelming majority of ill-health in the industrialized world consists of the diseases of late life, and we spend billions of dollars in the attempt to alleviate them—but our hit rate in developing even very modestly effective interventions has remained pitifully low for decades. Why? It's largely because the diseases of old age, being by definition slowly-progressing chronic conditions, are already being fought by the body to the best of its (evolved) ability throughout life, so that any simplistic attempt to augment those pre-existing defenses is awfully likely to do more harm than good. The example I gave above, of declining regenerative capacity, is a fine example: the body needs to trade better regeneration against preventing cancer, so we will gain nothing by an intervention that merely pushes that trade-off away from its evolved optimum.

Why should it be, though, that evolution accepts these trade-offs? In reality, it doesn't: it is always looking for ways to get closer to the best of both worlds. But that, too, must be considered in the context of how evolution actually works. Some adaptations, even if they may theoretically be possible, just take too long for evolution to find, so what we see is the best that evolution could manage in the time it had. It looks like stability—as if evolution has decided that a particular trade-off is good enough—but that's really just an approximation.

So in summary: follow Orgel when you're coming up with new ideas, but follow Dobzhansky when you're engaging in the essential rigorous evaluation of those ideas.

david_dalrymple's picture
Research affiliate, MIT Media Lab

Given the operation of squaring a number, there are two numbers that are special because the operation doesn't change them: 0 and 1. 0 squared is 0, and 1 squared is 1, but square any other number and the output will differ from the input. These special numbers are called fixpoints (of squaring). In general, a fixpoint is a value (or state of a system) that is left unchanged by a particular operation. This concept, easily definable by the brief equation x = f(x), lies at the heart of several other ideas with great practical significance, from Nash equilibria (used in economics and social sciences to model failures of cooperation) to stability (used in control theory to model systems ranging from aircraft to chemical plants) to PageRank (the foremost web search algorithm). It even appears in work at the foundations of logic, to give definition to truth itself!

A prevalent example in everyday life is the occurrence of fixpoints in any strategic context ("game") with multiple players. Suppose each player can independently revise their strategy so as to best respond to the others' strategies. If we consider this revision process as an operation, then its fixpoints are those combinations of strategies for which the revision operation yields exactly the same combination of strategies: that is, no player is incentivized to change their behavior in any way. This is the concept of a Nash equilibrium, a kind of fixpoint that directly applies to many real-world situations—ranging in importance from roommates leaving dirty dishes in the sink to nuclear arsenals. Moving to a different Nash equilibrium (such as disarmament) requires changing the revision operator (e.g., with an agreement that binds multiple players to change their strategies at the same time).
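A minimal Python sketch makes this kind of fixpoint explicit; the payoff numbers below are invented for a dirty-dishes-style dilemma, and the iteration simply applies the revision operation until nothing changes.

```python
# Payoffs are (row player, column player); the numbers are illustrative only.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
STRATEGIES = ["cooperate", "defect"]

def best_response(player, other_strategy):
    # The strategy that maximizes this player's payoff, holding the other fixed.
    def payoff(s):
        pair = (s, other_strategy) if player == 0 else (other_strategy, s)
        return PAYOFFS[pair][player]
    return max(STRATEGIES, key=payoff)

def revise(profile):
    # The revision operation: each player switches to their best response.
    return (best_response(0, profile[1]), best_response(1, profile[0]))

profile = ("cooperate", "cooperate")
while revise(profile) != profile:      # for these payoffs the loop terminates
    profile = revise(profile)
print("fixpoint of revision:", profile)   # ('defect', 'defect')
```

Here ("defect", "defect") is left unchanged by revision, so it is a Nash equilibrium; escaping it, as noted above, requires changing the revision operation itself rather than any one player's strategy.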

Fixpoints are also an extraordinarily useful framework for considering ideas that seem to be defined circularly. As a concrete example, the golden ratio phi can be defined as phi = 1 + 1/phi, which actually means phi is the unique positive fixpoint of the operation that takes the reciprocal and then adds one. In general, self-referential definitions can be classified according to the number of fixpoints of the corresponding operation: more than one, and it's imprecise but potentially serviceable; exactly one, and it's a bona fide definition; lacking any, and it's a paradox (like "this sentence is false"). In fact, it is operations that lack fixpoints which underlie a whole host of deeply related paradoxes in mathematics, and which led to Gödel's discovery that the early-20th-century logicians' quest to completely formalize mathematics is impossible.
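Returning to the golden-ratio example, the circular definition becomes a computation: a short Python sketch that iterates the operation until input and output agree.

```python
# Iterate f(x) = 1 + 1/x from a positive starting guess until x is
# (numerically) unchanged by f: that fixpoint is the golden ratio.
def f(x):
    return 1 + 1 / x

x = 2.0
while abs(f(x) - x) > 1e-12:
    x = f(x)

print(x)   # ~1.6180339887..., satisfying x = 1 + 1/x
```

The negative solution of x = 1 + 1/x is also a fixpoint, but it is unstable under this iteration, which is why the loop above homes in on the positive root.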

Mathematics is not the only reasoning system we might use; in daily life we use ideologies and belief systems that aren't complete—in the sense that they can't express arbitrarily large natural numbers (nor do we care for them to). We can consider persistent ideologies and belief systems as fixpoints of the revision operation of "changing one's mind"—subject to the types of questions, evidence, and methods of reasoning that the ideology judges as acceptable. If there's a fixpoint here, we're unable to change our minds in some cases, even if presented with overwhelming evidence. As with Nash equilibria, the key to escaping such a belief system is to modify the revision operation: considering just one new question (such as "what questions am I allowed to ask?") can sometimes be enough to abolish ideological fixpoints forever. 

One remarkable feature of the scientific belief system is its non-fixedness: new beliefs are constantly integrated, and old beliefs are not uncommonly discarded. Ideally, science would be complete in the limit of infinite time and experiments without losing its openness, analogous to how programs that produce never-ending lists (such as the digits of pi or the prime numbers) are formally given meaning by infinitely large fixpoints which may only be successively approximated. While an absence of fixpoints in the logical foundations of arithmetic dooms it to "incompleteness," fixpoint theorems have recently been used to show that if we relax our notion of completeness to the almost-equally-satisfying concept of "coherence," there is a revision operation which is guaranteed to have a coherent fixpoint, and can even be approximated by computable algorithms!

Perhaps science, too, aspires to an unreachable, infinite fixpoint in which all knowable facts are known and all provable consequences are proven, such that there would be no more room to change one's mind—and we hope that with each passing year, our current state of knowledge more closely approximates that ultimate fixpoint.

gregory_cochran's picture
Consultant; Adaptive Optics and Adjunct Professor of Anthropology, University of Utah; Coauthor (with Henry Harpending), The 10,000 Year Explosion

R = h²S.

R is the response to selection, S is the selection differential, and h² is the narrow-sense heritability. This is the workhorse equation for quantitative genetics. The selection differential, S, is the difference between the average of the parental population (some subset of the total population) and the average of the whole population. Almost everything is moderately to highly heritable, from height and weight to psychological traits.

Consider IQ. Imagine a set of parents with IQs of 120, drawn from a population with an average IQ of 100. Suppose that the narrow-sense heritability of IQ (in that population, in that environment) is 0.5. The average IQ of their children will be 110. That’s what is usually called regression to the mean.

Do the same thing with a population whose average IQ is 85. We again choose parents with IQs of 120, and the narrow-sense heritability is still 0.5. The average IQ of their children will be 102.5—they regress to a lower mean.
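The arithmetic in both examples is just the breeder's equation applied twice; a few lines of Python (a sketch, with the essay's numbers hard-coded) make that explicit.

```python
# R = h^2 * S: response to selection from the selection differential.
def expected_child_mean(parent_mean, population_mean, h2):
    S = parent_mean - population_mean   # selection differential
    R = h2 * S                          # response to selection
    return population_mean + R

print(expected_child_mean(120, 100, 0.5))   # 110.0, the first example
print(expected_child_mean(120,  85, 0.5))   # 102.5, the second example
```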

You can think of it this way. In the first case, the parents have 20 extra IQ points. On average, 50% of those points are due to additive genetic factors, while the other 50% is the product of good environmental luck. By the way, when we say "environmental,” we mean “something other than additive genetics.” It doesn’t look as if the usual suspects—the way in which you raise your kids, or the school they attend—contribute much to this "environmental" variance, at least for adult IQ. We know what it’s not, but not much about what it is, although it must include factors like test error and being hit on the head with a ball-peen hammer.

The kids get the good additive genes, but have average "environmental" luck—so their average IQ is 110. The luck (10 pts worth) goes away.

The 120-IQ parents drawn from the IQ-85 population have 35 extra IQ points, half from good additive genes and half from good environmental luck. But in the next generation, the luck goes away… so they drop 17.5 points.

The next point is that the luck only goes away once. If you took those kids from the first group, with an average IQ of 110, and dropped them on a friendly uninhabited island, they would eventually get around to mating—and the next generation would also have an average IQ of 110. With tougher selection, say by kidnapping a year’s worth of National Merit Finalists, you could create a new ethny with a far higher average intelligence than any that exists today. Eugenics is not only possible, it’s trivial.

So what can you explain with the breeder’s equation? Natural selection, for one thing. It probably explains the Ashkenazi Jews—it looks as if there was (once) an unusual reproductive advantage for people who were good at certain kinds of white collar jobs, along with a high degree of reproductive isolation.

It also explains why the professors’ kids are a disproportionate fraction of the National Merit Finalists in a college town—their folks, particularly their fathers, are smarter than average—and so are they. Reminds me of the fact that Los Alamos High School has the highest test scores in New Mexico. Our local high school tried copying their schedule, in search of the secret. Didn’t work. I know of an approach that would, but it takes about fifteen years.

But those kids, although smarter than average, usually aren’t as smart as their fathers: partly because their mothers typically aren’t theoretical physicists, partly because of regression towards the mean. The luck goes away.

That's the reason why the next generation has trouble running the corporation Daddy founded: regression to the mean, not just in IQ. Dynasties have a similar decay problem: the Ottoman Turks avoided it for a number of generations, by a form of delayed embryo screening (the law of fratricide).

And of course the breeder’s equation explains how average IQ potential is declining today, because of low fertility among highly educated women.

Let me make this clear: the breeder’s equation is immensely useful in understanding evolution, history, contemporary society, and your own family.

And hardly anyone has heard of it. The phrase “breeder’s equation” has not appeared in the New York Times in the last 160 years. 

ashvin_chhabra's picture
Investor, Physicist; Author, The Aspirational Investor

The word scale, when it refers to an object, may refer to a simple ruler. However, the act of measurement is the start of a deep relationship between geometry, physics and many important endeavors of humanity.

Scaling, in the geometrical context, is the act of resizing an object while preserving certain essential characteristics such as its shape. It indicates a relationship that is robust under certain transformations.

One uncovers scaling relationships by measuring and plotting key variables against each other. The simplest scaling relation is a linear one—yielding a straight line on an X-Y plot—denoting proportionality. The number of miles you can travel before refueling scales roughly linearly with the amount of fuel left in your gas tank. Twice the gas—double the miles that can be driven!

More complicated scaling functions include power laws, exponentials, and so on, each of which reflects the underlying spatial geometry or dynamics of the system.

The circumference of a sphere scales proportionally with its radius. The surface area and volume, on the other hand, scale as the square and the cube of the radius, respectively. These power-law relationships show up as straight lines on a log-log plot, and the slopes reflect the dimension of the quantity being measured, demonstrating that length is one-dimensional, area is two-dimensional, and volume is three-dimensional. This is true regardless of the shape of the solids (spherical, cubical, and so on), and so plots using other solids will show the same robust scaling relationships and scaling exponents.
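To see how the exponent falls out of a log-log plot, here is a small Python sketch (synthetic sphere data, ordinary least squares on the logs); the fitted slope recovers the dimension.

```python
import math

radii   = [1.0, 2.0, 5.0, 10.0, 20.0]
volumes = [(4 / 3) * math.pi * r ** 3 for r in radii]   # exact sphere volumes

xs = [math.log(r) for r in radii]
ys = [math.log(v) for v in volumes]

# Least-squares slope of log(volume) against log(radius).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(round(slope, 6))   # 3.0: volume scales as the cube of the radius
```

Run the same fit on surface areas and the slope comes out near 2; on circumferences, near 1, which is exactly the dimensional reading of the slopes described above.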

But sometimes length does not scale with an exponent of one—what then?

Richardson’s pioneering measurements of the length of coastlines across the world showed that measuring with increasing precision gave ever-increasing answers for their lengths. The various data sets (measurements with different precision) for each coastline scaled using an exponent greater than one.

Benoit Mandelbrot, in his classic paper “How long is the coast of Britain?” provided the insight needed to understand this result. As one measures the coastline with ever-increasing precision, one measures and adds the lengths of ever-smaller fjords, nooks and crannies. The coastline is not a smooth object; over a certain range of length-scales it is rough and undulating enough that it behaves like a self-similar object with a dimension between one and two. This insight led to the development of fractal geometry—the geometry of irregular, fractured objects: coastlines, mountains…broccoli. Systems that possess the property of self-similarity often show power-law scaling—alluding to a beautiful link between geometry and physics. Incorporating fractal geometry also turns out to be important for computer algorithms to generate the realistic-looking artificial worlds in movies and video games that we now take for granted.

More generally, a variety of natural phenomena appear scale-invariant (or look the same) when an important underlying physical parameter is changed (or rescaled) in a specific way. This physical parameter is in many cases a carefully constructed dimensionless quantity, e.g., the ratio of two key length scales in the system being studied. The equations describing the phenomena must then obey, and thus reflect, the observed scale invariance.

Richardson’s delightful ditty captures the scale invariance of the Navier-Stokes equations of fluid turbulence—which remain unsolved to this day.

Big whirls have little whirls that feed on their velocity,
and little whirls have lesser whirls and so on to viscosity.

The laws of physics embody scaling. Newton’s law of gravitation states that the attractive gravitational force between two bodies scales linearly with each of their masses, and inversely with the square of the distance between them.

If you get large enough or small enough, almost any scaling law observed in the real world breaks down. This makes us think about the range of applicability and what causes the breakdown. The scaling relations implied by Newton’s laws break down at very small distances—giving way to quantum mechanics. The same happens at very large velocities, giving way to Einstein’s theory of relativity.

Thus it is often useful to refer to a scale to specify the range over which a theory or observed phenomenon is applicable: one refers to the quantum or sub-atomic scale, the human scale, the astronomical scale and so on.

Scaling concepts have found broad applicability in unexpected areas such as the study of social networks.

Technology companies are often not constrained by geography and can easily scale up their user base. Early-stage technology startups often show exponential increases in usage that are then reflected in exploding valuations. Supply-demand problems that require resources to scale exponentially are simply not sustainable, so as companies mature the exponential scaling and valuations must level off. On the other hand, power-law growth indicates a business that may be scalable for a sustained period, and such businesses are sought after by investors with a longer time horizon.

During the first internet bubble (circa 2000), companies rushed to get users, and valuations were based on the number of people using their websites.

Today, social network companies value users on the strength and quality of their interactions with other users. One can argue that valuations of these companies should scale quadratically with the number of users. It remains to be seen if such scaling arguments used to justify higher valuations hold up, compared to more traditional measures such as revenue and profitability of a company.
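
One way to make the quadratic intuition concrete is the familiar Metcalfe-style counting argument (a standard back-of-the-envelope, not a claim made in this essay): among n users the number of possible pairwise interactions is

$$\binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2},$$

so a network with ten times the users has roughly a hundred times the possible connections, provided value really does track those interactions.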

Scaling relationships inevitably break down beyond a certain range, providing an important clue that other effects, either ignored or not yet considered, have become important. One should not ignore these deviations. And when tackling persistent social problems that appear at all scales, one should remember: what works for a family may not work for a business. What works for a business may not work for a nation.

giulio_boccaletti's picture
Chief Strategy Officer of The Nature Conservancy; Author, Water: A Biography

The notion of a “climate system” is the powerful idea that the temperature we feel when we walk outside our door every day of the year, the wind blowing on our faces while taking a walk, the clouds we see in the sky, and the waves we watch rippling on the surface of the ocean as we walk along the beach are all part of the same coherent, interconnected planetary system, governed by a small number of knowable, deterministic physical laws.

The first explicit realization that planetary coherence is an attribute of many environmental phenomena we experience in our lives probably coincided with the great explorations of the 16th and 17th centuries, when Halley—of comet fame—first postulated the existence of a general circulation of the atmosphere, from equator to poles, in response to differential heating of the planet by the sun. The reliability of the easterly winds, which ensured safe sailing westward on the great trade routes of the Atlantic, was a telling clue that something systematic was waiting to be explained. That it must have had something to do with the shape of the earth and its rotation required the genius of Hadley, who—even without the necessary mathematics, only fully available over a century later—was the first to realize just how powerful a constraint rotation could be on the fluid dynamics of the planet.

But even without such explanations, a sense that there is such a thing as a coherent “climate,” in which, for example, weather appears to exhibit coherence over time and space, has followed us through history. After all, the saying “Red sun at night, sailor’s delight; red sun at morning, sailors take warning” must have captured some degree of coherent predictability to survive for centuries! Today we can explain that very rhyme as predicting mid-latitude weather, and we can quantitatively describe it with a particular solution to the simplified Navier-Stokes equations called a Rossby wave, after Carl-Gustaf Rossby, the founder of modern meteorology.

We are so used to the coherent workings of the climate system that most people don’t even think of it as a scientific construct worth knowing. We take for granted—in fact we think it trivial—that in the mid-latitudes many of us enjoy summer, then autumn, followed by winter, then spring and then summer again, in a predictable sinusoidal sequence of temperature, wind, and rain or snow. That is just our “climate,” the sort of thing we read on the first page of a tourist guide to a new country we might visit. Yet we explain such climate by the effect of the planet’s tilt and its revolution around the sun on insolation, and that same simple solar cycle also produces two rainy seasons in some parts of the tropics, and highly predictable yearly monsoons in others. In fact, that the seasonal cycle could be such a complex and varied response to such a simple and predictable forcing is nothing short of astounding.

Even more striking should be the fact that our scientific understanding of fluid motion on that scale has enabled computational models of the atmosphere—the brainchild, at least in theory, of Lewis Fry Richardson, and, in practice, of John von Neumann and Jule Charney, based on relatively simple numerical versions of the Navier-Stokes, continuity and radiative transfer equations—to simulate all those phenomena on a planetary scale to such precision that the untrained eye would struggle to tell the difference from the real thing. That the winds we hear blowing through the trees of our gardens should be part of a coherent system spanning thousands of miles across the planet, from the equator to the pole, responding to astronomical forcing, and that we can explain it quantitatively with equations derived from physical first principles and, within certain limits, predict its behavior, should leave us awestruck.

Few know that the coherence of the climate system also has an extraordinary function. For example, coherent movement on a planetary scale by both the ocean and the atmosphere ensures that heat is transported poleward from the equator at a peak rate of almost 6 PW (or six million billion watts—roughly a thousand times all installed power production capacity in the world), thus ensuring that when people say they are boiling in Nairobi or freezing in Berlin, they can be taken figuratively rather than literally.

The climate system is also a source of great wonder. It is astonishing, and the subject of a great deal of active research, that almost imperceptible differences in those same solar forcing conditions—the so-called Milankovitch cycles, tiny adjustments to the amount of sunlight reaching the earth because of small periodic changes in the shape and orientation of the planet’s orbit around the sun—could result in ice ages, a climate response of such incredible force that it can make the difference between several hundred meters of ice over Chicago and what we have today. And we know that its deterministic coherence does not necessarily imply simplicity: it is capable of generating its own resonant modes—such as those associated with El Niño and La Niña, phenomena that require the coupled interaction of the ocean and the atmosphere—as well as chaotic dynamics.

That we could understand all of this by studying our planet through thousands of measurements from meteorological stations, satellites, buoys, and cores, and that we could explain it using the fundamental laws of physics, is nothing short of miraculous and one of the great accomplishments of modern science. And it is all the more important because we spend our lives in it every day. The planetary state of the climate system is what determines how much water we might have available at any given time, what kind of crops we might be able to grow, which parts of the world will flood and which ones will be parched to death. It matters, a great deal, to all.

nicolas_baumard's picture
Research Scientist at the CNRS, Paris and Associate Professor at ENS, Paris, France

Humans all over the world share the same genome, the same neural architecture and the same behavioral niche (three-generational system of resource provisioning, long-term pair-bonding between men and women, high levels of cooperation between kin and non-kin). At the same time, human cultures are highly variable. Some societies see revenge as a duty, others as a sin; some regard sex as a pleasure, others as a danger, and some reward innovations while others prefer traditions. In many instances, these cultural differences are robust, and can last for millennia despite cultural contacts, political assimilation or linguistic replacement.

How can we account for such variability? Traditionally, it is assumed that cultural variability cannot be explained by species-specific evolved mechanisms and that it must be the product of socially transmitted norms in the form of religious beliefs, informal enforcement or political conquest.

This assumption is based on a common misconception about natural selection, which is wrongly thought to select mechanisms that systematically produce universal, uniform and unchanging behaviors. But all evolved mechanisms, physiological or psychological, actually come with a certain level of flexibility in response to local contexts. This is called phenotypic plasticity. The genotype codes for a mechanism that is able to express different phenotypes (organs, behaviors) in response to detectable and recurring changes in the environment.

Tanning is a case in point. While the mechanism of skin pigmentation is a universal adaptation to protect human cells from ultraviolet damage and to synthesize vitamin D, it responds differently to different contexts, making skin darker in low latitudes (and in the summer) and lighter in high latitudes (and in the winter).

Phenotypic plasticity has been shown to be evolutionarily advantageous when there are different optimal phenotypes in different environments. This is of course the case for tanning: the optimal level of tanning differs according to whether an individual lives in high or low latitudes and whether it is winter or summer. Natural selection must therefore be able to maintain the right skin color despite variations in the environment. One solution is to select for a certain level of plasticity (skin pigmentation is not completely plastic) and to build a mechanism that can detect the amount of light and adjust the level of melanin accordingly.

The study of phenotypic plasticity has long been limited to physical traits (such as skin pigmentation). However, recent research in neuroscience, ecology and psychology has shown that phenotypic plasticity extends to behaviors. For instance, in a harsh and unpredictable environment where the future is dreary, organisms tend to adopt a short-term life strategy: maturing and reproducing earlier, investing less in offspring and in pair-bonding, being more impulsive. On the contrary, in a more favorable and predictable environment, organisms switch to a long-term strategy: maturing and reproducing later, investing in offspring and in pair-bonding, and being more patient. Importantly, these switches between present-oriented and future-oriented behaviors can affect all kinds of behaviors: reproduction and growth of course, but also attitudes toward consumption, investment in learning or health, trust in others, political opinions, technological innovation, etc. In fact, every behavior (and neural structure) for which time and risk are relevant dimensions is likely to involve a certain degree of plasticity.

Could phenotypic plasticity be relevant to explain cultural differences?

When we observe cultural differences between two societies, we can’t help but think that the difference has its roots in different cultural heritages (religious, legal, literary, etc.). This is because we have no alternative mechanism to explain such differences. Suggesting that the two groups differ in terms of psychological mechanisms seems to mean going back to 19th-century "national character" studies or to a mysterious "cultural mindset" that re-describes the phenomenon rather than explaining it. By contrast, phenotypic plasticity offers a plausible (and not mutually incompatible) mechanism to explain why people have different mindsets in different societies, why they are more impulsive, why they trust others less or why they are afraid of innovations. Even better, it makes some predictions as to how the environment should impact people’s psychology. For instance, a common-sense idea is that people should innovate when they are in danger and need an urgent solution. Evolutionary theory suggests the opposite: when resources are scarce and unpredictable, innovation is too risky; better to stick with what you know than to jeopardize everything. And indeed, this is what people do.

Finally, phenotypic plasticity has the potential to solve several limitations of the standard culture-as-transmission-of-information paradigm: Why does the same cultural background (language, religion, ethnicity) give rise to radically different behaviors according to whether an individual was born into a low vs. a high social class, or into an older vs. a more recent generation? Or how does an old and apparently robust cultural phenomenon crumble in a few generations, sometimes in a few years, and without any external cultural input? This may be because different environments triggered different "behavioral strategies" in people, transforming a common cultural heritage into diverging new cultures.

To sum up, phenotypic plasticity is key in the study of human behavior. It provides a framework to account for the fact that the same genome and the same neural architecture can give rise to cultural variability in humans.

ross_anderson's picture
Professor of Security Engineering at Cambridge University

We keep hearing about Big Data as the latest magic solution for all society’s ills. The sensors that surround us collect ever more data on everything we do; companies use it to work out what we want and sell it to us. But how do we avoid a future where the secret police know everything?

We’re often told our privacy will be safe, because our data will be made anonymous. But Dorothy Denning and other computer scientists discovered around 1980 that anonymization doesn’t work very well. Even if you write software that will only answer a query if the answer is based on the data of six or more people, there are lots of ways to cheat it. Suppose university professors’ salaries are confidential, but statistical data are published, and suppose that one of the seven computer science professors is a woman. Then I just need to ask “What is the average salary of computer science professors?” and “What is the average salary of male computer science professors?” And given access to a database of "anonymous" medical records, I can query the database before and after the person I’m investigating visits their doctor and look at what changed. There are many ways to draw inferences.
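
A toy sketch (assuming Python; the salary figures and the six-person query rule are invented for illustration) shows how two permitted aggregate queries pin down one individual's salary exactly:

```python
# Toy differencing attack: two "anonymous" averages reveal one person's salary.
# All names, numbers and the query rule are made up for illustration.
salaries = {
    "Ann": 95_000,   # the one female professor
    "Bob": 88_000, "Carl": 91_000, "Dan": 84_000,
    "Ed": 99_000, "Frank": 86_000, "George": 93_000,
}

def avg_query(names):
    """Answer an average-salary query only if it covers six or more people."""
    if len(names) < 6:
        raise ValueError("query refused: fewer than six people")
    return sum(salaries[n] for n in names) / len(names)

everyone = list(salaries)
men = [n for n in everyone if n != "Ann"]

avg_all = avg_query(everyone)   # allowed: covers 7 people
avg_men = avg_query(men)        # allowed: covers 6 people

# Undo the averaging: total of all seven minus total of the six men.
print(7 * avg_all - 6 * avg_men)   # -> 95000.0, Ann's salary
```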

For about ten years now, we’ve had a decent theoretical model of this. Cynthia Dwork’s work on differential privacy established bounds on how many queries a database can safely answer, even if it’s allowed to add some noise and permitted a small probability of failure. In the best general case, the bound is of the order of N² where there are N attributes. So if your medical record has about a hundred pieces of information about you, then it’s impractical to build an anonymized medical record system that will answer more than about ten thousand queries before a smart interrogator will be able to learn something useful about someone. Common large-scale systems, which can handle more than that many queries an hour, simply cannot be made secure—except in a handful of special cases.

One such case may be where your navigation app uses the locations of millions of cell phones to work out how fast the traffic is moving where. But even this is hard to do right. You have to let programmers link up some successive sightings of each phone within each segment of road to get average speeds, but if they can link between segments they might be able to reconstruct all the journeys that any phone user ever made. To use anonymization effectively—in the few cases where it can work—you need smart engineers who understand inference control, and who also have the incentive to do the job properly. Both understanding and incentive are usually lacking.

A series of high-profile data scandals has hammered home the surprising power of de-anonymization. And it’s getting more powerful all the time, as we put ever more social data and other contextual data online. Better machine-learning algorithms help too; they have recently been used, for example, to de-anonymize millions of mobile phone call records by pattern-matching them against the public friendship graph visible in online social media. So where do we stand now?

In fact, it’s rather reminiscent of the climate change debate. Just as Big Oil lobbied for years to undermine the science of global warming, so also Big Data firms have a powerful incentive to pretend that anonymization works, or at least will work in the future. When people complain of some data grab, we’re told that research on differential privacy is throwing up lots of interesting results and our data will be protected better real soon now. (Never mind that differential privacy teaches exactly the reverse, namely that such protection is usually impossible.) And many of the people who earn their living from personal data follow suit. It’s an old problem: it’s hard to get anyone to understand anything if his job depends on not understanding it.

In any case, the world of advertising pushes towards ever more personalisation. Knowing that people in Acacia Avenue are more likely to buy big cars, and that forty-three year olds are too, is of almost no value compared with knowing that the forty-three year old who lives in Acacia Avenue is looking to buy a new car right now. Knowing how much he’s able to spend opens the door to ever more price discrimination, which although unfair is both economically efficient and profitable. We know of no technological silver bullet, no way to engineer an equilibrium between surveillance and privacy; boundaries will have to be set by other means.

raphael_bousso's picture
Professor, Berkeley Center for Theoretical Physics, UC Berkeley

A hundred years ago, in 1917, Albert Einstein had a problem. He had just come up with a beautiful new theory of gravity called general relativity. But the theory predicted that the universe should either expand or contract. Even for Einstein, this was a bridge too far. The universe, though it might have had a beginning, certainly did not appear to be changing.

General relativity is a rigid theory; little of it can be changed without destroying its elegant mathematical structure. The only wiggle room was a single quantity, which Einstein called the cosmological constant. If he assumed this quantity was zero, the equations looked particularly simple, but they required the universe to be dynamical.

The gravity of ordinary matter (such as galaxies) is attractive. Introducing a positive cosmological constant adds a repulsive counterforce. By setting the cosmological constant to a particular nonzero value at which the two tendencies cancel each other, Einstein thought he could get his theory to spit out a static, unchanging universe.

Einstein later called this idea his “biggest blunder,” an accurate assessment. First, his move doesn’t actually accomplish the task: Einstein’s static universe is unstable, like a pencil balanced on its tip. Because matter is distributed unevenly, the opposing forces couldn’t possibly be arranged to balance out everywhere. Individual regions would soon begin to expand or contract. Worse, Einstein missed out on a spectacular prediction. Had he believed his own equations, he could have anticipated the 1929 discovery that galaxies are in fact receding from one another.

This dramatic turn of events proved that the universe was not static. But an expanding universe didn’t imply that the cosmological constant was necessarily zero! As quantum field theory triumphed in the second half of the 20th century as a description of elementary particles, physicists recognized that the cosmological constant “wants to be there.” The very theories predicting with unprecedented accuracy the behavior of small particles also implied that empty space should have some weight, or “vacuum energy.” This kind of energy happens to be indistinguishable from a cosmological constant, as far as the equations of general relativity are concerned.

So it was no longer an option to set the cosmological constant to zero. Rather, it had to be calculated. Estimates indicated an enormous value—save for some unlikely precise cancellation between large positive and negative contributions from different particles. But a huge cosmological constant would show up as a repulsive force that would blow up the entire universe in a split second (or, alternatively, would cause it to collapse instantly, if the constant happened to come out negative). Evidently this was not what the universe was doing.

This drastic conflict between theory and observation is the “cosmological constant problem.” It remains the most serious problem in theoretical physics. It has contributed to the development of revolutionary ideas, particularly the multiverse and the “landscape” of string theory. 

The string landscape solves the cosmological constant problem by a strategy similar to throwing many darts randomly: Some will hit the bullseye by accident. In string theory there are many different ways of making empty space, and they would all get realized as vast regions in different parts of the universe. In some regions, the universe “hits the bullseye”—the cosmological constant is accidentally small. There, spacetime does not quickly explode or collapse, so structure and observers are more likely to evolve in these lucky regions.

A crucial prediction of this approach was pointed out by Steven Weinberg in 1987: If an accidental near-cancellation is the reason the cosmological constant is so small, then there’s no particular reason for it to be exactly zero. Rather, it should be large enough to have a just-noticeable effect.

In 1998, astronomers did find such an effect. By observing distant supernovae, we can tell that the universe is not just expanding but accelerating (expanding ever more rapidly). Other observations, such as the history of galaxy formation and the present rate of expansion, have since provided independent evidence for the same conclusion. Empty space is filled with vacuum energy. In other words, there’s a positive cosmological constant of a particular value, which we have now measured.

The cause of the acceleration is sometimes described more dramatically as a “mysterious dark energy.” But in science we shouldn’t embrace mystery where there is none. If it walks like a duck and quacks like a duck, we call it a duck. In this case, it accelerates the expansion and affects galaxy formation precisely like a cosmological constant, so we should call it by its name.

One of the most fascinating consequences of a positive cosmological constant is that we’ll never see much more than the present visible universe. In fact, billions of years from today the most distant galaxies will begin to disappear, accelerated out of sight, too far away for their light ever to reach us. Eventually our local group of galaxies will hover alone in a vast emptiness filled with nothing but vacuum energy.

victoria_stodden's picture
Associate Professor of Information Sciences, University of Illinois at Urbana-Champaign

In statistical modeling the use of the Greek letter “epsilon” explicitly recognizes that uncertainty is intrinsic to our world. The statistical paradigm envisions two components: data or measurements drawn from the world we observe; and the underlying processes that generated these observed data. Epsilon appears in mathematical descriptions of these underlying processes and represents the inherent randomness with which the data we observe are generated. Through the collection and modeling of data we hope to make better guesses at the mathematical form of these underlying processes, with the idea that a better understanding of the data generating mechanism will allow us to do a better job modeling and predicting the world around us.
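
In the simplest regression setting this paradigm is often written (in standard textbook notation, not the author's own) as

$$y_i \;=\; f(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),$$

where f stands for the underlying data-generating process we try to estimate from the observations, and the epsilon term is the irreducible randomness that would remain even if f were known exactly.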

That use of epsilon is a direct recognition of the inability of data-driven research to perfectly predict the future, no matter the computing or data collection resources. It codifies that uncertainty exists in the world itself. We may be able to understand the structure of this uncertainty better over time, but the statistical paradigm asserts uncertainty as fundamental.

So we can never expect perfect predictions, even if we manage to take perfect measurements. This inherent uncertainty means doubt isn't a negative, or a weakness, but a mature recognition that our knowledge is imperfect. The statistical paradigm is increasingly being used as we continue to collect and analyze vast amounts of data, and as the output of algorithms and models grows as a source of information. We are seeing the impact across society: evidence-based policy; evidence-based medicine; more sophisticated pricing and market prediction models; social media customized to our online browsing patterns... The intelligent use of the information derived from statistical models relies on understanding uncertainty, as does policy making and our cultural understanding of this information source. The 21st century is surely the century of data, and correctly understanding its use has high stakes.

oliver_scott_curry's picture
Senior Researcher, Director, The Oxford Morals Project, Institute of Cognitive and Evolutionary Anthropology, University of Oxford

Fallibilism is the idea that we can never be 100% certain that we are right, and must therefore always be open to the possibility that we are wrong. This might seem a pessimistic notion, but it is not. Ironically, this apparent weakness is a strength; for admitting one’s mistakes is the first step to learning from them, and overcoming them, in science and society.

Fallibilism lies at the heart of the scientific enterprise. Even science’s most well-established findings—the "laws" of nature—are but hypotheses that have withstood scrutiny and testing thus far. The possibility that they may be wrong, or be superseded, is what spurs the generation of new alternative hypotheses, and the search for further evidence that enables us to choose between them. This is what scientific progress is made of. Science rightly champions winning ideas, ideas that have been tested and have passed, and dispenses with those that have been tested and have failed, or are too vague to be tested at all.

Fallibilism is also the guiding principle of free, open, liberal, secular societies. If the "laws" of nature can be wrong, then think how much more fallible are our social and political arrangements. Even our morals, for example, do not reflect some absolute truth—god-given or otherwise. They too are hypotheses—biological and cultural attempts to solve the problems of cooperation and conflict inherent in human social life. They are tentative, provisional, and capable of improvement; and they can be, and have been, improved upon. The awareness of this possibility—allied to the ambition to seize the opportunity it represents, and the scientific ability to do so—is precisely what has driven the tremendous social, moral, legal and political progress of the past few centuries.

Fallibilism—the notion that we may not be right—does not mean that we must be entirely wrong. So it is not a license to tear everything up and start again. We should respect tried and tested ideas and institutions, and recognize that our attempts to improve on them are equally fallible. Nor does fallibilism lead to "anything goes" relativism—the notion that there is no way to distinguish good ideas from bad. On the contrary, fallibilism tells us that our methods for distinguishing better ideas from worse ones do work, and it urges us to use them to quantify our uncertainty, and to work to resolve it. Better still, to work together. Your opponent is no doubt mistaken, but in all likelihood so are you; so why not see what you can learn from each other, and collaborate to see further?

Anyone wishing to understand the world, or change it for the better, should embrace this fundamental truth.

helena_cronin's picture
Co-director of LSE's Centre for Philosophy of Natural and Social Science; Author, The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today
Sex

The poet Philip Larkin famously proclaimed that sex began in 1963. He was inaccurate by 800 million years. Moreover, what began in the 1960s was instead a campaign to oust sex—in particular, sex differences—in favor of gender.

Why? Because biological differences were thought to spell genetic determinism, immutability, anti-feminism and, most egregiously, women's oppression. Gender, however, was the realm of societal forces; "male" and "female" were social constructs, the stuff of political struggle; so gender was safe sex.

The campaign triumphed. Sex now struggles to be heard over a clamor of misconceptions, fabrications and denunciations. And gender is ubiquitous, dominating thinking far beyond popular culture and spreading even to science—such that a respected neuroscience journal recently felt the need to devote an entire issue to urging that sex should be treated as a biological variable.

And, most profoundly, gender has distorted social policy. This is because the campaign has undergone baleful mission-creep. Its aim has morphed from ending discrimination against women into a deeply misguided quest for sameness of outcome for males and females in all fields—above all, 50:50 across the entire workplace. This stems from a fundamental error: the conflation of equality and sameness. And it's an error all too easily made if your starting point is that the sexes are "really" the same and that apparent differences are mere artifacts of sexist socialization.

Consider that 50:50 gender-equal workplace. A stirring call. But what will it look like? (These figures are UK; but ratios are almost identical in all advanced economies.) Nursing, for example, is currently 90% female. So 256,000 female nurses will have to move elsewhere. Fortunately, thanks to a concomitant male exodus, 570,000 more women will be needed in the construction and building trades. Fifteen thousand women window-cleaners. One hundred twenty-seven thousand women electricians. One hundred forty-three thousand women vehicle-mechanics. One hundred thirty-one thousand women metal-machinists. And 32,000 women telecom-engineers.

What's more, the most dangerous and dirty occupations are currently almost 100% male—at least half a million jobs. So that will require a mass exodus of a quarter of a million women from further "unbalanced" occupations. Perhaps women teachers could become tomorrow's gender-equal refuse-collectors, quarry workers, roofers, water-and-sewage plant operators, scaffolders, stagers and riggers?

And perhaps gender-balanced pigs could fly? At this point, the question becomes: If that's the solution, what on earth was the problem? Gender proponents seem to be blithely unaware that, thanks to their conflation of equality and sameness, they are now answering an entirely different set of concerns—such as "diversity," "under-representation," "imbalance"—without asking what on earth they have to do with the original problem: discrimination.

And the confusions ramify. Bear in mind that equality is not sameness. Equality is about fair treatment, not about people or outcomes being identical; so fairness does not and should not require sameness. However, when sameness gets confused with equality—and equality is of course to do with fairness—then sameness ends up undeservedly sharing their moral high ground. And male/female discrepancies become a moral crusade. Why so few women CEOs or engineers? It becomes socially suspect to explain this as the result not of discrimination but of differential choice.

Well, it shouldn’t be suspect. Because the sexes do differ—and in ways that, on average, make a notable difference to their distribution in today's workplace.

So we need to talk about sex.

Here's why the sexes differ. A sexual organism must divide its total reproductive investment into two—competing for mates and caring for offspring. Almost from the dawn of sexual reproduction, one sex specialized slightly more in competing for mates and the other slightly more in caring for offspring. This was because only one sex was able to inherit the mitochondria (the powerhouse of cells); so that sex started out with sex cells larger and more resource-rich than those of the other sex. And thus began the great divide into fat, resource-laden eggs, already investing in "caring"—providing for offspring—and slim, streamlined sperm, already competing for that vital investment. Over evolutionary time, this divergence widened, proliferating and amplifying, in every sexually reproducing species that has ever existed. So the differences go far beyond reproductive plumbing. They are distinctive adaptations for the different life-strategies of competers and carers. Wherever ancestral males and females faced different adaptive problems, we should expect sex differences—encompassing bodies, brains and behaviour. And we should expect that, reflecting those differences, competers and carers will have correspondingly different life-priorities. And that's why, from that initial asymmetry, the same characteristic differences between males and females have evolved across all sexually-reproducing animals, differences that pervade what constitutes being male or female.

As for different outcomes in the workplace, the causes are above all different interests and temperaments (and not women being "less clever" than men). Women on average have a strong preference for working with people—hence the nurses and teachers; and, compared to men, they care more about family and relationships and have broader interests and priorities—hence little appeal in becoming CEOs. Men have far more interest in "things"—hence the engineers; and they are vastly more competitive: more risk-taking, ambitious, status-seeking, single-minded, opportunistic—hence the CEOs. So men and women have, on average, different conceptions of what constitutes success (despite the gender quest to impose the same—male—conception on all).

And here's some intriguing evidence. "Gender" predicts that, as discrimination diminishes, males and females will increasingly converge. But a study of 55 nations found that it was in the most liberal, democratic, equality-driven countries that divergence was greatest. The less the sexism, the greater the sex differences. Difference, this suggests, is evidence not of oppression but of choice; not socialization, not patriarchy, not false consciousness, not even pink t-shirts or personal pronouns … but female choice.

An evolutionary understanding shows that you can't have sex without sex differences. It is only within that powerful scientific framework—in which ideological questions become empirical answers—that gender can be properly understood. And, as the fluidity of "sexualities" enters public awareness, sex is again crucial for informed, enlightened discussion.

So for the sake of science, society and sense, bring back sex.

andrei_linde's picture
Theoretical Physicist, Stanford; Father of Eternal Chaotic Inflation; Inaugural Recipient, Fundamental Physics Prize

Thanks to online shopping, buying things is now much easier than before. If you do not like something, you can always return it for a full refund. A physicist might say that one can turn the arrow of time back for unused purchases. A cosmologist may comment that we use a similar time-reversal method in our research.

Indeed, to understand the origin of the universe we may study its present evolution, and then solve the Einstein equations back in time. What we find is very similar to what happens when we play a movie backwards. At present, galaxies move away from each other. Playing the movie back shows them moving closer together. Going further back in time, we see that at some point the density of matter becomes infinitely large. This is the cosmological singularity.

Solving the same equations forward in time, starting from the singularity, one finds that all matter appears from the singularity in a huge explosion, called the Big Bang. When the original cosmic fire cools down, matter condenses into galaxies, and they fly away from each other.

The possibility of going back and forth in time is a powerful tool, which helps us visualize the evolution of the universe. The resulting picture is so convincing that many of us use it even now in our lectures. It tells us, essentially, that our universe is a gift sent to us from the cosmological online retailer 14 billion years ago. We can track its delivery from its origin to the present time, running our calculations all the way back to the Big Bang.

Of course, we know that one cannot turn the arrow of time back because of the second law of thermodynamics. But the standard lore of the Big Bang theory was that the expansion of the universe was nearly adiabatic, and therefore approximately reversible. In particular, it was assumed that the total number of elementary particles in the universe did not change much during the cosmological evolution.

Some parts of this picture, however, were problematic. One may wonder, for example, who paid the bill for sending us more than 10^90 elementary particles populating our universe, and who made the universe uniform and suitable for life?

In the early 1980s, it was found that one can solve these problems if soon after the Big Bang there was a short stage of exponentially fast expansion of the universe, called inflation. This idea has undergone numerous modifications, and now we do not even think that the universe was born in fire. Instead, it could have been created from a tiny vacuum-like speck of matter with special properties, weighing less than a gram and containing no elementary particles at all. Normal matter emerged only later, when the universe became exponentially large and its original vacuum-like state decayed.

During the last 35 years, many predictions of this theory have been verified by cosmological observations, but it is nevertheless somewhat difficult to get used to, in part because it does not match the conventional picture of a time-reversible universe.

Indeed, let us try to follow the cosmological evolution back in time in this scenario. At first, galaxies move towards each other and the universe becomes very dense, just as in the standard Big Bang theory. But then something really weird happens. Suddenly, at the moment corresponding to the end of inflation, played back in time, all 10^90 elementary particles populating our part of the universe completely disappear!... And then—not a flash, not a sound, nothing, except the exponentially shrinking and disappearing empty universe.

Thus, in this scenario, the movie played back in time has a nonsensical ending. Making 10^90 elementary particles instantly vanish is absolutely impossible!

But there is nothing wrong with this, if one moves in the proper time direction. According to inflationary theory, all elementary particles in the universe were created by the conversion of vacuum energy to matter during the decay of the original vacuum-like state. The theory of this process is well developed, but the process is irreversible: once particles are created, they cannot be un-created.

This is part of the mechanism that makes the new theory work: We do not need anybody to send us a huge container with more than 10^90 elementary particles; a tiny package with one gram of matter is more than enough. On its way to us, this package started growing, unwrapping itself, producing billions of galaxies containing hundreds of billions of stars. We cannot, even in our imagination, follow this package back to its origin and watch all matter dissolve into nothingness. Our universe is unreturnable, so our only choice is to accept this gift and use it to the best of our abilities.

rolf_dobelli's picture
Founder, Zurich Minds; Journalist; Author, The Art of Thinking Clearly

Try building a tower by piling irregular stones on top of each other. It can be done: eight, nine, sometimes ten stones high. You need a steady hand and a good eye to spot each rock’s surface features. You find such man-made “Zen stone towers” on riverbanks and mountaintops. They last for a while, until the wind blows them over. What is the relationship here between skill and height? Take relatively round stones from a riverbank. A child of two can build a tower two stones high. A child of three, with improved hand-eye coordination, can manage three stones. You need experience to get to eight stones. And you need tremendous skill and a lot of trial and error to go higher than ten. Dexterity, patience and experience can get you only so far.

Now, try with a set of interlocking toy bricks as your stones. You can build much higher. More importantly: your three-year-old can build as high as you can. Why? Standardization. The stability comes from the standardized geometry of the parts. The advantage of skill is vastly diminished. The geometry of the interlocking bricks corrects the errors in hand-movement. But structural stability is standardization’s least impressive feat. Its advantages for collaboration are much more significant.

We have long appreciated the advantages of standardization in business. In 1840, the USA had more than 300 railroad companies, many with different gauges (the width between the inner sides of the rails). Many companies refused to agree on a standard gauge because of heavy sunk costs and the need for barriers to competition. Where two rail lines connected, men had to offload the cargo, sometimes store it and then load it onto new cars. In a series of steps, some by top-down enactment, but mostly by bottom-up coordination, the industry finally standardized gauges by 1886. Other countries saw similar “gauge wars.” England ended them by legislation in 1846.

In the last hundred years, every national government and supranational organization, and virtually every industry has created bodies to deal with standardization. They range from the International Organization for Standardization (ISO) to the World Wide Web Consortium (W3C) to bodies like the “Bluetooth Special Interest Group.” Their goals are always a combination of improved product quality, reputation, safety and interoperability.

What is the best way to achieve optimal standards? While game theory (the study of coordination games) offers a vast body of knowledge, setting standards in the real world is not easy. The advantages, however, are huge: even landing at a relatively low local peak is vastly preferable to no coordination. Let’s call the sum of this theoretical and practical knowledge from management and game theory the “Special Theory of Standardization”—akin to Einstein’s “Special Theory of Relativity.”
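
A toy coordination game (a sketch assuming Python; the payoff numbers are invented for illustration) captures why even an inferior shared standard beats a mismatch:

```python
# Toy coordination game: two railroads choose a gauge; payoffs are (row, column).
# Matching on either gauge beats a mismatch; "narrow" is the lower local peak.
payoffs = {
    ("broad", "broad"):   (3, 3),
    ("broad", "narrow"):  (0, 0),
    ("narrow", "broad"):  (0, 0),
    ("narrow", "narrow"): (2, 2),
}

def is_equilibrium(row, col):
    """Neither railroad gains by unilaterally switching its gauge."""
    r, c = payoffs[(row, col)]
    best_row = max(payoffs[(alt, col)][0] for alt in ("broad", "narrow"))
    best_col = max(payoffs[(row, alt)][1] for alt in ("broad", "narrow"))
    return r == best_row and c == best_col

print([cell for cell in payoffs if is_equilibrium(*cell)])
# -> [('broad', 'broad'), ('narrow', 'narrow')]
```

Both matching outcomes are stable, and the inferior one is exactly the "relatively low local peak" that is still far better than no coordination at all.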

However, standardization is a vastly more powerful concept, one that might lead to a “General Standardization Theory.” Let’s look at a few domains that are undergirded by standardization.

Take matter, which ranges from elementary particles, up through the standardized atoms of the periodic table, to an endless number of discrete molecules. Simple chunkiness doesn’t seem to be enough to build a universe. Apparently, that requires standardized chunks. From a “General Standardization Theory” point of view: Is this the optimal standard or just a local peak? Or take living matter. A cell can work only with standardized building blocks (amino acids, carbohydrates, DNA, RNA, etc.). Could something as complex as a cell ever work outside of standards? A “General Standardization Theory” might provide answers on the limits of complexity that can be achieved without standards.

Further up the chain, in biology, the question is how to get huge numbers of unrelated individuals to cooperate flexibly. Some anthropologists name the invention of religion as the solution. Others suggest the evolution of moral sentiments, the invention of written law or Adam Smith’s invisible hand. I suggest that standardization is at least part of the solution. People can cooperate in ample numbers without standards through all the known mechanisms. But, eventually, groups that use standards outpace groups that do not. Is there a threshold where cooperation breaks down without the injection of standards?

My hypothesis: yes, but it is much higher than Dunbar’s number of approximately 150 individuals, possibly in the tens of thousands. Interestingly, only Homo sapiens devised standardization; no other animal has. Then again, this advance took even humans a long time—until the fifth millennium BC, which brought the standardization of language (writing), the standardization of value (money) and standardized weights.

priyamvada_natarajan's picture
Professor in Departments of Astronomy and Physics at Yale University, focusing on exotica in the universe—dark matter, dark energy, and black holes; Author, Mapping the Heavens

Every day you play with the light of the universe. 
Subtle visitor, you arrive in the flower and the water… 

These lines from one of Pablo Neruda’s poems capture the essence of light bending, or gravitational lensing, a phenomenon that is ubiquitous in the cosmos. Re-conceptualizing gravity in his theory of general relativity, Einstein postulated the existence of space-time, a four-dimensional sheet that describes the universe. This is a beautiful marriage between geometry and physics, wherein all matter in the universe, both ordinary and exotic, causes potholes in the fabric of space-time. Matter dictates how space-time curves, and space-time in turn determines how matter moves.

One important consequence of this formulation is the impact that the rumpled fabric of space-time has on the propagation of light in the universe. Light emitted by distant galaxies gets deflected by the potholes generated by all the mass it encounters en route. This phenomenon of the bending of light is referred to as gravitational lensing.

The consequence of lensing is that the shapes of distant galaxies appear systematically distorted: they look more elongated than their actual, true shapes. The strength of the lensing distortion is directly proportional to the number and depth of potholes encountered, namely, the detailed distribution of matter along the line of sight, as well as to the cosmic distances involved. The alignment also matters: how the distant galaxy, the intervening matter causing the deflection, and we the observers are positioned relative to one another. The strength of the light bending therefore also depends on the geometrical properties of space-time that are encapsulated in a set of cosmological parameters characterizing our universe.

Gravitational lensing by matter is somewhat similar to the optical focusing and defocusing produced by the convex and concave glass lenses we are all familiar with from high school science experiments. Unlike the light bending produced by glass lenses, though, in the case of gravitational lensing by matter in the universe, occasionally, when the alignment is perfect, a single light beam might be cleaved into two—causing the appearance of a pair of images when in reality there is a single distant object that is the source of light. The production of multiple images of the same object is referred to as strong gravitational lensing. This happens only occasionally, though. Most of the time, what is observed are weak distortions in the shapes of distant galaxies when viewed through a cosmic lens.

Lensing was one of the key predictions of the theory of general relativity, proposed in 1915. It was confirmed in 1919, when the bending of starlight during a solar eclipse was detected. During a solar eclipse, the pothole generated by the sun in space-time lines up between the earth and the background stars, causing a measurable deflection of the light from stars in the field. In this instance, no multiple images or distortions are produced, since stars are point sources. But stars in the field have a displaced apparent position due to the curvature of space-time during the line-up. And once the eclipse is over, the stars appear where they really are.
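
A back-of-the-envelope check (a sketch assuming Python and the standard point-mass deflection formula α = 4GM/(c²R), which is not specific to this essay) reproduces the famous number for a ray of starlight grazing the sun:

```python
# Deflection of starlight grazing the sun, alpha = 4*G*M / (c**2 * R),
# using approximate values of the physical constants.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the sun, kg
R_sun = 6.957e8    # radius of the sun, m
c = 2.998e8        # speed of light, m/s

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)      # deflection in radians
alpha_arcsec = math.degrees(alpha_rad) * 3600   # convert to arcseconds
print(f"deflection ≈ {alpha_arcsec:.2f} arcseconds")  # about 1.75
```

That 1.75-arcsecond displacement is what the 1919 eclipse expeditions set out to measure.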

General relativity accurately predicted the displacement between the real and apparent positions. It was the verification of this prediction by the British astronomer Arthur Eddington that made Einstein a celebrity and a household name. The more dramatic predictions of gravitational lensing—the strong distortions in the shapes of lensed distant galaxies, the production of multiple images of the same distant object—have all been verified with data from ground-based telescopes and the Hubble Space Telescope. In fact, the most convincing evidence for the existence of copious amounts of dark matter in the universe comes from the lensing effects produced by the unseen matter on the shapes of distant galaxies.

gregory_benford's picture
Emeritus Professor of Physics and Astronomy, UC-Irvine; Novelist, The Berlin Project

Aging comes from evolution. It isn’t a bug or a feature of life; it’s an inevitable side effect.

Exactly why evolution favors aging is controversial, but plainly it does; all creatures die. It’s not a curse from God or imposed by limited natural resources. Aging arises from favoring short-term benefits, mostly early reproduction, over long-term survival, when reproduction has stopped.

Thermodynamics doesn’t demand senescence, though early thinkers imagined it did. Similarly, generic damage or "wear and tear" theories can’t explain why biologically similar organisms show dramatically different lifespans. Most organisms maintain themselves efficiently until adulthood and then, after they can’t reproduce anymore, succumb to age-related damage. Some die swiftly, like flies, and others, like us humans, can live far beyond reproduction.

Peter Medawar introduced the idea that ageing is a matter of communication failure between generations. Older organisms have no way to pass on genes that helped them survive, if they’ve stopped having offspring. Nature is a highly competitive place, and almost all animals in the wild die before they attain old age. The few that do reach it cannot pass on newly arising longevity genes, so natural selection cannot act on old age. Genetically detrimental mutations whose effects appear only late in life would therefore not be efficiently weeded out by natural selection. Hence they would "accumulate" and, perhaps, cause the decline and damage of ageing.

It turns out that the genes that cause ageing are not random mutations. Rather, they form tight-knit families that have been around as long as worms and fruit flies. They survive for good reasons.

In 1957 George Williams proposed his own theory, called antagonistic pleiotropy. If a gene has two or more effects, with one beneficial and another detrimental, the bad one exacts a cost later on. If evolution is a race to have the most offspring the fastest, then enhanced early fertility could be selected even if it came with a price tag that included decline and death later on. Because ageing was a side effect of necessary functions, Williams considered any alteration of the ageing process to be impossible. Antagonistic pleiotropy is a prevailing theory today, but Williams was wrong: we can offset such effects.

Wear and tear can be countered. Wounds heal, dead cells get replaced, claws regrow. Some species are better at maintenance and repair. Medawar did not agree with Williams that there were fundamental limitations on lifespan. He pointed out that organisms like sea turtles live for great spans, over a century, showing that aging is not a fundamental limitation. It arises from a failure to repair, which can be addressed without implying unacceptable side effects. Some species, like us, have better maintenance and repair mechanisms. These can be enhanced.

Some pursued this by deliberately aging animals, like UC Irvine’s Michael Rose. Rose simply didn’t let fruit fly eggs hatch until half of each fly generation had died. This eliminated some genes that promoted early reproduction but had bad effects later. Over 700 generations later, his fruit flies live more than four times as long as the control flies. These Methuselahs are more robust than ordinary flies and reproduce more, not less, as some biologists predicted.

Delaying reproduction gradually extends the average lifetime. One side realization: University graduates mate and have children later in life than others. They are then slowly selecting for longevity in those better educated. Education roughly correlates with intelligence. Eventually, longevity will correlate more and more with intelligence.

I bought these Methuselah flies in 2006 and formed a company, Genescient, to explore their genetics. We discovered hundreds of longevity genes shared by both flies and humans. Up-regulating the functioning of those repair genes has led to positive effects in human trials.

So though aging is inevitable and emerges from antagonistic pleiotropy, it can be attacked. Recent developments point toward possibly major progress.

For example, a decade ago, the Japanese biologist Shinya Yamanaka found four crucial genes that reset the clock of the fertilized egg. However old parents are, their progeny are free of all marks of age; babies begin anew. This is a crucial feature of all creatures. By using his four genes, Yamanaka changed adult tissue cells into cells much like embryonic stem cells. Applying this reprogramming to adult tissue is tricky, but it beckons as a method of rejuvenating our own bodies.

So though evolution discards us as messengers to our descendants, once we stop reproducing, not all is lost. In the game of life, intelligence bats last.

barnaby_marsh's picture
Evolutionary dynamics scholar; Program in Evolutionary Dynamics, Harvard University; Visitor, Institute for Advanced Study, Princeton

You might not think of humility as a scientific concept, but the special brand of humility that is enshrined in scientific culture is deserving of special recognition for its unique heuristic transformative power. 

I reflect upon Sir Richard Southwood's invitation to an incoming class of Oxford undergraduate biologists: "Remember, perhaps 50% of the facts that you learn may not be quite right, or may even be wrong! It is your job to find out where new ideas are needed." In the scientist's toolkit of concepts for solving problems, scientific humility is among the most useful tools for finding the better pathway. It clarifies, it inspires, and it should be more widely known, practiced, and defended.

Respect for scientific humility gives us license to question in ways all too rare in other professional fields. Allow yourself to ponder: When were you last surprised? When were you last wrong? As scientists, we are explorers and need to wonder and play. We need to have ample freedom to tinker and fail. In contrast to other fields and even the general culture, our field does not progress by the brash power of authority, by skillful interpretation, or by rhetorical style. It does not advance by the mountain of evidence we are able to amass in our favor, but rather, by how well ideas stand up to rigorous probing and humility. The humble scientist suspends judgment, remembering that many breakthroughs start with "What if?" Am I absolutely sure? How do I know that? Is there a better way? The results are compelling.  Even the most complex systems become more orderly as different pieces of knowledge fall into place. 

As we advance in our scientific careers, it is all too easy to feel overconfident in what we know, and in how much we know. The same pressures that face us in our everyday life wait to ensnare us in professional scientific life. The human mind looks for certainty, and finds comfort in parsimony. We see what we want to see, and we believe what makes intuitive sense. We avoid the complex, the difficult, and the unknown. Just look across the sciences, from biochemistry to ecology, where multiple degrees of freedom make many problems seemingly intractable. But are they? Could new tools of computation and visualization enable better models of the behavior of individuals and systems? The future belongs to those brave enough to be humble about how little we know, and how much remains to be discovered.

Scientific humility is the key that opens a whole new possibility space: a space where being unsure is the norm; where facts and logic are intertwined with imagination, intuition, and play. It is a dangerous and bewildering place where all sorts of untested and unjustified ideas lurk. What is life? What is consciousness? How can we understand the complex dynamics of cities? Or even my goldfish bowl? Go there and one can quickly see why, when faced with uncertainty, most of us would rather retreat. Don't. This is the space where amazing things happen.

The clearest and most compelling message from the history of science is that old ideas, even very good old ideas, are regularly augmented or even replaced altogether with new ideas. As the case of classical and quantum mechanics shows, examples can even be highly counterintuitive. Right now, someone somewhere is beginning to question something that we all take for granted, and the result will radically change our future. 

bruce_parker's picture
Visiting Professor, Stevens Institute of Technology; Author, The Power of the Sea: Tsunamis, Storm Surges, and Our Quest to Predict Disasters

There is very little appreciation among the general public (and even among many scientists) of the great complexity of the mechanisms involved in climate change. Climate change significantly involves physics, chemistry, biology, geology, and astronomical forcing. The present political debate centers on the effect of the increase in the amount of carbon dioxide in the Earth’s atmosphere since humankind began clearing the forests of the world and especially began burning huge quantities of fossil fuels, but this debate often ignores (or is unaware of) the complex climate system that this increase in carbon dioxide is expected to change (or not change, depending on one’s political viewpoint).

The Earth’s climate has been changing over the 4.5 billion years of its existence. For at least the past 2.4 million years the Earth has been going through regular cycles of significant cooling and warming in the Northern Hemisphere that we refer to as ice age cycles. In each cycle there is a long glacial period of slowly growing continental ice sheets over large portions of the Northern Hemisphere (which are miles thick, accompanied by a sea level drop of 300+ feet), eventually followed by a rapid melting of those ice sheets (and accompanying rise in sea level) which begins a relatively short interglacial period (which we are in right now). For about 1.6 million years a glacial-interglacial cycle averaged about 41,000 years. This can be seen in the many excellent paleoclimate data sets that have been collected around the world from ice cores, sediment cores from the ocean bottom, and speleothems in caves (stalactites and stalagmites). These data records have recently become longer and higher in resolution (in some cases down to decadal resolution). Changes over millions of years in temperature, ice volume of the ice sheets, sea level, and other parameters can be determined from the changing ratios of various isotope pairs (e.g., 18O/16O, 13C/14C, etc.) based on an understanding of how these element isotopes are utilized differently in various physical, chemical, biological, and geological processes. And changes in atmospheric carbon dioxide and methane can be measured in ice cores from the Greenland and Antarctic ice sheets.

In these data the average glacial-interglacial cycle matches the oscillation of the Earth's obliquity (the angle between the Earth's rotational axis and its orbital axis). The Earth's obliquity oscillates (between 22.1 and 24.5 degrees) on a 41,000-year cycle, causing a very small change in the spatial distribution of insolation (the sunlight hitting and warming the Earth). However, beginning about 0.8 million years ago the glacial-interglacial cycle changed to approximately 82,000 years, and then more recently it changed to approximately 123,000 years. This recent change to longer glacial-interglacial cycles has been referred to by many scientists as the "100,000-year problem" because they do not understand why this change occurred.

But that is not the only thing that scientists do not understand. They still do not understand why a glacial period ends (an interglacial period begins) or why a glacial period begins (an interglacial period ends). The most significant changes in climate over the Earth's recent history are still a mystery.

For a while scientists were also not sure how such a small variation in insolation could lead to the build up of those major ice sheets on the continents of the Northern Hemisphere. Or how it could lead to their melting. They eventually came to understand that there were “positive feedback mechanisms” within the Earth's climate system that slowly caused these very dramatic changes. Perhaps a better understanding of these positive feedback mechanisms will communicate to the public a better appreciation of the complexity we are dealing with in climate change.

The most agreed upon and easiest to understand positive feedback mechanism involves the reflection of sunlight (albedo) from the large ice sheets that build up during a glacial period. Ice and snow reflect more light back into space (causing less warming) than soil or vegetation or water. Once ice sheets begin to form at the beginning of a glacial period, more sunlight is reflected back into space, so there is less warming of the Earth; the Earth grows colder, the ice sheets expand (covering more soil, vegetation, and water) and further increase reflection into space, leading to further cooling and larger ice sheets, and so on. There is also a positive feedback during interglacial periods, but in the warming direction, namely, as ice sheets melt, more land or ocean is exposed, which absorbs more light and further warms the Earth, further reducing the ice sheets, leading to more warming, and so on.
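To see in miniature how a modest feedback can add up to a large change, here is a deliberately toy calculation, not a climate model; the size of the initial cooling and the fraction fed back each round are invented numbers, chosen only to show the pattern of reinforcement.

```python
# Toy illustration of a positive feedback: each round of cooling grows the ice a little,
# which reflects a little more sunlight, which cools a little more. All numbers are invented.

direct_cooling = -1.0   # the initial push, in degrees (illustrative)
f = 0.5                 # fraction of each round's cooling returned as further cooling (illustrative)

total = 0.0
increment = direct_cooling
for _ in range(15):
    total += increment
    increment *= f      # the feedback: this round's cooling seeds a smaller next round

print(f"total after 15 rounds: {total:.3f}")
print(f"limit of the series:   {direct_cooling / (1 - f):.3f}")  # the rounds sum to direct/(1-f)
```

With a feedback fraction of one half, the eventual cooling is twice the initial push; the closer that fraction gets to one, the more dramatic the amplification.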

Carbon dioxide in the atmosphere also varies over a glacial-interglacial cycle. Its concentration is lower during cold glacial periods and higher during warm interglacial periods. There are various processes which can cause carbon dioxide to decrease during cold periods and increase during warm periods (involving temperature effects on carbon dioxide solubility in the ocean, changes in biological productivity in the ocean and on land, changes in salinity, changes in dust reaching the ocean, etc.). So the debate has been whether carbon dioxide causes the warmth that produces an interglacial period or whether something else causes the initial warming which then leads to an increase in carbon dioxide. Either way there is another important positive feedback here involving carbon dioxide because the temperature changes and the changes in carbon dioxide are in the same direction.

There are other possible positive feedback mechanisms and much more detail that cannot be included in such a short essay. But the point here is that to some degree climate scientists understand how, through various positive feedback mechanisms, glacial periods get colder and colder and the ice sheets expand, and interglacial periods get warmer and warmer with melting ice sheets. But the big question remains unsolved. No one has explained how you switch from glacials to interglacials and then back to glacials. No one knows what causes a glacial termination (the sudden warming of the Earth and melting of the ice sheets) or what causes a glacial inception (the not quite as sudden cooling of the Earth and growing of ice sheets). And if you don't understand that, you don't completely understand climate change, and your climate models are lacking a critical part of the climate change picture.

Carbon dioxide may have been higher at various points in the Earth’s history but there is evidence that it has never risen to its present levels as quickly as it has during humankind’s influence. Is it the quantity of carbon dioxide in the atmosphere that is important or is it the speed at which it has increased to that high level? And what influence does such an increase in carbon dioxide occurring at the end of an interglacial period have on the ice age cycle? The key to accurately assessing the degree to which humankind has affected climate change (and especially what the future consequences will be) is to make the climate models as accurate as possible. Which means including all the important physical, chemical, biological, and geological processes in those models. The only way to test those models is to run them in the past and compare their predictions to the paleoclimate data sets that have been meticulously acquired. Right now these models cannot produce glacial terminations and glacial inceptions that accurately match the paleoclimate data. That is the next big hurdle. It would be helpful if those debating anthropogenic global warming could gain a little more understanding about the complexity of what they are debating.

s_abbas_raza's picture
Founding Editor, 3QuarksDaily.com

Suppose we choose an American woman at random. Given just a couple of numbers that we know characterize American women's heights, we can be 95% certain (meaning, we will be wrong only once out of every 20 times we do this) that her height will be between 4 feet 10 inches and 5 feet 10 inches. It is the statistical concept of standard deviation which allows us to say this. To show how, let us take such a data set: the heights of 1,000 randomly chosen American women, and let us plot this data as points on a graph where the x-axis shows height from 0 to 100 inches, and the y-axis shows the number of women who are of that height (we use only whole numbers of inches). If we then connect all these points into a smooth line, we will get a curve that is bell-shaped. Some data sets that have this characteristic shape, including this one, are said to have what is called a normal (also known as Gaussian) distribution, and it is the way in which a great many kinds of data are distributed.

For data that is strictly normally distributed, the highest point on our graph (in this example, the height in inches which occurs most frequently, and which is called the mode) would be at the average (mean) height for American women, which happens to be 64 inches, but with real-world data the mean and the mode can differ slightly. As we move rightward from that peak to greater height along the x-axis, the curve will start sloping down, become steeper, and then gradually become less steep and peter out to zero as it hits the x-axis just after where the tallest woman (or women) in our sample happens to be. The same exact thing happens on the other side as we go to smaller heights. And this is how we get the familiar symmetrical bell-shape. Suppose that the height of Japanese women is also 64 inches on average but it varies less than that of American women because Japan is less ethnically and racially diverse than America. In this case, the bell-shaped curve will be thinner and higher and will fall to zero more quickly on either side. Standard deviation is a measure of how spread out the bell-curve of a normal distribution is. The more the data is spread out, the greater the standard deviation.

The standard deviation is half the distance from one side of the bell curve to the other (so it has the same units as the x-axis), where the curve is about 60% of its maximum height. And it can be shown that about 68% of our data points will fall within plus or minus one standard deviation around the mean value. So in our example of the heights of American women, if the standard deviation is 3 inches, then 68% of American women will have a height between 61 and 67 inches. Similarly, 95% of our data points will fall within two standard deviations around the mean, so in our case, 95% of women will have heights between 58 and 70 inches. And similarly, it can be calculated that 99.7% of our data points will fall within three standard deviations around the mean, and so on for even greater degrees of certainty.
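For readers who want to watch these percentages emerge, here is a minimal simulation using the essay's illustrative numbers (mean 64 inches, standard deviation 3 inches); the sample size is arbitrary.

```python
# Drawing simulated heights from a normal distribution and checking the 68-95-99.7 rule.
import random

random.seed(0)
mean, sd = 64.0, 3.0
heights = [random.gauss(mean, sd) for _ in range(100_000)]

for k in (1, 2, 3):
    lo, hi = mean - k * sd, mean + k * sd
    frac = sum(lo <= h <= hi for h in heights) / len(heights)
    print(f"within {k} standard deviation(s) ({lo:.0f}-{hi:.0f} inches): {frac:.1%}")
# Prints roughly 68%, 95%, and 99.7%.
```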

The reason that standard deviation is so important in science is that random errors in measurement usually follow a normal distribution. And every measurement has some random error associated with it. For example, even with something as simple as just weighing a small object with a scale, if we weigh it 100 times, we may get many slightly different values. Suppose the mean of all our observations of its weight comes out to 1352 grams with a standard deviation of 5 grams. Then we can be 95% certain that the object's actual weight is between 1342 and 1362 grams (mean weight plus or minus two standard deviations). You may have heard reports before the discovery of the Higgs boson at CERN in 2012 that they had a "3 sigma" result showing a new particle. (The lower case Greek letter sigma is the conventional notation for standard deviation, hence it is often also just called "sigma.") The "3 sigma" meant that we can be 99.7% certain the signal is real and not a random error. Eventually a "5 sigma" result was announced for the Higgs particle on July 4th, 2012 at CERN, and that corresponds to a 1 in 3.5 million chance that what they detected was due to random error.
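The translation from sigmas into probabilities needs nothing more than the normal distribution's error function. The sketch below is a quick check of the figures quoted above; the 1-in-3.5-million number corresponds to the one-sided tail beyond 5 sigma, the convention usually quoted for particle-physics discoveries.

```python
# Converting "k sigma" into probabilities for a normal distribution.
import math

def within(k):
    """Probability of landing within +/- k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

def one_sided_tail(k):
    """Probability of exceeding the mean by more than k standard deviations."""
    return 0.5 * (1 - math.erf(k / math.sqrt(2)))

for k in (1, 2, 3, 5):
    print(f"{k} sigma: within = {within(k):.5f}, one-sided tail = {one_sided_tail(k):.2e}")

print("5 sigma is about 1 in", round(1 / one_sided_tail(5)))  # roughly 1 in 3.5 million
```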

It is interesting that measurement error (or uncertainty in observations) is such a fundamental part of science now, but it was only in the 19th century that scientists started routinely incorporating this idea in their measurements. The ancient Greeks, for example, while very sophisticated in some parts of their mathematical and conceptual apparatus, almost always reported observations with much greater precision than was actually warranted, and this often got amplified into major errors. A quick example of this is Aristarchus's impressive method of measuring the distance between the Earth and the Sun by measuring the angle between the line of sight to the Moon when it is exactly half full, and the line of sight to the Sun. (The line between the Earth and the Moon would then be at 90 degrees to the line between the Moon and the Sun, and along with the line from the Earth to the Sun, this would form a right triangle.) He measured this angle as 87 degrees, which told him the distance from the Earth to the Sun is 20 times greater than the distance from the Earth to the Moon.

The problem is that the impeccable geometric reasoning he used is extremely sensitive to small errors in this measurement. The actual angle (as measured today with much greater precision) is 89.853 degrees, which gives a distance between the Earth and the Sun as 390 times greater than the distance between the Earth and the Moon. Had he made many different measurements and also had the concept of standard deviation, Aristarchus would have known that the possible error in his distance calculation was huge, even for a decent reliability of two standard deviations, or 95% certainty in measuring that angle.
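The sensitivity Aristarchus ran into is easy to reproduce: with a right angle at the Moon, the Sun-to-Moon distance ratio is one over the cosine of the measured angle, and near 90 degrees that ratio blows up. A minimal check:

```python
# How the Earth-Sun / Earth-Moon distance ratio depends on Aristarchus's measured angle.
import math

def distance_ratio(angle_deg):
    return 1.0 / math.cos(math.radians(angle_deg))

print(round(distance_ratio(87.0), 1))     # about 19, Aristarchus's "20 times greater"
print(round(distance_ratio(89.853), 1))   # about 390, the ratio implied by the modern angle

# A single degree of measurement error swings the answer enormously:
for angle in (86.0, 87.0, 88.0, 89.0, 89.9):
    print(f"{angle:5.1f} degrees -> ratio {distance_ratio(angle):7.1f}")
```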

john_markoff's picture
Pulitzer Prize-winning Reporter, The New York Times; Author, Whole Earth

First demonstrated in 1999 by a group of researchers led by physicist David R. Smith, metamaterials are now on the cusp of transforming entire industries. The term generally refers to synthetic composite materials that exhibit properties not found in nature. What Smith demonstrated was the ability to bend light waves in directions, described as “left-handed,” that are not observed in natural materials.

As a field of engineering, metamaterials are perhaps the clearest evidence that we are in the midst of a materials revolution that goes far beyond the impact of computing and communications.

The concept has been speculated about for more than a century, particularly in the work of Russian physicist Victor Veselago during the 1960s. However, the results of Smith’s group touched off a new wave of excitement and experimentation in the scientific community. The notion briefly captured the popular imagination some years ago with discussion of the possibility of invisibility cloaks; today, however, it is having a more practical impact across the entire electromagnetic spectrum.

It may soon transform markets such as the automotive industry, where there is a need for less expensive and more precise radars for self-driving vehicles. Metamaterial design has also begun to yield new classes of antennas which will be smaller in size while also being more powerful, tunable, and directional. 

Some of the uses are novel, such as a transparent coating that can be applied to cockpit windshields, serving to protect pilots from harassing lasers beamed from the ground. This is already a commercial reality. Other applications are more speculative, yet still promising. Several years ago scientists at the French construction firm Menard published a paper describing a test of a novel way of counteracting the effects of an earthquake with a metamaterial grid of empty cylindrical columns bored into the soil. They reported that they were able to measure a significant dampening of a simulated earthquake with the array of columns.

While Harry Potter invisibility shields may not be possible, there is clear military interest in metamaterials for new stealth applications. DARPA has a project exploring the possibility of armor for soldiers that would make them less visible, and it is possible that such technology could be applied to vehicles such as tanks. 

Another promising area is the ability to create what is known as a “negative refractive index,” not found in nature, which might yield so-called “superlenses” enabling microscopes that reach past the resolving power of today’s scientific instruments. Metamaterials also promise the ability to filter and control sound in new ways. That holds out the possibility of new kinds of ultrasound devices, peering into the human body with enhanced three-dimensional resolution.
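What a negative refractive index does to light can be illustrated with Snell's law. The index values below are invented, and this is only a sketch of the sign flip, not a model of any particular metamaterial: with a negative index, the refracted ray bends to the same side of the surface normal as the incoming ray, the “left-handed” behavior mentioned above.

```python
# Snell's law with an ordinary versus a (hypothetical) negative refractive index.
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Angle of the refracted ray; a negative result means bending to the 'wrong' side."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    return math.degrees(math.asin(s))

print(refraction_angle(30, 1.0, 1.5))    # ordinary glass: about +19.5 degrees
print(refraction_angle(30, 1.0, -1.5))   # negative-index material: about -19.5 degrees
```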

Researchers note that the applications of these synthetic materials far outstrip the imagination and that new applications will appear as engineers and scientists rethink existing technologies.

Perhaps Harry Potter invisibility is far-fetched, but several years ago Xiang Zhang, a UC Berkeley nanoscientist, speculated that it might be possible to make dust particles disappear in semiconductor manufacturing. By inserting a metamaterial layer in the optical path of the exotic light waves now used to etch molecular-scale transistors, it might be possible to make the contamination effectively invisible. That in turn would lead to a significant jump in the yield of working chips. And it conceivably might put the computer industry back on the Moore’s law curve of ever-more-powerful computing.

eric_r_weinstein's picture
Mathematician and Economist; Managing Director of Thiel Capital

We are told that we are entitled to our own opinions but not our own facts. This leaves out the observation that the war for our minds and attention is now increasingly being waged over neither facts nor opinions but feelings.

In an era in which anyone can publish anything, the quest to control information has largely been lost by institutions, with a race on to weaponize empathy by understanding its basis in linguistics and tweaking the social media algorithms which now present our world to us accordingly. As the theory goes, it is not that we don’t have our own opinions so much as that we have too many contradictory ones, and it is generally our emotional state alone which determines on which ones we will predicate action or inaction.

Russell Conjugation (or “emotive conjugation”) is a presently obscure construction from linguistics, psychology and rhetoric which demonstrates how our rational minds are shielded from understanding the junior role factual information generally plays relative to empathy in our formation of opinions. I frequently suggest it as perhaps the most important idea with which almost no one seems to be familiar, as it showed me just how easily my opinions could be manipulated without any need to falsify facts. Historically, the idea is not new: it seems to have been first defined through several examples given by Bertrand Russell in 1948 on the BBC, without much follow-up work, until it was rediscovered in the internet age and developed into a nearly data-driven science by pollster Frank Luntz beginning in the early 1990s.

In order to understand the concept properly you have to appreciate that most words and phrases are actually defined not by a single dictionary description, but rather two distinct attributes:

I) The factual content of the word or phrase.
II) The emotional content of the construction.

Words can be considered “synonyms” if they carry the same factual content (I) regardless of the emotional content (II). This, however, leads to the peculiar effect that the synonyms for a positive word like “whistle-blower” cannot be used in its place, as they are almost universally negative (with “snitch,” “fink,” and “tattletale” being representative examples). This is our first clue that something is wrong, or at least incomplete, with our concept of synonym: it needs an upgrade to distinguish words that may be content synonyms but emotional antonyms.

The basic principle of Russell Conjugation is that the human mind is constantly looking ahead well beyond what is true or false to ask “What is the social consequence of accepting the facts as they are?”  While this line of thinking is obviously self-serving, we are descended from social creatures who could not safely form opinions around pure facts so much as around how those facts are presented to us by those we ape, trust or fear. Thus, as listeners and readers our minds generally mirror the emotional state of the source, while in our roles as authoritative narrators presenting the facts, we maintain an arsenal of language to subliminally instruct our listeners and readers on how we expect them to color their perceptions. Russell discussed this by putting three such presentations of a common underlying fact in the form in which a verb is typically conjugated:

I am firm. [Positive empathy]
You are obstinate. [Neutral to mildly negative empathy]
He/She/It is pigheaded.  [Very negative empathy]

In all three cases, Russell was describing people who did not readily change their minds. Yet by putting these descriptions so close together and without further factual information to separate the individual cases, we were forced to confront the fact that most of us feel positively towards the steadfast narrator and negatively towards the pigheaded fool, all without any basis in fact.

Years later, the data-driven pollster Frank Luntz stumbled on much the same concept unaware of Russell’s earlier construction. By holding focus groups with new real-time technology that let participants share emotional responses to changes in authoritative language, Luntz was led to make a stunning discovery that pushed Russell’s construction out of the realm of linguistics and into the realm of applied psychology. What he found was extraordinary: many if not most people form their opinions based solely on whichever Russell conjugation is presented to them and not on the underlying facts. That is, the very same person will oppose a “death tax” while having supported an “estate tax” seconds earlier, even though these taxes are two descriptions of the exact same underlying object. Further, such is the power of emotive conjugation that we are generally not even aware that we hold such contradictory opinions. Thus “illegal aliens” and “undocumented immigrants” may be the same people, but the former label leads to calls for deportation while the latter one instantly causes many of us to consider amnesty programs and paths to citizenship.

If we accept that Russell Conjugation keeps us from even seeing that we do not hold consistent opinions on facts, we see a possible new answer to a puzzle that dates from the birth of the web: “If the internet democratized information, why has its social impact been so much slower than many of us expected?” Assuming that our actions are based not on what we know but upon how we feel about what we know, we see that traditional media has all but lost control of gate-keeping our information, but not yet how it is emotively shaded. In fact, it is relatively simple to write a computer program to crawl factually accurate news stories against a look-up table of Russell conjugates to see the exact bias of every supposedly objective story.
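As a rough illustration of how simple such a program could be, here is a minimal sketch: a hand-built lookup table of Russell conjugates and a function that tallies the emotive shading of a passage. The table, its scores, and the sample sentence are invented for illustration; a real system would need a far larger, more carefully constructed table and smarter matching.

```python
# A minimal sketch of scoring a story's emotive shading against a table of Russell conjugates.
import re

# factual content -> {phrasing: emotive score}; negative = hostile shading, positive = sympathetic
CONJUGATES = {
    "leaker": {"whistle-blower": +1, "snitch": -1, "fink": -1, "tattletale": -1},
    "inheritance tax": {"estate tax": +1, "death tax": -1},
    "unauthorized migrant": {"undocumented immigrant": +1, "illegal alien": -1},
}

def emotive_bias(text):
    """Sum the emotive scores of every listed conjugate found in the text."""
    text = text.lower()
    score, hits = 0, []
    for fact, phrasings in CONJUGATES.items():
        for phrase, weight in phrasings.items():
            count = len(re.findall(re.escape(phrase), text))
            if count:
                score += weight * count
                hits.append((fact, phrase, count))
    return score, hits

story = "The snitch leaked documents showing how the death tax hits family farms."
print(emotive_bias(story))   # a negative total: hostile shading of otherwise factual claims
```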

Thus the answer to the puzzle of our inaction, it seems, may be that we built an information superhighway for all but neglected to build an empathy network alongside it to democratize what we feel. We currently get our information from more sources than ever before, but, at least until recently, we have turned to traditional institutions to guide our empathy. Information, as the saying goes, wants to be free. But we fear authentic emotions will get us into trouble with our social group, and so continue to look to others to tell us what is safe to feel.

paul_j_steinhardt's picture
Albert Einstein Professor in Science, Departments of Physics and Astrophysical Sciences, Princeton University; Coauthor, Endless Universe

2016 was the Year of the “Big Bounce.”

Everyone has heard of the Big Bang, the idea that, about fourteen billion years ago, the universe emerged from nothingness through some sudden quantum event into an expanding space-time filled with hot matter and radiation. Many know that the Big Bang alone cannot explain the remarkably uniform distribution of matter and energy observed today or the absence of curves and warps that one might expect after a sudden quantum event. In order to account for these observations, an enhancement has been added: a brief epoch of superluminal expansion, known as inflation, that immediately follows the bang. Inflation added to the Big Bang was supposed to explain how the turbulent and twisted conditions following a bang could have been stretched out, leaving behind a smooth universe except for a pattern of tiny variations in temperature and energy.

But neither the Big Bang nor inflation are proven ideas, and there are good reasons to consider an alternative hypothesis in which the Big Bang is replaced by a Big Bounce. In a universe created by a Big Bang followed by inflation, there immediately arises a number of obvious questions: 

  • What caused the Big Bang?  

  • If the Big Bang is a quantum-dominated event in which space and time have no certain definition, how does the universe ever settle down to a classical space-time described by Einstein’s theory of general relativity in time for inflation to begin? 

  • Even if the universe manages to settle into a classical space-time, why should it do so in the exponentially fine-tuned way required for inflation to begin? 

  • Why don't we observe large-amplitude gravitational waves? If some way were found for inflation to begin, one would expect that the same high-energy inflationary processes that produce fluctuations in temperature and density also generate gravitational waves with large enough amplitude to have been detected by now. 

  • How does inflation end? The current idea is that inflation is eternal and that it transforms the universe into a “multi-mess” consisting of infinitely many patches or universes that can have any conceivable properties with no principle to determine which is more probable.  

All of these questions that have been known for decades and that theorists have failed to answer despite best efforts immediately become moot if the Big Bang is replaced by a Big Bounce. The universe need not ever be dominated by quantum physics, and the large-scale structure of the universe can be explained by non-inflationary processes that occurred during the period of contraction leading up to the bounce. This includes avoiding the multi-mess and producing fluctuations in temperature and density without producing large-amplitude gravitational waves or isocurvature fluctuations that would conflict with observations.

If that is the case, then why haven’t astrophysicists and cosmologists jettisoned the Big Bang and embraced the Big Bounce? The answer in part is that many astrophysicists and cosmologists understand inflation as it was first introduced in the 1980s, when it was sold as a cure-all, and do not appreciate how thorny the questions listed above really are. But perhaps the bigger reason is that, before 2016, there was no theory of the bounce itself and so no Big Bounce theory to compare to. Attempts to construct examples of bounces consistent with quantum physics and general relativity generally led to instabilities and mathematical pathologies that made them implausible. Some even believed they could prove that bounces are impossible. 

2016 was the Year of the Big Bounce because, depending on how one counts them, at least four different theories for producing a stable, non-pathological bounce have been introduced by different groups around the world. Each uses a different set of reasonable assumptions and principles, and each suggests that a smooth transition from contraction to expansion is possible. This is not the place to go into details, but let me briefly describe one specific case discovered by Anna Ijjas and me at the Princeton Center for Theoretical Science earlier this year. In this theory of the Big Bounce, quantum physics is always a minor player, even near the bounce. At each moment, the evolution of the universe is well-described by classical equations that are well-defined and can be solved on a computer using the same sort of tools of numerical general relativity that were introduced to study mergers of black holes and employed in the recent discovery of gravitational waves by the LIGO collaboration. With this approach, the Big Bounce becomes calculable and prosaic.

And once one knows that a Big Bounce is possible, it is hard to go back to considering the Big Bang again. The notion that time has a beginning was always strange, and, as the list above illustrates, it has created more problems in explaining the universe than it has solved.  

The time is ripe for the Big Bounce to become the new meme of cosmology. 

richard_prum's picture
Evolutionary Ornithologist, Director of Franke Program, Yale University; Author, The Evolution of Beauty

In what ways are you related to the contents of your salad? Or to the ingredients of a slice of pepperoni pizza? Or whatever your next meal might be? Of course, consumption is an ecological relationship. Your body digests and absorbs the nutrients from your food, which provide energy for your metabolism and material components for your cells. But another fundamental kind of relationship is more cryptic, and in many ways more profound.  

The answer comes from one of Charles Darwin's least appreciated revolutionary ideas. Darwin is, of course, duly famous for his discovery of the process of natural selection, which is among the most successful concepts in the history of science. Darwin also discovered the process of sexual selection, which he viewed as an independent mechanism of evolution. But Darwin was the first person ever to imply that all of life came from a single or a few common origins, and had diversified over time through speciation and extinction to become the richness of the biotic world we know today. Darwin referred to the history of this diversification as "the great Tree of Life," but today biologists refer to it as phylogeny. It may be Darwin's greatest empirical discovery.

Network science is an exploding field of study. Network analysis can be used to trace neural processes in the brain, uncover terrorist groups from cellphone metadata, or understand the social consequences of cigarette smoking and vaping among cliques of high school students. Biology is a network science. Ecology investigates the food web, while genetics explores the genealogy of variations in DNA sequences. But few understand that evolutionary biology is also a network science. Phylogeny is a rooted network in which the edges are lineages of organisms propagating over time and the vertices are speciation events. The root of the phylogenetic network is the origin of life as we know it—diagnosable by the existence of RNA/DNA-based genetic systems, left-handed amino acids and proteins, right-handed sugars, and (likely) a lipid bilayer membrane. These are the features of the trunk of Darwin's great Tree of Life.

Thus, you are related to the lettuce, the anchovies, the Parmesan, and the chicken eggs in your Caesar salad through the historical network of shared common ancestry. Indeed, there is nothing you could think of as a food that cannot be placed on the Tree of Life. Being a member of this network is currently the most successful definition of life.

Darwin should be world famous for his discovery of phylogeny. But, just as Einstein's discovery of the quantum nature of energy was eclipsed biographically by his discovery of the theory of relativity, Darwin had the mishap of discovering natural selection too. Despite its excellent intellectual roots, phylogeny remains underappreciated today because it was largely suppressed and ignored for most of the 20th century. The architects of the "New Synthesis" in evolutionary biology were eager to pursue an ahistorical science analyzing the sorting of genetic variations in populations. This required shelving the question of phylogeny for some decades. As population genetics became more successful, phylogeny came to be viewed merely as the residuum left behind by adaptive processes. Phylogeny became uninteresting, not even worth knowing.

But the concept of phylogeny has come roaring back in recent decades. Today, discovering the full details of the phylogenetic relationships among the tens of millions of extant species and their myriad of extinct relatives is a major goal of evolutionary biology. Just as a basketball tournament with sixty-four teams has sixty-three games, the phylogeny of tens of millions of living species must have tens of millions minus one branching points. So, biologists have a lot of work ahead. Luckily genomic tools, computing power, and conceptual advances make our estimates of organismal phylogenies better and more confident all the time.
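That tournament arithmetic can be verified with a toy simulation: grow a fully bifurcating tree by splitting one lineage at a time and count the splits. The lineage labels below are hypothetical; the point is only that the number of speciation events is always one less than the number of tip species.

```python
# Growing a random fully bifurcating tree and counting its branching (speciation) events.
import random

def grow_tree(n_tips):
    """Split a randomly chosen tip lineage into two until n_tips exist; return the split count."""
    tips = ["lineage"]
    splits = 0
    while len(tips) < n_tips:
        parent = tips.pop(random.randrange(len(tips)))   # this lineage speciates
        tips.extend([parent + ".a", parent + ".b"])      # ...into two daughter lineages
        splits += 1
    return splits

random.seed(1)
for n in (2, 64, 1000):
    print(n, "tip species ->", grow_tree(n), "speciation events")   # always n - 1
```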

Despite a lot of empirical progress, the full implications of the concept of phylogeny have yet to be appreciated in evolutionary biology and the culture at large. For example, the concept of homology, the similarity relation among organisms and their parts due to common ancestry, can only be understood in terms of phylogeny. Infectious diseases are caused by various species from different branches on the Tree of Life. Defending against them requires understanding how to slow them down without hurting ourselves, which is greatly facilitated by understanding where they and we fit in the historical network of phylogeny.

Billions of dollars of biomedical research funds are invested into a few model organisms like E. coli, yeast, round worms, fruit flies, and mice. But, like the diverse contents of your salad, these scientific results are usually not consumed with any awareness of the complex hierarchical implications of the phylogenetic context.

Perhaps the most important implication of the singular phylogenetic history of life is its contingency. Given the pervasiveness of extinction in pruning the network, our existence, or the existence of any other extant species, is only possible as a result of an unfathomable number of historically contingent events: speciation events, evolutionary changes within lineages, and survival. The history of any one branch connects to the whole, individualized history of life.

brian_christian's picture
Author, The Most Human Human; Co-author (with Tom Griffiths), Algorithms to Live By

The axolotl is a peculiar amphibian: it never undergoes metamorphosis, retaining its gills and living in water through its entire life, a kind of tadpole with feet. 

Studying the axolotl in the late 19th century, the German zoologist Julius Kollmann coined the term “neoteny” to describe this process—the retention of youthful traits into adulthood.

Neoteny has gone on to have a provocative history within biology. Evolutionary biologists throughout the twentieth century, including Stephen Jay Gould, discussed and debated neoteny as one of the mechanisms of evolution and one of the distinguishing features of Homo sapiens in particular. Compared to our fellow primates, we mature later, more slowly, and somewhat incompletely: we stay relatively hairless, with larger heads, flatter faces, bigger eyes. Human adults, that is, strongly resemble chimpanzee infants. 

(Intriguingly, our typical depiction of an even more highly evolved species than ourselves—namely, aliens—is one of enormous heads, huge eyes, tiny noses: in other words, a species even more neotenous than we are.)

Neoteny, depending on how far one wishes to extend the term beyond considerations of pure anatomy, also functions as a description of human cognitive development and human culture. A baby gazelle can outrun a predatory cheetah within several hours of being born. Humans don’t even learn to crawl for six months. We’re not cognitively mature (or allowed to operate heavy machinery) for decades. 

Indeed, humans are, at the start of our lives, among the most uniquely useless creatures in the entire animal kingdom. Paradoxically, this may be part and parcel of the dominant position we hold today, via the so-called Baldwin effect, where we blend adaptation by genetic mutation with adaptation by learning. We are, in effect, tuning ourselves to our environment in software, rather than in hardware. The upside is we can much more rapidly adapt (including genetically) to selective pressures. The downside: longer childhoods.

Human culture itself appears to progress by way of neoteny. Thirteen-year-olds used to be full-fledged adults, working the fields or joining the hunting parties. Now we grouse that “grad school is the new college,” our careers beginning ever later, after ever-lengthening periods of study and specialization.

Computer scientists speak of the “explore/exploit” tradeoff: between spending your energy experimenting with new possibilities and spending it on the surest bets you’ve found to date. One of the critical results in this area is that, in problems of this type, few things matter so much as where you find yourself on the interval of time available to you.

The odds of making a great new discovery are highest the greener you are—and the value of a great discovery is highest when you’ve got the most time to savor it. Conversely, the value of playing to your strengths, going with the sure thing, only goes up over time, both as a function of your experience and as a function of time growing scarce. This naturally puts all of us, then, on an inevitable trajectory: from play to excellence, from craving novelty to preferring what we know and love. The decision-making of the young—whether it’s who to spend time with, where to eat, or how to work and play—really should differ from the decision-making of the old. 
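A toy calculation makes the horizon effect concrete. Suppose, purely for illustration, that your known favorite restaurant has quality 0.7 on a zero-to-one scale and an untried one has unknown quality, uniformly distributed; the sketch below compares exploring once and then exploiting against simply sticking, for different numbers of remaining visits. All the numbers are invented; the crossover is the point.

```python
# When is it worth trying an unknown option instead of a known favorite of quality 0.7?
KNOWN = 0.7

def value_of_sticking(visits_left):
    return KNOWN * visits_left

def value_of_exploring_once(visits_left):
    # First visit: an unknown place has expected quality 0.5.
    # Remaining visits: keep whichever is better, the new place or the old favorite.
    expected_best = KNOWN * KNOWN + (1 - KNOWN**2) / 2   # E[max(Uniform(0,1), 0.7)] = 0.745
    return 0.5 + (visits_left - 1) * expected_best

for T in (1, 3, 6, 10, 30):
    explore, stick = value_of_exploring_once(T), value_of_sticking(T)
    choice = "explore" if explore > stick else "stick"
    print(f"{T:2d} visits left: explore={explore:6.2f}  stick={stick:6.2f}  -> {choice}")
```

With only a visit or two left, sticking wins; with a long horizon ahead, the occasional lucky discovery has time to pay off, which is exactly the asymmetry between youth and age described above.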

And yet even here there is an argument to be made for neoteny, of a kind of conscious and deliberate sort. 

To imagine ourselves as making choices not only on our own behalf, but on behalf of our peers, successors, and descendants, is to place ourselves much more squarely at the beginning of the interval, an interval much vaster than our lifetime. The longer a future we feel ourselves to be stewarding, the more we place ourselves in the youth of the race. 

This offers something of a virtuous circle. A host of results in neuroscience and psychology show that the brain appears to mark time by measuring novel events in particular. “Life is long,” we think, and the effort of exploration is worthwhile. In turn, the explorer is immune to the feeling of time speeding up as they age. The mindset is self-fulfilling. To lengthen youthfulness is to lengthen life itself.

kurt_gray's picture
Associate Professor of Psychology, University of North Carolina, Chapel Hill; Co-author (with Daniel Wegner), The Mind Club

Middle class Americans don’t live like kings; they live better than kings.

If you showed Henry VIII the average American’s living conditions, he would be awe-struck. While most Americans do not have giant castles or huge armies, we have luxuries that the royalty of yesteryear could scarcely dream about: big screen TVs and the Internet, fast cars and even faster planes, indoor plumbing and innerspring mattresses, and vastly improved medical care. Despite this incredible standard of living, most of us don’t feel like kings. Instead, we feel like paupers because of relative deprivation. 

Relative deprivation is the idea that people feel disadvantaged when they lack the resources or opportunities of another person or social group. An American living in a trailer park has an objectively high standard of living compared with the rest of the world and the long tail of human history: they have creature comforts, substantial freedom of choice, and significant safety. Nevertheless, they feel deprived because they compare their lives with those of glamorous celebrities and super-rich businessmen. Relative deprivation tells us that social and financial status is more a feeling than a fact, spelling trouble for traditional economics.

Economists largely agree that economic growth is good for everyone. In lifting the profits of corporations and the salaries of CEOs, the engine of capitalism also pulls up the lifestyle of everyone else. Although this idea is objectively true—standards of living are generally higher when the free market reigns—it is subjectively false. When everyone gets richer, no one feels better off because, well, everyone gets richer.  What people really want is to feel richer than everyone else.  

Consider an experiment done by economist Robert Frank. He asked people to choose between two worlds. In World 1, you make $110,000/year and everyone else makes $200,000; in World 2, you make $100,000/year and everyone else makes $85,000. Although people have more objective purchasing power in World 1, most people chose World 2 to feel relatively richer than others.  

The yearning for relative status seems irrational, but it makes sense from an evolutionary perspective. We evolved in small groups where relative status determined everything, including how much you could eat, and whether you could procreate. Although most Americans can now eat and procreate with impunity, we haven’t lost that gnawing sensitivity to status. If anything, our relative status is now more important. Because our basic needs are met, we have a hard time determining whether we’re doing well, and so we judge ourselves based upon our place in the hierarchy.   

Relative deprivation can make sense of many curious human behaviors, such as why exposure to the rich makes middle class people get sick and take dangerous risks. It also helps to understand the election of 2016.   

Society today is economically more powerful than it was in the 1950s, with our money buying much more. In 1950, a 13” color TV cost over $8,000 (adjusting for inflation), whereas today a 40” LCD TV costs less than $200. Despite this objective improvement, one demographic group—white men without a college diploma—has seen a substantial relative decrease in their economic position since the 1950s. It is this relatively deprived group who really wanted to Make America Great Again—not to have expensive TVs, but to relive the days when they had a greater status than other groups.

The real problem with relative deprivation is that—while it can be pushed around—it can never be truly solved. When one group rises in relative richness, another group feels worse because of it. When your neighbors get an addition or a new convertible, your house and your car inevitably look inadequate. When uneducated white men feel better, then women, professors, and people of color inevitably feel worse. Relative deprivation suggests that economic advancement is less like a rising tide and more like a see-saw.

Of course, one easy trick around relative deprivation is to change your perspective; each of us can look around for someone who is relatively less successful.  Unfortunately, there’s always someone at the very bottom and they’re looking straight up, wishing that they lived like a king.

daniel_goleman's picture
Psychologist; Author (with Richard Davidson), Altered Traits

Empathy has gotten a bad reputation of late, largely undeserved. That negative spin occurs because people fail to understand the nuanced differences between three aspects of empathy. 

The first kind, cognitive empathy, allows me to see the world through your eyes: to take your perspective and understand the mental models that make up your lens on events. The second kind, emotional empathy, means I feel what you feel; this empathy gives us an instant felt sense of the other person’s emotions.

It’s the third kind, empathic concern, that leads us to care about the other person’s welfare, to want to help them if they are in need. Empathic concern forms a basis for compassion. 

The first two, while essential for intimate connection, can also become tools in the service of pure self-interest. Marketing and political campaigns that manipulate people’s fears and hatreds require effective cognitive and emotional empathy, as do con men. And, perhaps, artful seductions.

It’s empathic concern—caring about the other person’s welfare—that puts these two kinds of empathy in the service of a greater good.

Brain research at the University of Chicago by Jean Decety and at the Max Planck Institute by Tania Singer has established that each of these varieties of empathy engages its own unique neural circuitry. Neocortical circuitry, primarily, undergirds cognitive empathy. Emotional empathy stems from the social networks that facilitate rapport and tune us into another person’s pain: my brain’s pain circuitry activates when I see you are in pain.

The problem here with emotional empathy: if your suffering makes me suffer, I can feel better by tuning out. That’s a common reaction, and a major reason so few people go down the whole arc from attunement and emotional empathy to caring and helping the person in need.

Empathic concern draws on the mammalian circuitry for parental caretaking—the love of a parent for a child. Research finds that lovingkindness meditation, in which you wish wellbeing for a circle expanding outward from yourself and your loved ones to people you know, to strangers, and finally to the entire world, boosts feelings of empathic concern and strengthens connectivity within the brain’s caretaking circuits.

On the other hand, a longitudinal study in Norway found that seven-year-olds who showed little empathic concern for their own mothers had an unusually high incidence of being jailed as felons in adulthood. 

When we think of empathy as a spur to prosocial acts, it’s empathic concern we have in mind. When we think of the cynical uses of empathy, it’s the other two that can be twisted in the service of pure self-interest. 

tania_lombrozo's picture
Professor of Psychology, UC Berkeley

At the heart of scientific thinking is the systematic evaluation of alternative possibilities. The idea is so foundational that it’s woven into the practice of science itself.  

Consider a few examples.  

Statistical hypothesis testing is all about ruling out alternatives. With a null hypothesis test, one evaluates the possibility that a result was due to chance alone. Randomized controlled trials, the gold standard for drawing causal conclusions, are powerful precisely because they rule out alternatives: they diminish the plausibility of alternative explanations for a correlation between treatment and effect. And in science classes and laboratories across the globe, students are trained to generate alternative explanations for every observation, an exercise that peer reviewers take on as a professional obligation.
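As a concrete miniature of “ruling out chance alone,” here is a minimal permutation test on made-up data: shuffle the treatment and control labels many times and ask how often chance reproduces a gap as large as the one observed.

```python
# A minimal permutation test: could the treatment-control difference be chance alone?
import random

random.seed(0)
treatment = [2.9, 3.1, 3.4, 3.0, 3.6, 3.3]   # hypothetical outcomes
control = [2.4, 2.7, 2.5, 2.9, 2.6, 2.8]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)
pooled = treatment + control

trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)                       # pretend the labels mean nothing
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if diff >= observed:
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"fraction of shuffles at least as large: {extreme / trials:.4f}")   # the p-value
```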

The systematic evaluation of alternative possibilities has deep roots in the origins of science. In the 17th century, Francis Bacon wrote about the special role of instantia crucis, or “crucial instances,” in guiding the intellect towards the true causes of nature by supporting one possibility over stated alternatives. Soon after, Robert Boyle introduced the experimentum crucis, or “crucial experiment,” a term subsequently used by Robert Hooke and Isaac Newton. A crucial experiment is a decisive test between rival hypotheses: a way to differentiate possibilities. (More than two centuries later, Pierre Duhem would reject the crucial experiment, but not because it involves evaluating alternative possibilities. He rejected crucial experiments because the alternative possibilities that they differentiate are too few: there are always additional hypotheses available for amendment, addition, or rejection.)

The systematic evaluation of alternative possibilities is a hallmark of scientific thinking, but it isn’t restricted to science. To arrive at the truth (in science or beyond), we generate multiple hypotheses and methodically evaluate how they fare against reason and empirical observation. We can’t learn without entertaining the possibility that our current beliefs are wrong or incomplete, and we can’t seek diagnostic evidence unless we specify the alternatives. Evaluating alternative possibilities is a basic feature of human thinking, a feature that science has successfully refined.

Within psychology, prompting people to consider alternative possibilities is recognized as a strategy for debiasing judgments. When prompted to consider alternatives (and in particular, to “consider the opposite” of a possibility under evaluation), people question assumptions and recalibrate beliefs. They recognize that an initial thought was misguided, a first impression uncharitable, a plan unrealistic. That such a prompt is effective suggests that in its absence, people don’t reliably consider the alternative possibilities that they should. Yet the basis for doing so was in their heads all along: an untapped potential.

Evaluating alternative possibilities ought to be better known because it’s a tool for better thinking. It’s a tool that doesn’t require fancy training or fancy equipment (beyond the fancy equipment we already contain in our heads). What it does require is a willingness to confront uncertainty and to boldly explore the space of discarded or unformulated alternatives. That’s a kind of bravery that scientists should admire.

richard_h_thaler's picture
Father of Behavioral Economics; Recipient, 2017 Nobel Memorial Prize in Economic Science; Director, Center for Decision Research, University of Chicago Graduate School of Business; Author, Misbehaving

Before a major decision is taken, say to launch a new line of business, write a book, or form a new alliance, those familiar with the details of the proposal are given an assignment. Assume we are at some time in the future when the plan has been implemented, and the outcome was a disaster. Write a brief history of that disaster. 

Applied psychologist Gary Klein came up with “The Premortem,” which was later written about by Daniel Kahneman. Of course we are all too familiar with the more common postmortem that typically follows any disaster, along with the accompanying finger pointing. Such postmortems inevitably suffer from hindsight bias, also known as Monday-morning quarterbacking, in which everyone remembers thinking that the disaster was almost inevitable. As I often heard Amos Tversky say, “the handwriting may have been written on the wall all along. The question is: was the ink invisible?”

There are two reasons why premortems might help avert disasters. (I say might because I know of no systematic study of their use. Organizations rarely allow such internal decision making to be observed and recorded.) First, explicitly going through this exercise can overcome the natural organizational tendencies toward groupthink and overconfidence. A devil’s advocate is unpopular anywhere. The premortem procedure gives cover to a cowardly skeptic who otherwise might not speak up. After all, the entire point of the exercise is to think of reasons why the project failed. Who can be blamed for thinking of some unforeseen problem that would otherwise be overlooked in the excitement that usually accompanies any new venture? 

The second reason a premortem can work is subtle. Starting the exercise by assuming the project has failed, and then thinking about why that might have happened, creates the illusion of certainty, at least hypothetically. Laboratory research shows that asking “Why did it fail?” rather than “Why might it fail?” gets the creative juices flowing. (The same principle can work in finding solutions to tough problems. Assume the problem has been solved, and then ask: how did it happen? Try it!)

An example illustrates how this can work. Suppose a couple years ago an airline CEO invited top management to conduct a premortem on this hypothetical disaster: All of our airline’s flights around the world have been cancelled for two straight days. Why? Of course, many will immediately think of some act of terrorism. But real progress will be made by thinking of much more mundane explanations. Suppose someone timidly suggests that the cause was that the reservation system crashed and the backup system did not work properly.

Had this exercise been conducted, it might have prevented a disaster for a major airline that cancelled nearly 2000 flights over a three-day period. During much of that time, passengers could not get any information because the reservation system was down. What caused this fiasco? A power surge blew a transformer and critical systems and network equipment didn’t switch over to backups properly. This havoc was all initiated by the equivalent of blowing a fuse.  

This episode was bad, but many companies that were once household names and now no longer exist might still be thriving if they had conducted a premortem with the question being: It is three years from now and we are on the verge of bankruptcy. How did this happen?

And, how many wars might not have been started if someone had first asked: We lost. How? 

buddhini_samarasinghe's picture
Molecular Biologist

The eradication of smallpox was one of the most significant achievements of modern medicine. It was possible due to an effective vaccine, coupled with global vaccination programs. Theoretically, it is possible to eradicate other diseases such as measles or polio in a similar manner; if enough of the global population could be vaccinated, then these diseases would cease to exist. We have come tantalizingly close to eradication in some cases: In 2000, the Centers for Disease Control and Prevention declared that measles had been eliminated from the United States. Sixteen years later, the Pan American Health Organization announced that measles had been eradicated from the Americas. Polio is now endemic in only three countries in the world. Infectious diseases that routinely killed young children are now preventable thanks to childhood vaccination programs. Yet despite these milestones, there have been several outbreaks of vaccine-preventable diseases in recent times. How can this be?

A significant reason for this unfortunate resurgence is that many people, despite evidence to the contrary, view vaccine efficacy and safety as a matter of opinion, rather than one based on scientific fact. This has serious consequences, not just for individuals who choose to avoid vaccines, but also for public health initiatives as a whole.  

To be effective, vaccination strategies for contagious diseases rely on a scientific concept known as "herd immunity." Herd immunity can be considered a protective shield that prevents unvaccinated people from coming into contact with the disease, thus stopping its spread. Herd immunity is particularly important for people who cannot be vaccinated, including infants, pregnant women, or immunocompromised individuals. The required level of immunization to attain benefit from herd immunity varies for each disease, and is calculated based on the infectious agent’s reproductive number—how many people each infected person goes on to infect, on average. For measles, which can cause about eighteen secondary cases for each infected person, the required level of immunization for herd immunity is about 95%. In other words, at least 95% of the entire population must be immune to prevent the spread of measles following an infection. Low vaccination levels are failing to provide protection through herd immunity, stripping one of the greatest tools in public health of its power.  
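The arithmetic behind these thresholds is the standard back-of-the-envelope formula: if each case infects R0 others in a fully susceptible population, spread stalls once more than 1 - 1/R0 of the population is immune. The R0 values below are rough, commonly quoted figures, used here only to reproduce the ballpark numbers in this essay.

```python
# Herd-immunity thresholds from the reproductive number R0 (illustrative, rounded values).
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

for disease, r0 in [("measles", 18), ("polio", 6), ("seasonal flu", 2)]:
    print(f"{disease:12s} R0 ~ {r0:2d}: at least {herd_immunity_threshold(r0):.0%} must be immune")
# measles ~94% (the ~95% above), polio ~83%, flu ~50% ("only half the population")
```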

Vaccines work by imitating an infection, thereby helping the body’s own defences to be prepared in case of an actual infection. Unfortunately, no vaccine is 100% effective, and the immunity provided by vaccines can wane over time; these facts are often cited by anti-vaccine activists in an effort to discredit the entire concept of vaccination. But even waning immunity is better than no immunity; for example, the smallpox vaccine was generally thought to be effective for seven to ten years, but a recent analysis showed that even individuals who were vaccinated up to thirty-five years ago would still have substantial resistance to a smallpox infection. It is undeniable that vaccines can still offer protection in the event of an outbreak, and herd immunity helps prevent the spread of disease.  

The concept of herd immunity also applies to the annual flu vaccine. Unlike vaccines for measles or polio, the flu vaccine needs to be given every year because the influenza virus evolves rapidly. And because the influenza virus is not as infectious as measles, only half the population needs to be immune to prevent the spread of disease. Herd immunity protects us from the common circulating variations of the flu while the annual vaccine will protect us from the new versions that have “escaped” the existing immune response. Without a vaccine and herd immunity, a far greater number of people would be infected each year with the flu.  

Vaccines are one of the greatest successes of public health. They have helped us conquer diseases such as smallpox and polio, helping us live longer, healthier, more productive lives. And yet, because of decreasing levels of vaccination, the threshold required to provide protection through herd immunity becomes unattainable; as a result, previously eradicated diseases are starting to reappear. A vaccine can be seen as an act of individual responsibility, but it has a tremendous collective impact. Vaccination on a large scale not only prevents disease in an individual, but also helps protect the vulnerable in a population. To convince the general public of its necessity and encourage more people to get vaccinated, the concept of herd immunity must be more widely understood. 

martin_lercher's picture
Professor of Computational Cell Biology at Heinrich Heine University, Düsseldorf; Co-author (with Itai Yanai), The Society of Genes

Life, as we know it, requires some degree of stability—both internally and externally. Consider a bacterium like E. coli that needs to maintain its internal copper concentrations within a narrow range: too much copper would kill the cell, while too little would impede important metabolic functions that rely on copper atoms as catalytic centers of enzymes. Keeping copper concentrations within the required range—copper homeostasis—is achieved through a negative feedback loop: the bacterium possesses internal sensors that react to sub-optimal copper levels by changing the production rate of proteins that pump copper out of the cell. This feedback system has its limits, though, and most bacteria succumb to too much copper in their environment—storing water in copper containers is an age-old strategy to keep it fresh.  
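
The logic of such a loop is easy to caricature in a few lines of code. The sketch below is purely illustrative (the influx, set point, and gain are invented numbers, not measured ones): a sensor compares the current copper level to a set point, pump activity responds to the excess, and the level settles where export balances influx.

```python
# Toy negative-feedback loop for copper homeostasis. All quantities are
# invented for illustration; real copper regulation is far more elaborate.
influx = 0.2       # copper leaking into the cell each time step
set_point = 1.0    # the internal level the sensor "wants"
gain = 0.4         # how strongly pump activity responds to excess copper

copper = 3.0       # start far above the set point
for step in range(12):
    export = max(0.0, gain * (copper - set_point))  # sensor drives the pumps
    copper += influx - export                       # net change this step
    print(f"step {step:2d}  copper = {copper:.2f}")
# Copper falls toward the level where export balances influx (here about 1.5),
# the characteristic behavior of a simple proportional negative-feedback loop.
```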

Human cells not only need elaborate systems to achieve internal homeostasis of many types of molecules; they also require a precisely tuned environment. Our cells can count on a working temperature of close to 37°C, measured by thermometers in the brain and maintained through behavior (e.g., wardrobe adjustments) as well as through the regulation of blood flow to the limbs and through sweating. Our cells can also rely on a constant supply of nutrients through the blood stream, including glucose, measured in the pancreas and regulated through insulin secretion, and oxygen, measured in the major blood vessels and the kidneys and maintained through adjustments of the activity of breathing muscles and the production of red blood cells. Again, homeostasis is achieved through negative feedback loops: in response to measured deviations from a desired level, our body initiates responses that move us back toward the target value. 

That homeostasis plays a major role in human health was already recognized by the ancient Greek philosophers. Hippocrates believed that health represented a harmonious balance of the “elements” that made up the human body, while disease was a state of systematic imbalance. For many important diseases, this view indeed provides an accurate description. In type 1 diabetes, for example, the pancreas cells responsible for the blood glucose level measurements are destroyed, and the homeostasis system breaks down. Chronic diseases, on the other hand, are often initially compensated by homeostatic systems; e.g., anemia caused by an accelerated breakdown of red blood cells can be compensated through an increased production of these cells, as long as the body possesses enough raw materials for this enterprise. 

Complex systems can hardly be stable without at least some level of homeostasis. The earth’s biosphere is a prime example. Surface temperatures and atmospheric CO2 levels are both affected by biological activities. Higher atmospheric CO2 partial pressure leads to increased plant growth, causing an increased consumption of CO2 and thus maintaining homeostasis. Higher temperatures cause increased phytoplankton growth in the oceans that produces airborne gases and organic matter seeding cloud droplets; more and denser clouds, in turn, lead to an increased reflection of sunlight back into space and thus contribute to temperature homeostasis. These systems also have their limits, though; like an E. coli bacterium with too much copper, our planet’s homeostasis systems on their own may be unable to overcome the current onslaught of human activities on global temperatures and CO2 levels.

margaret_levi's picture
Sara Miller McCune Director, Center For Advanced Study in Behavioral Sciences, professor, Stanford University; Jere L. Bacharach Professor Emerita of International Studies, University of Washington

For societies to survive and thrive, some significant proportion of their members must engage in reciprocal altruism. All sorts of animals, including humans, will pay high individual costs to provide benefits for a non-intimate other. Indeed, this kind of altruism plays a critical role in producing cooperative cultures that improve a group’s welfare, survival, and fitness. 

The initial formulation of reciprocal altruism focused on a tit-for-tat strategy in which the altruist expected a cooperative response from the recipient. Game theorists posit an almost immediate return (albeit iterated), but evolutionary biologists, economists, anthropologists, and psychologists tend to be more concerned with returns over time to the individual or, more interestingly, to the collective.  

Evidence is strong that for many human reciprocal altruists the anticipated repayment is not necessarily for the person who makes the initial sacrifice or even for their family members. By creating a culture of cooperation, the expectation is that sufficient others will engage in altruistic acts as needed to ensure the well being of those within the boundaries of the given community. The return to such long-sighted reciprocal altruists is the establishment of norms of cooperation that endure beyond the lifetime of any particular altruist. Gift-exchange relationships documented by anthropologists are mechanisms for redistribution to ensure group stability; so are institutionalized philanthropy and welfare systems in modern economies.   

At issue is how giving norms evolve and help preserve a group. Reciprocal altruism—be it with immediate or long-term expectations—offers a model of appropriate behavior, but, equally importantly, it sets in motion a process of reciprocity that defines expectations of those in the society. If the norms become strong enough, those who deviate will be subject to punishment—internal in the form of shame and external in the form of penalties ranging from verbal reprimand to torture, confinement, or banishment from the group.    

Reciprocal altruism helps us understand the creation of ethics and norms in a society, but we still need to more clearly understand what initiates and sustains altruistic cooperation over time. Why would anyone be altruistic in the first place? Without some individually based motivations, far too few would engage in cooperative action. It may be that a few highly moralistic individuals are key; once there is a first mover willing to pay the price, others will follow as the advantages become clearer or the costs they must bear are lowered.  

Other accounts suggest that giving can be motivated by a reasonable expectation of reciprocity or rewards over time. Other factors also support reciprocal altruism, such as the positive emotions that can surround the act of giving, the lesson Scrooge learned. Most likely, there is a combination of complementary motivations: Both Gandhi and Martin Luther King were undoubtedly driven by moral principles and outrage at the injustices they perceived, but they also gained adulation and fame. Less famous examples abound of sacrifice, charity, and costly cooperation—some demanding recognition, some not.     

For long-sighted reciprocal altruism to be sustained in a society ultimately requires an enduring framework for establishing principles and ethics. Reciprocal altruism is reinforced by a culture that has norms and rules for behavior, makes punishment legitimate for deviations, and teaches its members its particular ethics of responsibility and fairness. If the organizational framework of the culture designs appropriate incentives and evokes relevant motivations, it will ensure a sufficient number of reciprocal altruists for survival of the culture and its ethics.  

Long-sighted reciprocal altruism is key to human cooperation and to the development of societies in which people take care of each other. However, there is huge variation in who counts in the relevant population and what they should receive as gifts and when. The existence of reciprocal altruism does not arbitrate these questions. Indeed, the expectation of reciprocity can reduce and even undermine altruism. It may limit gift giving only to the in-group where such obligations exist. Perhaps if we stay only in the realm of group fitness (or, for that matter, tribalism), such behavior might still be considered ethical. But if we are trying to build an enduring and encompassing ethical society, tight boundaries around deserving beneficiaries of altruistic acts become problematic. If we accept such boundaries, we are quickly in the realm of wars and terrorism in which some populations are considered non-human or, at least, non-deserving of beneficence.   

The concept of reciprocal altruism allows us to explore what it means to be human and to live in a humane society. Recognition of the significance of reciprocal altruism for the survival of a culture makes us aware of how dependent we are on each other. Sacrifices and giving, the stuff of altruism, are necessary ingredients for human cooperation, which itself is the basis of effective and thriving societies. 

robert_kurzban's picture
Psychologist, UPenn; Director, Penn Laboratory for Experimental Evolutionary Psychology (PLEEP); Author, Why Everyone (Else) is a Hypocrite

The intuitively clear effects of a tariff—a tax on goods or services entering a country—are that it helps domestic producers of the good or service and harms foreign producers. An American tax on Chinese tires, say, transparently helps American tire producers because the prices of the tires of their foreign competitors will be higher, allowing American producers to compete more easily.  

Those are the intuitively clear effects. These effects—advantages for domestic firms, disadvantages for foreign firms—are useful for politicians to emphasize when they are contemplating tariffs and other forms of protectionism because they appeal to nationalistic, competitive intuitions.  

However, not all ideas surrounding international trade are so intuitive. Consider these remarks by economist Paul Krugman: “The idea of comparative advantage—with its implication that trade between two nations normally raises the real incomes of both—is, like evolution via natural selection, a concept that seems simple and compelling to those who understand it. Yet anyone who becomes involved in discussions of international trade beyond the narrow circle of academic economists quickly realizes that it must be, in some sense, a very difficult concept indeed.” 

Comparative advantage is an important idea, but is indeed difficult to grasp. To try to illustrate the point, consider the economist’s device of simplifying matters to see the underlying point. You and I are on a desert island, harvesting coconuts and catching fish to survive. You need one hour to harvest one coconut and two hours to catch one fish. (For this example, I’ll assume you can meaningfully divide one fish [or one coconut] into fractions.) I, being old and no fisherman, am less efficient than you at both activities; I require two hours per coconut and six hours per fish. It might seem that you need not trade with me; after all, you’re better than I am at both fishing and harvesting coconuts. But the economist David Ricardo famously showed this isn’t so.  

Consider two cases. In the first, we don’t trade. You produce, say, four coconuts and two fish during an eight-hour workday. I produce just one of each.  

For the second case, suppose we specialize and trade, and you agree to give me a fish in exchange for 2.5 coconuts. (This is a good deal for both of us. For you, catching one less fish gets you two hours or two coconuts; 2.5 is better. For me, that fish saves me six hours, or three coconuts.)  Now you produce one extra fish and give it to me in trade, leaving you with two fish and 4.5 coconuts. For ease of exposition, let’s say I only harvest coconuts, produce four and give you 2.5 of them, leaving me 1.5 coconuts and the fish I got from you. In this second case, you have just as many fish and an extra half coconut; I also have the same number of fish—one—plus the half coconut. Through the magic of trade, the world is one coconut better off, split between the two of us.  

The lesson is that even though I produce both goods less efficiently than you, we are still both made better off when we specialize in the good for which we have a comparative advantage and then trade. This is an argument for a role for governments in facilitating, rather than inhibiting, specialization and trade. It should be clear, for instance, that if the island government forced me to pay them an extra coconut every time I purchased one of your fish—driving the coconut price to me from 2.5 to 3.5 coconuts—I wind up with only half a coconut and a fish after the trade, and so would prefer to revert to the first case in which I split my time. In turn, you lose your trading partner, and similarly revert to the previous case.  
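
For readers who like to check the bookkeeping, here is a small sketch that simply replays the numbers above: the hours per unit and the 2.5-coconut price are taken straight from the example, and the 3.5-coconut case represents the hypothetical one-coconut island tariff.

```python
# Re-check the desert-island arithmetic. The hours per unit and the
# 2.5-coconut price come from the example above; the 3.5-coconut case is
# what the fish costs me once the island government takes a 1-coconut tariff.

def autarky():
    # No trade, 8-hour days: you make 4 coconuts (4h) and 2 fish (4h);
    # I make 1 coconut (2h) and 1 fish (6h).
    return {"coconuts": 4.0, "fish": 2}, {"coconuts": 1.0, "fish": 1}

def specialize_and_trade(paid_by_me=2.5, received_by_you=2.5):
    you = {"coconuts": 2.0, "fish": 3}   # 2*1h + 3*2h = 8h, tilted toward fish
    me  = {"coconuts": 4.0, "fish": 0}   # 4*2h = 8h, coconuts only
    you["fish"] -= 1; me["fish"] += 1    # one fish changes hands
    me["coconuts"] -= paid_by_me         # what the fish costs me
    you["coconuts"] += received_by_you   # what actually reaches you
    return you, me

print(autarky())                              # you: 4 coconuts, 2 fish; me: 1 and 1
print(specialize_and_trade())                 # you: 4.5 coconuts, 2 fish; me: 1.5 and 1
print(specialize_and_trade(paid_by_me=3.5))   # tariff: I keep only 0.5 coconut and stop trading
```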

These gains reaped from world trade are less intuitive than the patriotic gains reaped by domestic firms protected—and therefore helped—by tariffs and other trade barriers. For this reason, given the outsize role that questions surrounding world trade play in current political discourse, the notion of comparative advantage ought to be more widely known.  

john_tooby's picture
Founder of field of Evolutionary Psychology; Co-director, Center for Evolutionary Psychology, Professor of Anthropology, UC Santa Barbara

Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. (Although the concept of coalitional instincts has emerged over recent decades, there is no mutually-agreed-upon term for this concept yet.) These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. Coalitions are sets of individuals interpreted by their members and/or by others as sharing a common abstract identity (including propensities to act as a unit, to defend joint interests, and to have shared mental states and other properties of a single human agent, such as status and prerogatives).  

Why do we see the world this way? Most species do not and cannot. Even those that have linear hierarchies do not. Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.  

Ancestrally, evolving the neural code to crack these problems supercharged the ability to successfully compete for access to reproductively limiting resources. Fatefully, we are descended solely from those better equipped with coalitional instincts. In this new world, power shifted from solitary alphas to the effectively coordinated down-alphabet, giving rise to a new, larger landscape of political threat and opportunity: rival groups or factions expanding at your expense or shrinking as a result of your dominance.  

And so a daunting new augmented reality was neurally kindled, overlying the older individual one. It is important to realize that this reality is constructed by and runs on our coalitional programs and has no independent existence. You are a member of a coalition only if someone (such as you) interprets you as being one, and you are not if no one does. We project coalitions onto everything, even where they have no place, such as in science. We are identity-crazed. 

The primary function that drove the evolution of coalitions is the amplification of the power of its members in conflicts with non-members. This function explains a number of otherwise puzzling phenomena. For example, ancestrally, if you had no coalition you were nakedly at the mercy of everyone else, so the instinct to belong to a coalition has urgency, preexisting and superseding any policy-driven basis for membership. This is why group beliefs are free to be so weird. Since coalitional programs evolved to promote the self-interest of the coalition’s membership (in dominance, status, legitimacy, resources, moral force, etc.), even coalitions whose organizing ideology originates (ostensibly) to promote human welfare often slide into the most extreme forms of oppression, in complete contradiction to the putative values of the group. Indeed, morally wrong-footing rivals is one point of ideology, and once everyone agrees on something (slavery is wrong) it ceases to be a significant moral issue because it no longer shows local rivals in a bad light. Many argue that there are more slaves in the world today than in the 19th century. Yet because one’s political rivals cannot be delegitimized by being on the wrong side of slavery, few care to be active abolitionists anymore, compared to being, say, speech police. 

Moreover, to earn membership in a group you must send signals that clearly indicate that you differentially support it, compared to rival groups. Hence, optimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one’s group’s shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities. 

This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one's friends, and one's cherished group identity. This freezes belief revision.  

Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally.  No one is behaving either ethically or scientifically who does not make the best case possible for rival theories with which one disagrees. 

michael_hochberg's picture
Evolutionist, CNRS, Santa Fe Institute, Institute for Advanced Study Toulouse

Herbert Simon contributed importantly to our understanding of a number of problems in a wide array of disciplines, one of which is the notion of achievement. Achievement depends not only on ability and the problem at hand (including information available and the environment), but also on one’s motivation and targets. Two of Simon’s many insights were how much effort it takes to become an “expert” at an endeavor requiring a special skill (using chess as a model, the answer is—very approximately—10,000 hours), and how objectives are accomplished: whether individuals maximize, optimize, or rather accept or even seek an apparently lesser outcome, that is, satisfice.  

Satisficing recognizes constraints on time, capacity and information, and the risk and consequences of failure. In Simon’s own words from his 1956 paper on the subject: “…the organism, like those of the real world, has neither the senses nor the wits to discover an 'optimal' path—even assuming the concept of optimal to be clearly defined—we are concerned only with finding a choice mechanism that will lead it to pursue a 'satisficing' path, a path that will permit satisfaction at some specified level of all of its needs.”  
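
A toy sketch of the contrast, under invented numbers: a maximizer inspects every option and pays a small search cost for each look, while a satisficer stops at the first option that clears an aspiration level. The threshold, the search cost, and the option values are all assumptions made purely for illustration.

```python
import random

# Maximizing versus satisficing over a stream of options. The option values,
# the search cost, and the aspiration level are all invented for illustration.
random.seed(1)
options = [random.random() for _ in range(1000)]
COST_PER_LOOK = 0.0005   # assumed cost of examining one option
ASPIRATION = 0.9         # the satisficer's "good enough" threshold

best = max(options)
maximizer_payoff = best - COST_PER_LOOK * len(options)   # must inspect everything

looks = next(i for i, v in enumerate(options, start=1) if v >= ASPIRATION)
satisficer_payoff = options[looks - 1] - COST_PER_LOOK * looks

print(f"maximizer : value {best:.3f}, payoff {maximizer_payoff:.3f}")
print(f"satisficer: value {options[looks - 1]:.3f} after {looks} looks, "
      f"payoff {satisficer_payoff:.3f}")
```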

Will a marginal increase in effort result in an acceptable increase in achievement? It is evidently unusual to be so calculating when deciding how to commit to an endeavor, competitive or not. In a non-competitive task, such as reading a book, we may be time limited or constrained by background knowledge. Beyond completing the task, there is no objective measure of achievement. At the other extreme, for example competing in the 100m dash, it’s not only about victory and performance relative to other contestants in the race itself, but also about outcomes relative to past and even future races. Effort is maximized from start to finish. 

Obviously, many endeavors are more complex than these and do not easily lend themselves to questioning the default objectives of maximization or optimization. Evolutionary thinking is a useful framework in this regard for gaining a richer understanding of the processes at work. Consider the utility of running speed for a predator (me) trying to catch a prey item. I could muster that extra effort to improve on performance, but at what cost? For example, if I were to run as fast as I possibly could after a prey item, then, should I fail, not only would I miss my dinner, but I might also need to wait to recover the energy to run again. In running at full speed over uneven terrain, I would also risk injury, and could become dinner for another predator. 

But endeavors not only have risks; they also have constraints, or what evolutionists refer to as “tradeoffs.” Tradeoffs, such as between running speed and endurance, may appear simple, but imagine trying to adapt running speed to preserve endurance so as to catch any prey item seen, regardless of prospects of actually catching and subduing it. It’s likely that time and effort are wasted, resulting in insufficient numbers of prey caught overall; but it is also possible that more are caught than needed, meaning less time spent on other important tasks. A satisficer only chases after enough of the easier-to-catch prey to satisfy basic needs and so can spend more time on other useful tasks.  

I believe that satisficing should be more widely known, because it is a different way of looking at nature in general, and at certain facets of human endeavor. More, higher, faster is better only up to a point, and perhaps only in a small number of contexts. Indeed, it is a common misconception that natural selection optimizes or maximizes whatever it touches—the evolutionary mantra “survival of the fittest” can be misleading. Rather, the evolutionary process tends to favor more fit genetic alternatives, and the capacity to perform will vary between individuals. Winners will sometimes be losers and vice versa. Most finish somewhere in-between, and for some, this is success. 

We humans will increasingly satisfice because our environments are becoming ever richer, more complex and more challenging to process. Some may fear that satisficing will create a world of laziness, apathy, sub-standard performance, and economic stagnation. On the contrary, if norms in satisficing embody certain standards, then satisficing could lead to a ratcheting-up of individual wellbeing and social stability, and contribute to sustainability.

daniel_hook's picture
CEO, Digital Science

When Dirac formulated the postulates of quantum theory, he required Hermiticity to be the fundamental symmetry for his equations. For Dirac, the requirement of Hermiticity was the mathematical device that he needed to ensure that all predictions for the outcomes of real-world measurements of quantum systems resulted in a real number. This is important since we only observe real outcomes in actual experimental observations. Dirac’s choice of Hermiticity as the fundamental symmetry of quantum theory was not seriously challenged for around seventy years.  

Hermiticity is a subtle and abstract symmetry that is mathematical in its origin. Broadly speaking, the requirement of Hermiticity imposes a boundary on a system. This is an idealization in which a system is isolated from any surrounding environment (and hence cannot be measured). While this gives a tractable mathematical framework for quantum theory, it is an unphysical requirement since all systems interact with their environment and if we wish to measure a system then such an interaction is required. 

In 1998, Carl Bender and Stefan Boettcher wrote a paper exploring the replacement of Hermiticity with another symmetry. They showed that they could replace the mathematically motivated symmetry of Dirac by a physically motivated symmetry preserving the reality of experimental outcomes. Their new theory, however, had interesting new features—it was not a like-for-like replacement. 

The underlying symmetry that Bender and Boettcher found was what they called “PT symmetry.” The symmetry here is geometric in nature and is hence closer to physics than is Hermiticity. The “P” stands for “parity” symmetry, sometimes called mirror symmetry. If a system respects “P” symmetry, then the evolution of the system would not change for a spatially reflected version of the system. The “T” stands for “time-reversal.” Time-reversal symmetry is just as it sounds—a physical system respecting this symmetry would evolve in the same way regardless of whether time runs forward or backward. Some systems do individually exhibit P and T symmetries, but it is the combination of the two that seems to be fundamental to quantum theory.  
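
One compact way to write this down is the original Bender and Boettcher example: a Hamiltonian that is not Hermitian yet is left unchanged by the combined PT operation, and whose spectrum nevertheless turns out to be entirely real. The notation below uses the standard textbook actions of P and T.

```latex
% The canonical PT-symmetric example (Bender and Boettcher, 1998):
% H is not Hermitian, but it commutes with the combined PT operation.
\[
  \hat{H} \;=\; \hat{p}^{\,2} + i\,\hat{x}^{3},
  \qquad
  \bigl[\hat{H},\,\mathcal{P}\mathcal{T}\bigr] = 0 .
\]
% Parity and time reversal act as
\[
  \mathcal{P}:\ \hat{x}\to-\hat{x},\quad \hat{p}\to-\hat{p};
  \qquad
  \mathcal{T}:\ \hat{x}\to\hat{x},\quad \hat{p}\to-\hat{p},\quad i\to-i .
\]
% Under the combined map (x -> -x, p -> p, i -> -i) the term i x^3 is sent
% back to itself, and the spectrum of H turns out to be entirely real and
% positive even though H is not Hermitian.
```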

Instead of describing a system in isolation, PT symmetry describes a system that is in balance with its environment. Energy may flow in and out of the system, and hence measurements can be made within the theoretical framework of a system described by a PT symmetry. The requirement is that the same amount of energy that flows in must also flow out of the system.   

This subtler definition of a system’s relationship with its environment, provided by PT symmetry, has made it possible to describe a much wider class of systems in mathematical terms. This has led not only to an enhanced understanding of these systems but also to experimental results that support the choice of PT as the underlying symmetry in quantum mechanics. Several physical models for specific systems that had previously been studied and rejected, because they did not respect Hermiticity, have been re-examined and found to be PT symmetric.  

It is remarkable that the study of PT symmetry has progressed so rapidly. For many areas of theoretical physics, the time-lag between theory and experiment is now on the order of several decades. We may never be able to fully test string theory and experimental verification of the fifty-year-old theory of supersymmetry remains elusive. 

In the eighteen years since Bender and Boettcher’s 1998 paper, experimentalists have created PT lasers, PT superconducting wires, PT NMR and PT diffusion experiments to mention just a few validations of their theory.  As PT symmetry has matured, it has inspired the creation of exotic metamaterials that have properties that allow us to control light in new ways. The academic community, initially skeptical of such a fundamental change in quantum theory, has warmed to the idea of PT symmetry. Over 200 researchers from around the world have published scholarly papers on PT symmetry. The literature now extends to more than 2000 articles, many in top journals such as Nature, Science and Physical Review Letters.  

The future is bright for PT-symmetric quantum mechanics, but there is still work to be done. Many of the experiments mentioned have quantum mechanical aspects but are not full verifications of PT quantum mechanics. Nevertheless, existing experiments are already leading to exciting results. PT is a hot topic in the optics and graphene communities and the idea of creating a computer based on optical rather than electronic principles has recently been suggested. At the beginning of the 21st century, we are finding a new understanding of quantum theory that has the potential to unlock new technologies in the same way that semi-conductor physics was unlocked by the rise of quantum mechanics one hundred years ago.

michael_i_norton's picture
Harold M. Brierley Professor of Business Administration, Director of Research, Harvard Business School; Co-author (with Elizabeth Dunn), Happy Money

Arguments over which species makes for the best pet are deeply unproductive: Clearly, those with different views are deeply misguided. Turtles, say, are easy: they sit around, chewing slowly. Dogs, say, are difficult: they run around, chewing rapidly. But there is a hidden benefit to some choices, unbeknownst to their owners. It’s the fact that turtles are passive and dogs active that’s the key. The dog, it turns out, needs to go for walks—and so dog owners get a little bit of exercise every day. And, what is the dog’s absolute favorite activity in the world? Meeting (and sniffing) other dogs, who happen to be attached to humans via their leashes—and so dog owners get a little socializing every day as well. Research shows that getting a little exercise and chatting with strangers contributes to our well-being. Now, we could just decide: I’m going to go for a walk and chat with new people today. And repeat that to ourselves as we press play on another episode of Breaking Bad. But because dogs importune us with, well, puppy dog eyes, they prompt us in a way that we are unable to prompt ourselves.   

Dogs beat turtles because they serve as commitment devices—decisions we make today that bind us to be the kind of person we want to be tomorrow. (The most famous example is Odysseus tying himself to the mast to resist the lure of the sirens—he wanted to hear today, but not be shipwrecked tomorrow.) 

Researchers have documented a wide array of effective commitment devices. In one study, would-be exercisers were granted free access to audio versions of trashy novels—the kind they might usually feel guilty about. The commitment device? They were only allowed to listen while exercising at the gym, which increased their subsequent physical activity. In another, shoppers who qualified for a 25% discount on their groceries were given the chance to make their discount contingent on committing to increase their purchase of healthy food by 5%; not only did many commit to “gamble” their discount, but the gamble paid off in actual healthier purchasing. And commitments can even seem irrational. People will agree to sign up for savings accounts that do not allow any money to be withdrawn, for any reason, for long periods of time; they will even sign up for accounts that not only offer zero interest, but charge massive penalties for any withdrawals. Committing to such accounts makes little sense economically, but perfect sense psychologically: People are seeking commitment devices to bind themselves to save. 

The decision of which pet to choose seems trivial in comparison to health and finances, but it suggests the broad applicability of commitment devices in everyday life. Thinking of life as a series of commitment devices, of not just wanting to be your ideal self tomorrow but designing your environment to commit yourself to it, is a critical insight from social science. In a sense, most relationships can be seen as commitment devices. Siblings, for example, commit us to experiencing decades-long relationships (for better and for worse, and whether we like it or not). Want to better understand different political viewpoints? You could pretend you are going to read Ayn Rand and Peter Singer—or you can drag yourself to Thanksgiving with the extended family. Want to spend more time helping others? You could sign up to volunteer, and then never show—or you can have a baby, whose importuning skills trump even puppies. And finally, want to avoid pointless arguments? This one isn’t a commitment device, just advice: never discuss pet preferences. 

matthew_o_jackson's picture
Professor of Economics, Stanford University, Santa Fe Institute, CIFAR

No, homophily has nothing to do with sexual orientation. In the 1950s a pair of sociologists, Paul Lazarsfeld and Robert Merton, coined the term homophily to refer to the pervasive tendency of humans to associate with others who are similar to themselves.    

Even if you do not know homophily by name, it is something you have experienced throughout your life. In whatever elementary school you went to, in any part of the world, girls tended to be friends with girls, and boys with boys. If you went to a high school that had people of more than one ethnicity, then you saw it there. Yes, you may have been friends with someone of another ethnicity, but such friendships are the exception rather than the rule. We see strong homophily by age, ethnicity, language, religion, profession, caste, and income level. 

Homophily is not only instinctual—just watch people mingle at any large social event in which they are all strangers—it also makes sense for many reasons. New parents learn from talking with other new parents, and help take care of each other’s children. People of the same religion share beliefs, customs, holidays, and norms of behavior.  By the very nature of any workplace, you will spend most of your day interacting with people in the same profession and often in the same sub-field.  Homophily also helps us navigate our networks of connections. If you need to ask about a doctor’s reputation, which one of your friends would you ask? Someone in the healthcare industry, of course, as they would be the most likely of your friends to know the doctor, or know someone who knows the doctor. Without homophily you would have no idea of whom to ask. 

As simple and familiar as it is, homophily is very much a scientific concept: It is measurable and has predictable consequences. In fact, it is so ubiquitous that it should be thought of as a fundamental scientific concept. But it is the darker side of homophily that makes it such an important scientific concept.   
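
As a hedged illustration of "measurable": one simple approach compares the share of same-group ties in a network to what random mixing would predict. The tiny friendship list below is invented; the point is only the shape of the calculation.

```python
from itertools import combinations

# Toy homophily measurement on an invented friendship network.
group = {"a": "X", "b": "X", "c": "X", "d": "Y", "e": "Y", "f": "Y"}
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("e", "f"), ("c", "d")]

observed = sum(group[u] == group[v] for u, v in edges) / len(edges)

# Baseline: the share of same-group pairs if ties formed at random.
pairs = list(combinations(group, 2))
baseline = sum(group[u] == group[v] for u, v in pairs) / len(pairs)

print(f"observed same-group share: {observed:.2f}")   # 0.83
print(f"random-mixing baseline:    {baseline:.2f}")   # 0.40
# The further the observed share sits above the baseline, the stronger the homophily.
```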

As the world struggles with inequality and immobility, we can debate how large a role the accumulation of capital or political regimes play, but we miss a primary constraint on mobility if we ignore homophily. To understand why many American youths join gangs, and so many end up shot or in jail before their twenty-fifth birthday, one only has to look at what they observe and experience from a young age. If we want to understand why universities like Stanford, Harvard, and MIT have more than twenty times more students from the top quarter of the income distribution than the bottom quarter of the distribution, homophily is a big part of the answer. High school students in poor neighborhoods often have little idea of the financial aid available to them, or what the benefits to higher education really are, or even what higher education really is. By the time they talk to a high school counselor who might have a few answers, it is much too late. Homophily affects the way that their parents have raised them, the culture that they experience, the role models they see, the beliefs that they have, the opportunities that come their way, and ultimately the expectations they have for their lives.    

Although we are all familiar with homophily, thinking of it as a scientific, measurable phenomenon may help it become a bigger part of the discourse on how we can increase mobility and decrease inequality around the world. Solving such problems requires understanding how persistent segregation by income and ethnicity prevents information and opportunities from reaching those who need them most. Homophily lies at the root of many social and economic problems, and understanding it can help us better address the many issues that societies around the globe face, from inequality and immobility to political polarization.  

jessica_flack's picture
Professor, Director of the Collective Computation Group, Santa Fe Institute

In physics a fine-grained description of a system is a detailed description of its microscopic behavior. A coarse-grained description is one in which some of this fine detail has been smoothed over. 

Coarse-graining is at the core of the second law of thermodynamics, which states that the entropy of the universe is increasing. As entropy, or randomness, increases there is a loss of structure. This simply means that some of the information we originally had about the system is no longer useful for making predictions about the behavior of the system as a whole. To make this more concrete, think about temperature. 

Temperature reflects the average kinetic energy of the particles in a system. Temperature is a coarse-grained representation of all of the particles’ behavior–the particles in aggregate. When we know the temperature we can use it to predict the system’s future state better than we could if we actually measured the speed of individual particles. This is why coarse-graining is so important–it is incredibly useful. It gives us what is called an effective theory. An effective theory allows us to model the behavior of a system without specifying all of the underlying causes that lead to system state changes. 
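
A small sketch of that collapse from fine-grained to coarse-grained description, using the standard kinetic-theory relation between mean squared speed and temperature for an ideal monatomic gas; the particle mass and the spread of velocities are invented for illustration.

```python
import random

# Coarse-graining a gas: many microscopic velocities are collapsed into one
# macroscopic number. The particle mass and velocity spread are invented;
# the relation T = m <v^2> / (3 k_B) is the standard kinetic-theory link for
# an ideal monatomic gas.
K_B = 1.380649e-23    # Boltzmann constant, J/K
MASS = 6.6e-27        # roughly the mass of a helium atom, kg

random.seed(0)
velocities = [(random.gauss(0, 1300), random.gauss(0, 1300), random.gauss(0, 1300))
              for _ in range(100_000)]   # m/s: the fine-grained description

mean_sq_speed = sum(vx*vx + vy*vy + vz*vz for vx, vy, vz in velocities) / len(velocities)
temperature = MASS * mean_sq_speed / (3 * K_B)   # the coarse-grained description

print(f"{len(velocities)} velocities reduced to one number: T = {temperature:.0f} K")
```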

It is important to recognize that a critical property of a coarse-grained description is that it is “true” to the system, meaning that it is a reduction or simplification of the actual microscopic details. When we give a coarse-grained description we do not introduce any outside information. We do not add anything that isn’t already in the details. This “lossy but true” property is one factor that distinguishes coarse-graining from other types of abstraction. 

A second property of coarse-graining is that it involves integrating over component behavior. An average is a simple example but more complicated computations are also possible. 

Normally when we talk of coarse-graining, we mean coarse-grainings that we as scientists impose on the system to find compact descriptions of system behavior sufficient for good prediction. In other words, coarse-graining helps the scientist identify the relevant regularities for explaining system behavior. 

However, we can also ask how adaptive systems identify (in evolutionary, developmental, or learning time) regularities and build effective theories to guide decision making and behavior. Coarse-graining is one kind of inference mechanism that adaptive systems can use to build effective theories. To distinguish coarse-graining in nature from coarse-graining by scientists, we refer to coarse-graining in nature as endogenous coarse-graining. 

Because adaptive systems are imperfect information processors, coarse-graining in nature is unlikely to be a perfect or “true” simplification of the microscopic details, as it is in the physics sense. It is also worth noting that coarse-graining in nature is complicated by the fact that in adaptive systems it is often a collective process performed by a large number of semi-independent components. One of many interesting questions is whether the subjectivity and error inherent in biological information processing can be overcome through collective coarse-graining.  

In my view two key questions for 21st-century biology are how nature coarse-grains and how the capacity for coarse-graining influences the quality of the effective theories that adaptive systems build to make predictions. Answering these questions might help us gain traction on some traditionally quite slippery philosophical questions. Among these, is downward causation “real” and are biological systems law-like?

itai_yanai's picture
Director, Institute for Computational Medicine; Professor, Biochemistry and Molecular Pharmacology, New York University School of Medicine; Co-author (with Martin Lercher), The Society of Genes

Cancer seems inscrutable. It has been variously described as a disease of the genome, a result of viral infection, a product of misbehaving cells, a change in metabolism, and cell signaling gone wrong. Like the eight blind people touching different parts of an elephant, these all indeed describe different aspects of cancer. But the elephant in the room is that cancer is evolution.   

Cancer is a form of evolution within our body, the “soma”: cancer is somatic evolution. Take that spot on your arm as proof that some of your cells are different from others: some darker, some lighter. This difference is also heritable when one of your body’s cells divides into two daughter cells, encoded as a mutation in the cell’s DNA, perhaps caused by sun exposure. Much of somatic evolution is inconsequential. But some of the heritable variation within a human body may be of a kind that makes a more substantial change than color: it produces cells that divide faster, setting in motion a chain of events following from the inescapable logic of Darwin’s natural selection. The cells carrying such a mutation will become more popular in the body over time. This must happen as the criteria of natural selection have been met: heritable change providing an advantage over neighboring cells, in this case in the form of faster growth. 

But no single mutation can produce a cell that is cancerous, i.e., one able to mount a threat to the well-being of the body. Similar to the evolution of a species, change occurs upon change, allowing a population to adapt sequentially. As the clones of faster-dividing cells amass, there is power in numbers, and it becomes probable for another random mutation to occur among them which further increases the proliferation. These mutations and their selection allow the cancer to adapt to its environment: to ignore the signaling of its neighbors to stop dividing, to change its metabolism to a quick and dirty form, to secure access to oxygen. Sometimes the process starts with an infection by a virus; consider this just another form of heritable variation as the viral genome becomes a part of the DNA in the cell it attacks. Evolution is rarely fast, and this is also true for somatic evolution. The development of a cancer typically takes many years, as the mutated cells acquire more and more mutational changes, each increasing their ability to outcompete the body’s other cells. When the cancer finally evolves the ability to invade other tissues, it becomes nearly unstoppable. 
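
The "power in numbers" logic is easy to caricature in code. In the toy sketch below (all rates and population sizes are invented), cells reproduce in proportion to their division rate and occasionally pick up a growth-boosting mutation; the faster clones gradually take over.

```python
import random

# Toy somatic-evolution sketch. All rates and sizes are invented: cells
# reproduce in proportion to their division rate, and a daughter cell
# occasionally acquires a mutation that boosts that rate.
random.seed(42)
POP, GENERATIONS, MUT_RATE, MUT_BOOST = 1000, 60, 0.001, 0.25

cells = [1.0] * POP   # each entry is a cell's relative division rate
for gen in range(1, GENERATIONS + 1):
    cells = random.choices(cells, weights=cells, k=POP)   # selection
    cells = [c + MUT_BOOST if random.random() < MUT_RATE else c for c in cells]
    if gen % 20 == 0:
        mutants = sum(c > 1.0 for c in cells)
        print(f"generation {gen:3d}: {mutants:4d} of {POP} cells carry a growth-boosting mutation")
```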

Evolution is sometimes confused with progress. From the perspective of a cancer patient, somatic evolution certainly isn’t. Rather, as cancer develops, changes in the composition of the body’s gene pool occur: the very definition of evolution. The notion that cancer is evolution is not an analogy, but a matter-of-fact characterization of the process. It is humbling indeed that evolution is not only an ancient process that explains our existence on this planet, but is also constantly happening within our bodies, within our soma.

kate_jeffery's picture
Professor of Behavioural Neuroscience, Dept. of Experimental Psychology, University College London

Quickly cool a piece of super-heated, liquid glass and a strange thing happens. The glass becomes hard, but very brittle—so much so that it may abruptly and startlingly shatter without warning. This is because the bonds between the molecules are under strain, and the cool temperature and low velocity mean they cannot escape, as if caught in a negative equity trap with the neighbors from hell. And, like warring neighbors, eventually something gives way and the strain relieves itself catastrophically. Glass-makers avoid such catastrophes by “annealing” the glass, which means holding it for a long time at a high enough temperature that the molecules can move past each other but not too fast—in this way, the glass can find its way into a minimum-energy state where each molecule has had a chance to settle itself comfortably next to its neighbors with as little strain as possible, after which it can be completely cooled without problems.  

Systems in which elements interact with their neighbors and settle into stable states are called attractors, and the stable states they settle into are called attractor states, or local minima. The term “attractor” arises from the property that if the system finds itself near one of these states it will tend to be attracted towards it, like a marble rolling downhill into a hollow. If there are multiple hollows—multiple local minima—then the marble may settle into a nearby one that is not necessarily the lowest point it can reach. To find the “global minimum” the whole thing may need to be shaken up so that the marble can jiggle itself out of its suboptimal local minimum and try and find a better one, including eventually (hopefully) the global one. This jiggling, or injection of energy, is what annealing accomplishes, and the process of moving into progressively lower energy states is called gradient descent. 
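
Here is a minimal sketch of that jiggling-plus-gradient-descent idea, in the style of simulated annealing on an invented one-dimensional landscape with a shallow hollow and a deeper one; the cooling schedule and step size are arbitrary choices for illustration.

```python
import math
import random

# Minimal simulated annealing on an invented 1-D energy landscape with a
# shallow hollow near x = +2 and a deeper one (the global minimum) near
# x = -2. The cooling schedule and step size are arbitrary.
def energy(x):
    return 0.1 * x**4 - x**2 + 0.5 * x

random.seed(0)
x = 2.0               # start the "marble" in the shallow hollow
temperature = 2.0
while temperature > 1e-3:
    candidate = x + random.uniform(-0.5, 0.5)    # a small jiggle
    delta = energy(candidate) - energy(x)
    # Downhill moves are always accepted; uphill moves are accepted with a
    # probability that shrinks as the system cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.995                          # slow cooling
print(f"settled near x = {x:.2f}, energy = {energy(x):.2f}")
```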

Many natural systems show attractor-like dynamics. A murmuration of starlings, for example, produces aerial performances of such extraordinary, balletic synchrony that it seems like a vast, amorphous, purposeful organism, and yet the synchronized movements arise simply from the interactions between each bird and its nearest neighbors. Each flow of the flock in a given direction is a transient stable state, and periodic perturbations cause the flock to ruffle up and re-form in a new state, swooping and swirling across the sky. At a finer scale, brain scientists frequently recruit attractor dynamics to explain stable states in brain activity, such as the persistent firing of the neurons that signal which way you are facing, or where you are. Unlike glass particles or starlings, neurons do not physically move, but they express states of activity that influence the activity of their “neighbors” (neurons they are connected to) such that the activity of the whole network eventually stabilizes. Some theoreticians even think that memories might be attractor states—presenting a reminder of a memory is akin to placing the network near a local minimum, and the evolution of the system’s activity towards that minimum, via gradient descent, is analogous to retrieving the memory. 

Attractors also characterize aspects of human social organization. The problem of pairing everybody off so that the species can reproduce successfully is a problem of annealing. Each individual is trying to optimize constraints—they want the most attractive, productive partner but so do all their competitors, and so compromises need to be made—bonds are made and broken, made again and broken again, until each person (approximately speaking) has found a mate. Matching people to jobs is another annealing problem, and one that we haven’t solved yet—how to find a low-strain social organization in which each individual is matched to their ideal job? If this is done badly, and society settles into a strained local minimum in which some people are happy but large numbers of people are trapped in jobs they dislike with little chance of escape, then the only solution may be an annealing one—to inject energy into the system and shake it up so that it can find a better local minimum. This need to de-stabilize a system in order to obtain a more stable one might be why populations sometimes vote for seemingly destructive social change. The alternative is to maintain a strained status quo in which tensions fail to dissipate and society eventually ruptures, like shattered glass.  

Attractors are all around us, and we should pay more attention to them.

david_m_buss's picture
Professor of Psychology, University of Texas, Austin; Author, When Men Behave Badly

The concept of opportunity costs—the loss of potential gains from alternatives not chosen when a mutually exclusive choice must be made—is one of the most important concepts in the field of economics. But the concept is not well appreciated in the field of psychology.           

One reason for its absence is the sheer difficulty of calculating opportunity costs that occur in metrics other than money. Consider mate choice. Choosing one long-term mate means forgoing the benefits of choosing an available and interested alternative. But how are non-monetary benefits calculated psychologically?           

The complexities are multiple. The benefit-bestowing qualities of passed-over mates are many in number and disparate in nature. And there are inevitable tradeoffs among competing and incommensurate alternatives. Sometimes the choice is between a humorless mate with excellent future job prospects and a fun-loving mate destined for a low-status occupation; or between an attractive mate who carries the costs of incessant attention from others versus a mate who garners little external attention but with whom you have less sexual chemistry. Another intangible quality also factors into the equation—the degree to which competing alternatives appreciate your unique assets, which renders you more irreplaceably valuable to one than the other.           

Uncertainty of assessment surrounds each benefit-conferring quality. It is difficult to determine how emotionally stable someone is without sustained observation through times bad and good—events experienced with a chosen mate but unknown with a foregone alternative. Another complication centers on infidelity and breakups. There is no guarantee that you will receive the benefits of a chosen mate over the long run. Mates higher in desirability are more likely to defect. Whereas less desirable mates are sure bets, more desirable partners represent tempting gambles. How do these mating opportunity costs enter into the complex calculus of mating decisions?           

Despite the difficulties involved in computing non-monetary opportunity costs, probabilistic cues to their recurrent reality over evolutionary time must have forged a psychology designed to assess them, however approximate these computations may be. Although mating decisions provide clear illustrations, the psychology of opportunity costs is more pervasive. Humans surely have evolved a complex multifaceted psychology of opportunity costs, since every behavioral decision at every moment precludes potential benefits from alternative courses of action.

Many of these are trivial—sipping a cappuccino precludes downing a latte. But some are profound and produce post-decision regret, such as missed sexual opportunities or lamenting a true love that got away. The penalties of incorrectly calculating mating opportunity costs can last a lifetime.

katherine_d_kinzler's picture
Associate Professor of Psychology and Associate Professor of Human Development, Cornell University; Author, How You Say It

In 1964 Robert Fantz published a brief paper in Science that revolutionized the study of cognitive development. Building on the idea that infants’ gaze can tell you something about their processing of visual stimuli, he demonstrated that babies respond differently to familiarity and novelty. When infants see the same thing again and again, they look for less and less time: they habituate. When infants next see a new stimulus, they regain their visual interest and look longer. Habituation establishes the status quo—the reality you no longer notice or attend to. 

Subsequent generations of developmental psychologists have expanded on this methodological insight to probe the building blocks of human thinking. Capitalizing on the idea that babies get bored of the familiar and start to look to the novel, researchers can test how infants categorize many aspects of the world as same or different. From this, scientists have been able to investigate humans’ early perceptual and conceptual discriminations of the world. Such studies of early thinking can help reveal signatures of human thinking that can persist into adulthood. 

The basic idea of “habituation” is exceedingly simple at its outset. And humans are not the only species to habituate with familiarity; around the same time as Fantz’s work, related papers studying habituation in other species of infant animals were published. An associated literature on the neural mechanisms of learning and memory similarly finds that neural responses decrease after repeated exposures to the same stimulus. The punch line is clear: Organisms, and their neural responses, get bored.  

This intuitive boredom is etched in our brains and visible in babies’ first visual responses. But the concept of habituation can also scale up to explain a range of people’s behaviors, their pleasures, and their failures. In many domains of life, adults habituate too. 

If you think about eating an entire chocolate cake, the first slice is almost certainly going to be more pleasurable than the last. It is not hard to imagine being satiated. Indeed, the economic law of diminishing marginal utility describes a related idea. The first slice has a high utility or value to the consumer. The last one does not (and may even have negative utility if it makes you sick). Adults’ responses to pleasing stimuli habituate.  

People are often not aware at the outset about how much they habituate. A seminal observation of lottery winners by psychologists Philip Brickman, Dan Coates, and Ronnie Janoff-Bulman found that after time, the happiness of lottery winners returned to baseline. The thrill of winning—and the pleasure associated with new possessions—wore off. Even among non-lottery winners, people overestimate the positive impact that acquiring new possessions will have on their lives. Instead, people habituate to a new status quo of having more things, and those new things become familiar and no longer bring them joy. 

Behavioral economists such as Shane Frederick and George Loewenstein have shown that this “hedonic adaptation,” or reduction in the intensity of an emotional response over time, can occur for both positive and negative life events. In addition to shifting their baseline of what is perceived as normal, people start to respond with less intensity to circumstances to which they are habituated. Over time, highs become less exhilarating, but lows also become less distressing.   

Habituation may serve a protective function in helping people cope with difficult life circumstances, but it can also carry a moral cost. People get used to many circumstances, including those that (without prior experience) would otherwise be considered morally repugnant. Think of the frog in boiling water—it is only because the temperature is raised little by little that he does not jump out. In the (in)famous Milgram studies, participants are asked to shock a confederate by increasing the voltage in small increments. They are not asked to give a potentially lethal shock right at the outset. If you have already given many smaller shocks, the addition of just one more shock may not overwhelm the moral compass. 

Future research exploring the human propensity toward habituation may help explain the situations that lead to moral failures, to Hannah Arendt’s “banality of evil.” Literature on workplace misconduct finds that large transgressions in business contexts often start with small wrongdoings, subtle moral breaches that grow over time. New studies are testing the ways in which our minds and brains habituate to dishonesty.

From the looking of babies to the actions of adults, habituation can help explain how people navigate their worlds, interpret familiar and new events, and make both beneficial and immoral choices. Many human tendencies—both good and bad—are composed of smaller components of familiarity, slippery slopes that people become habituated to.

matthew_putman's picture
Applied Physicist; Chairman of the Board, Pioneer Works; CEO, Nanotronics

The world is governed by scientific principles that are fairly well taught and understood. A concept that resonates throughout physical science is rheology, and yet there are so few rheologists in academia that an international symposium fills only a small conference room. Rheo, coming from the Greek to flow, is primarily the study of how non-Newtonian matter flows. In practice, most rheologists look at how nanoparticles behave in complex compounds, often filled with graphene, silica, or carbon nanotubes. Rheology was key to the creation of tires and other polymer systems, but as new technologies incorporate flexible and stretchable devices for electronics, medical implants, regenerative and haptic garments for virtual reality experiences, rheologists will need to be as common as chemists.
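One way to see what rheologists actually compute: in the standard power-law (Ostwald-de Waele) description of non-Newtonian flow, apparent viscosity depends on how fast the material is sheared. The sketch below is only an illustration; the consistency K and flow index n are made-up placeholder values, not measurements of any real compound.

# Minimal sketch of the power-law (Ostwald-de Waele) model of non-Newtonian
# flow: apparent viscosity = K * shear_rate ** (n - 1).
# K (consistency) and n (flow index) below are placeholder values.

def apparent_viscosity(shear_rate, K, n):
    return K * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0, 100.0):
    newtonian = apparent_viscosity(rate, K=5.0, n=1.0)        # n = 1: viscosity is constant
    shear_thinning = apparent_viscosity(rate, K=5.0, n=0.4)   # n < 1: thins as it is sheared
    print(f"shear rate {rate:6.1f}: Newtonian {newtonian:.2f}, shear-thinning {shear_thinning:.2f}")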

The primary reason for the relative obscurity of such an important topic is the hidden complexity involved. The number of connections in the human brain has been discussed by neuroscientists and often invites a sense of awe, along with an urge to reduce and unify. A complex nano-filled elastomer, despite some well-known and simple equations, is likewise nearly impossible to model. Before it is possible to scale the dreams of today, experiments and theoretical models will need to converge in ways that they do not now. Through rheology, van der Waals forces, semiconductivity, superconductivity, quantum tunneling, and other such properties of composite materials can be harnessed and exploited rather than left to hinder us. Rheology is the old, newly relevant transdisciplinary science.

siobhan_roberts's picture
Director’s Visitor, Institute for Advanced Study, Princeton; Author, Genius at Play and King of Infinite Space

Merriam-Webster’s 2016 word of the year is surreal: “It’s a relatively new word in English, and derives from surrealism, the artistic movement of the early 1900s that attempted to depict the unconscious mind in dreamlike ways as ‘above’ or ‘beyond’ reality. Surreal itself dates to the 1930s, and was first defined in a Merriam-Webster dictionary in 1967. Surreal is often looked up spontaneously in moments of both tragedy and surprise…”

One of the lesser-known applications of the word belongs to the Princeton mathematician John Horton Conway who discovered surreal numbers circa 1969. To this day, he wishes more people knew about the surreals, in hopes that the right person might put them to greater use.

Conway happened upon surreal numbers—an elegant generalization and vast expansion of the real numbers—while analysing games, primarily the game Go, a popular pastime at math departments. The numbers fell out of the games, so to speak, as a means of classifying the moves made by each player and determining who seemed to be winning and by how much. As Conway later described it, the surreals are “best thought of as the most natural collection of numbers that includes both the usual real numbers (which I shall suppose you know) and the infinite ordinal numbers discovered by Georg Cantor.” Originally, Conway called his new number scheme simply capital “N” Numbers, since he felt that they were so natural, and such a natural replacement for all previously known numbers.

For instance, there are some familiar and finite surreal numbers, such as two, minus two, one half, minus one half, etcetera. Cantor’s transfinite “omega” is a surreal number, too, as is the square root of omega, omega squared, omega squared plus one, and so on. The surreals go above and beyond and below and within the reals, slicing off ever-larger infinities and ever-smaller infinitesimals.
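For readers who want to see the machinery, here is a minimal sketch of Conway's construction in code: a surreal number is a pair of sets of previously built numbers, and x is less than or equal to y exactly when no left member of x is greater than or equal to y and no right member of y is less than or equal to x. It is a toy for illustration, not a serious implementation.

# Minimal sketch of Conway's surreal numbers: each number is a pair of sets
# (left, right) of earlier numbers, compared by Conway's recursive rule.

class Surreal:
    def __init__(self, left=(), right=()):
        self.left = tuple(left)    # numbers "below" this one
        self.right = tuple(right)  # numbers "above" this one

def leq(x, y):
    """x <= y iff no member of x.left has y <= it,
    and no member of y.right is <= x."""
    return (not any(leq(y, xl) for xl in x.left) and
            not any(leq(yr, x) for yr in y.right))

# The first few days of the construction:
zero = Surreal()                           # { | }    is 0
one  = Surreal(left=[zero])                # { 0 | }  is 1
neg  = Surreal(right=[zero])               # { | 0 }  is -1
half = Surreal(left=[zero], right=[one])   # { 0 | 1 } is 1/2

assert leq(zero, one) and not leq(one, zero)   # 0 < 1
assert leq(neg, zero)                          # -1 <= 0
assert leq(zero, half) and leq(half, one)      # 0 <= 1/2 <= 1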

They are not called surreal for nothing.

Over the years, the scheme won distinguished converts. Most notably, in 1973 the Stanford computer scientist Donald Knuth spent a week sequestered in an Oslo hotel room in order to write a novella that introduced the concept to the wider world—Surreal Numbers, a love story, in the form of a dialogue between Alice and Bill (now in its twenty-first printing; it was Knuth, in fact, who gave these numbers their name, which Conway adopted, publishing his own expository account, On Numbers and Games, in 1976). Knuth views the surreal numbers as simpler than the reals. He considers the scenario roughly analogous to Euclidean and non-Euclidean geometry; he wonders what the repercussions would be if the surreals had come into existence first.

The Princeton mathematician and physicist Martin Kruskal spent about thirty years investigating the promising utility of surreal numbers. Specifically, he thought that surreals might help in quantum field theory, such as when asymptotic functions veer off the graph. As Kruskal once said: “The usual numbers are very familiar, but at root they have a very complicated structure. Surreals are in every logical, mathematical and aesthetic sense better.”

In 1996, Jacob Lurie won the top prize in the Westinghouse Science Talent Search for his project on the computability of surreal numbers. The New York Times reported the news and ran a Q&A with Lurie. “Q: How long have you been working on this? A: It’s not clear when I started or when I finished, but at least for now, I’m finished. All the questions that have yet to be answered are too hard.”

And that’s pretty much where things stand today. Conway still gets interrogated about the surreals on a fairly regular basis—most recently by some post-docs at a holiday party. He repeated for them what he’s said for ages now: Of all his work, he is proudest of the surreals, but they are also a great disappointment since they remain so isolated from other areas of mathematics and science. Per Merriam-Webster, Conway’s response is a combination of awe and dismay. The seemingly infinite potential for the surreals continues to beckon, but for now remains just beyond our grasp.

melanie_swan's picture
Philosophy and Economic Theory, the New School for Social Research

Included Middle is an idea proposed by Stéphane Lupasco (in The Principle of Antagonism and the Logic of Energy in 1951), further developed by Joseph E. Brenner and Basarab Nicolescu, and also supported by Werner Heisenberg. The notion pertains to physics and quantum mechanics, and may have wider application in other domains such as information theory and computing, epistemology, and theories of consciousness. The Included Middle is a theory proposing that logic has a three-part structure. The three parts are the positions of asserting something, the negation of this assertion, and a third position that is neither or both. Lupasco labeled these states A, not-A, and T. The Included Middle stands in opposition to classical logic stemming from Aristotle. In classical logic, the Principle of Non-contradiction specifically proposes an Excluded Middle, that no middle position exists, tertium non datur (there is no third option). In traditional logic, for any proposition, either that proposition is true, or its negation is true (there is either A or not-A). While this could be true for circumscribed domains that contain only A and not-A, there may also be a larger position not captured by these two claims, and that is articulated by the Included Middle.

Heisenberg noticed that there are cases where the straightforward classical logic of A and not-A does not hold. He pointed out how the traditional law of Excluded Middle has to be modified in Quantum Mechanics. In general cases at the macro scale, the law of Excluded Middle would seem to hold. Either there is a table here, or there is not a table here. There is no third position. But in the Quantum Mechanical realm, there are the ideas of superposition and possibility, where both states could be true. Consider Schrödinger’s cat being possibly either dead or alive, until an observer checks and possibility collapses into a reality state. Thus a term of logic is needed to describe this third possible situation, hence the Included Middle. It is not “middle” in the sense of being between A and not-A, that there is a partial table here, but rather in the sense that there is a third position, another state of reality, that contains both A and not-A. This can be conceptualized by appealing to levels of reality. A and not-A exist at one level of reality, and the third position at another. At the level of A and not-A, there are only the two contradictory possibilities. At a higher level of reality, however, there is a larger domain, where both elements could be possible; both elements are members of a larger set of possibilities.
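A minimal numerical sketch of the quantum case, a single two-state system in an equal superposition, shows a definite state that is neither plainly "A" nor plainly "not-A" until it is measured. This illustrates ordinary superposition only; it is not a formalization of Lupasco's logic.

# Minimal sketch of a two-state superposition: a qubit in (|0> + |1>)/sqrt(2).
# Illustrates standard quantum superposition, not Lupasco's formalism itself.
import math

amp_0 = 1 / math.sqrt(2)   # amplitude for state |0> ("A")
amp_1 = 1 / math.sqrt(2)   # amplitude for state |1> ("not-A")

p_0 = amp_0 ** 2           # probability a measurement yields |0>
p_1 = amp_1 ** 2           # probability a measurement yields |1>

print(f"P(A) = {p_0:.2f}, P(not-A) = {p_1:.2f}")   # 0.50 each
# Before measurement the system is in one definite state that is neither
# |0> nor |1>; measurement collapses it to one of the two.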

Included Middle is a concept already deployed in a variety of scientific domains and could benefit from a wider application in being promoted to “meme” status. This is because beyond its uses in science, Included Middle is a model for thinking. The Included Middle is a conceptual model that overcomes dualism and opens a frame that is complex and multi-dimensional, not merely one of binary elements and simple linear causality. We have now come to comprehend and address our world as one that is complex as opposed to basic, and formal tools that support this investigation are crucial. The Included Middle helps to expose how our thinking process unfolds. When attempting to grasp anything new, a basic “A, not-A” logic could be the first step in understanding the situation. However, the idea is then to progress to the next step which is another level of thinking that holds both A and not-A. The Included Middle is a more robust model that has properties of both determinacy and indeterminacy, the universal and the particular, the part and the whole, and actuality and possibility. The Included Middle is a position of greater complexity and possibility for addressing any situation. Conceiving of a third space that holds two apparent contradictions of a problem is what the Included Middle might bring to contemporary challenges in consciousness, artificial intelligence, disease pathologies, and unified theories in physics and cosmology.

simone_schnall's picture
Director, Cambridge Embodied Cognition and Emotion Laboratory; Reader in Experimental Social Psychology and Fellow of Jesus College, University of Cambridge

Progressive, dynamic and forward-thinking—these are personal qualities that are highly sought after in practically all social circles and cultures. Do you want to be seen in such positive terms whenever people come across your picture? An intriguing line of psychological research suggests how to accomplish just that: When caught on film you need to pay attention to the direction in which you are facing. People who look toward the right are perceived as more powerful and agentic than those who point to the left. In other words, how a person is represented in space shapes perceivers’ automatic impressions, as if we imagine the depicted person as literally moving from left to right, along an imaginary path that takes them from the present to future accomplishments.

This principle of “spatial agency bias” includes how simple actions are interpreted. For example, a soccer goal is considered to be more elegant, and an act of aggression to be more forceful when the actor moves from left to right, compared to the mirrored sequence occurring in the opposite direction. Similarly, in advertising cars are usually shown as facing to the right, and when they are, participants in research studies judge them to be faster and therefore more desirable.

Spatial position can also be indicative of social status. Historical analyses of hundreds of paintings indicate that when two people appear in the same picture the more dominant, powerful person is usually facing to the right. For example, relative to men, women are more often displayed showing the left cheek, consistent with gender roles that consider them as less agentic. In other words, traditionally weak and submissive characters are assigned to their respective place by where they are situated in space. From the 15th century to the 20th century, however, this gender bias in paintings has become less pronounced, therefore paralleling increasingly modern views of women’s role in society.

Where does the spatial agency bias come from? Is there some innate reason for preferring objects and persons to the right, perhaps as a consequence of 90% of people being right-handed? Or is there a learning component involved? Cross-cultural studies indicate that there is variability indeed. For example, for Arab and Hebrew speakers the pattern is completely reversed: People and objects facing to the left are judged to be more dynamic and agentic. This suggests a provocative possibility, namely that the spatial agency bias develops as a function of writing direction: As we move across the page we progress from what has happened to what is not yet, from what is established to what could still be. Years of experience with printed matter determine the ways in which we expect actions to unfold. Thought therefore follows language, in a rather literal sense.

So, next time you take that selfie make sure it reflects you from the right perspective! 

andr_s_roemer's picture
Co-creator, Ideas City; Author, Move UP: Why Some Cultures Advance While Others Don't

A few months ago I sequenced my genes. It is a 700 Megabyte text file that looks something like this:

AGCCCCTCAGGAGTCCGGCCACATGGAAACTCCTCATTCCGGAGGTCAGTCAGATTTACCCTTGAGTTCAAACTTCAGGGTCCAGAGGCTGATAATCTACTTACCCAAACATAGGGCTCACCTTGGCGTCGCGTCCGGCGGCAAACTAAGAACACGTCGTCTAAATGACTTCTTAAAGTAGAATAGCGTGTTCTCTCCTTCCAGCCTCCGAAAAACTCGGACCAAAGATCAGGCTTGTCCGTTCTTCGCTAGTGATGAGACTGCGCCTCTGTTCGTACAACCAATTTAGG

Each individual A, C, G, and T is an organic molecule; together they form the building blocks of what makes me "me": my DNA. It is approximately 3.3 billion pairs of nucleotides organized in around 24 thousand genes.

The information of every living being is codified in this manner. Our shape, our capacities, abilities, needs and even predisposition to disease are determined largely by our genes. 

But this information is only a small percentage (less than 2%) of what can be found in the DNA that each of the cells of my body carries. That is the percentage of the DNA that encodes proteins, the molecules that carry out all the functions that are necessary for life. The other 98% is known as non-coding DNA, and as of 2016 we believe only an additional 10% to 15% of it has a biological function, showing complex patterns of expression and regulation, while the rest is still largely referred to as "junk DNA". This does not necessarily mean that the majority of our DNA is junk; it only means that we still do not know why it is there or what it does. The human genome still has many tricks up its sleeve.

The story of how our DNA is expressed and regulated is the story of the transcriptome, and despite all our technological advances, its study is still in its early stages but already showing enormous potential to better diagnose, treat and cure disease.

In order for our DNA to be expressed and produce a specific protein, the code must be "copied" (transcribed) into RNA. These gene readouts are called transcripts, and the transcriptome is the collection of all the RNA molecules, or transcripts, present in a cell. 
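In code, that copying step is essentially a character-for-character substitution. The sketch below transcribes the start of the sequence shown earlier, assuming for simplicity that the printed string is the coding (sense) strand, so the message is the same sequence with T replaced by U; real transcription is enzymatic and reads the complementary template strand.

# Minimal sketch of transcription as text substitution, assuming the string
# below is the coding (sense) strand: the mRNA has the same sequence with
# every T replaced by U.
dna = "AGCCCCTCAGGAGTCCGGCCACATGGAAACTCCTCATTCCGG"  # start of the sequence above

rna = dna.replace("T", "U")
print(rna)   # AGCCCCUCAGGAGUCCGGCCACAUGGAAACUCCUCAUUCCGG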

In contrast with the genome, which is characterized by its stability, the transcriptome is constantly changing and can reflect in real time, at the molecular level, the physiology of a person depending on many factors, including stage of development and environmental conditions.

The transcriptome can tell us when and where each gene is turned on or off in the cells of tissues and organs of an individual. It functions like a dimmer switch, setting whether a gene is 10% active or 70% active, and therefore enabling a much more intricate fine-tuning of gene expression. By comparing the transcriptome of different types of cells we can understand what makes a specific cell from a specific organ unique, what that cell looks like when working normally and healthily, and how its gene activity may reflect or contribute to certain diseases.

The transcriptome may hold the key to the breakthrough we have been waiting for over the last 30 years in gene therapy. There are today two complementary yet different approaches: the replacement or editing of genes within the genome (such as the widely known CRISPR-Cas9 technique) and the inhibition or enhancement of gene expression.

On the latter approach, RNA-based cancer vaccines that activate an individual’s innate immune system are already in clinical trials, with promising results in diseases such as lung or prostate cancer. Vaccination with RNA molecules is a promising and safe approach that lets the patient’s body produce its own vaccine. By introducing a specific synthetic RNA, protein synthesis can be controlled without intervening in the human genome, letting the cell's own protein-building machinery do the work without altering the physiological state of the cell.

This concept will unlock a path to a prosperous future in terms of aging prevention, brain functioning, and stem cell health, as well as the eradication of cancer, hepatitis B, HIV, or even high cholesterol. The transcriptome has opened our eyes to the staggering complexity of the cell, and when fully fathomed, it will finally enable us to truly start conquering our genetic destiny.

david_c_queller's picture
Evolutionary Biologist, Washington University in St. Louis

You won’t find the term “isolation mismatch” in any scientific dictionary. Isolation mismatches occur when two complex adaptive systems cannot be merged after evolving in isolation from each other. It is a generalization of a concept that you might find in a scientific dictionary, “Dobzhansky-Muller incompatibilities,” which cause isolated biological populations to become separate species. When ported into the realm of culture, isolation mismatches might explain how cultures (not species) can diverge and become incompatible. It might also account for our disconcerting human tendencies towards xenophobia or fear of other human cultures. But let’s consider biology first, then the cultural analogy.

Formation of new biological species usually involves isolation and independent evolution of the two populations. As one toad population evolves through natural selection, each of its novel genes is tried out in many toads and will necessarily be selected to work together with the other genes in its population. Any genes causing within-population mismatches are weeded out. But, novel genes in one toad population are never tested with novel genes in another isolated toad population. Between-population mismatches will not be weeded out and will gradually accumulate, becoming apparent only when the populations later come into contact. 

This is the dominant model (though not the only one) of how new species form. Interestingly, where much of evolution consists of fine-tuned adaptation, the formation of species in this manner is not directly adaptive. It is an accident of isolation, though such accidents will always happen given enough time. There is, however, an add-on mechanism called reinforcement that is not accidental. When two partially incompatible populations come together, selection may directly favor reduced interbreeding. Individuals will have fewer mismatches and more viable offspring if they preferentially mate with their own type. For example, females of two closely related species of spadefoot toads have evolved to prefer the male call of their own type only in areas where the two species overlap.

Similar isolation mismatches may occur in other complex adaptive systems. Let two initially compatible systems evolve separately for long enough and they will accrue mismatches. This includes cultural systems, where what changes is the cultural equivalent of genes, which Richard Dawkins calls memes. For example, languages split, evolve independently, and become mutually unintelligible. Mixing diverged norms of social behavior can also cause mismatches. Behavior towards another man’s wife that is acceptable in one’s own culture might prove to be fatal in another.

We are all familiar with isolation mismatches in technology. We see them when we travel between countries that drive on opposite sides of the road or have railroads of different gauges or have electrical systems with different voltages and outlets. There is no logical reason for the mismatches; they just evolved incompatibly in different locations. We are also familiar with isolation mismatches resulting from computer system upgrades, causing some programs that worked fine with the old system to crash. 

Of course, software engineers can usually catch and correct such mismatches before a program’s release. But they can be defeated if they don’t know or don’t care about a dependent program, or if they have an incomplete understanding of the possible interactions in complex code. I suggest that throughout most of human cultural history, these two conditions often applied: cultures changed without concern for compatibility with other cultures, and people did not understand how new cultural traits would interact.

Consider the adoption of maize by Europeans from the Amerindians. The crop spread widely because of its high yield but it also caused pellagra, eventually traced to a deficiency of the vitamin niacin. The Amerindians suffered no such problem because they dehulled their corn by nixtamalization, a process involving soaking and cooking in alkaline solution. Nixtamalization also happens to prevent pellagra, possibly by releasing the niacin in maize from indigestible complexes. An 18th-century Italian maize farmer who dehulled by his own culture’s supposedly superior mechanical methods got a very harmful isolation mismatch. If he had also adopted nixtamalization, he would have been fine, but he had no way of knowing. Only being more conservative about adopting foreign traits would have saved him.

So imagine a history of thousands of years of semi-isolated human bands, each evolving its own numerous cultural adaptations. Individuals or groups that allow too much cultural exchange with different cultures would experience cultural mismatches and have decreased fitness. So a process parallel to reinforcement might be expected to occur. Selection could favor individuals or groups that avoid, shun, and perhaps even hate other cultures, much as the female spadefoot toads evolved to shun males of the other species.  Xenophobic individuals or groups would be successful and propagate any genes or memes underlying xenophobia. For the same reason, selection could favor the adoption of cultural or ethnic markers that make it clear who belongs to your group, as suggested by anthropologist Richard McElreath and colleagues.

Many questions remain to be addressed. Is xenophobia selected by genic or cultural selection? Is it individuals or groups that are selected? How much isolation is required? Did prehistoric human groups frequently encounter other groups with sufficiently different cultures? When might selection favor the opposite of xenophobia, given that xenophobia can also lead to rejecting traits that would have been advantageous?

It should be stressed that no explanation of xenophobia, including this one, provides any moral justification for it. Indeed understanding the roots of xenophobia might provide ways to mitigate it. The mismatch explanation is a relatively optimistic one compared to the hypothesis that xenophobia is a genetic adaptation based on competition between groups for resources. If correct, it tells us that the true objects of our evolutionary ire are certain cultural traits, not people. Moreover, even those traits work fine in their own context and, like software engineers, we may be able to figure out how to get them to work together. 

peter_norvig's picture
Director of Research, Google Inc.; Fellow of the AAAI and the ACM; co-author, Artificial Intelligence: A Modern Approach

John McCarthy, the late co-founder of the field of artificial intelligence, wrote, "He who refuses to do arithmetic is doomed to talk nonsense." It seemed incongruous that a professor who worked with esoteric high-level math would be touting simple arithmetic, but he was right; in fact, in many cases all we need to avoid nonsense is the simplest form of arithmetic: counting.

In 2008, the US government approved a $700 billion bank bailout package. A search for the phrase "$700 million bailout" reveals hundreds of writers who were eager to debate whether this was prudent or rash, but who couldn't tell the difference between $2 per citizen and $2,000 per citizen. Knowing the difference is crucial to understanding the efficacy of the deal, and is just a matter of counting.
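The counting itself takes three lines; the population figure below is a round approximation.

# The counting that separates "$700 million" from "$700 billion":
# the US population here is rounded to 300 million.
population = 300_000_000

per_citizen_million = 700_000_000 / population        # about $2.33 per citizen
per_citizen_billion = 700_000_000_000 / population    # about $2,333 per citizen

print(f"${per_citizen_million:,.2f} versus ${per_citizen_billion:,.2f}")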

Consider the case of a patient who undergoes a routine medical screening and tests positive for a disease that affects 1% of the population. The screening test is known to be 90% accurate. What is the chance that the patient actually has the disease? When a group of trained physicians were asked, their average answer was 75%. They reasoned that it should be somewhat less than the 90% accuracy of the test, because the disease is rare. But if they had bothered to count, they would have reasoned like this: on average, out of every 100 people, 1 will have the disease, and 99 won't. The 1 will have a positive test result, and so will about 10 of the 99 (because the test is 10% inaccurate). So we have a pool of 11 people who test positive, of which only 1 actually has the disease, so the chance is about 1/11 or 9%. That means that highly trained physicians are reasoning very poorly, scaring patients with an estimate that is much too high, all because they didn't bother to count. The physicians might say they are trained to do medicine, not probability theory, but as Pierre Laplace said in 1812, "Probability theory is nothing but common sense reduced to calculation," and the basis of probability theory is just counting: "The probability of an event is the ratio of the number of cases favorable to it, to the number of all cases possible."
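The same counting, written out in code with the figures given above (1% prevalence, a test that is wrong 10% of the time):

# Counting out the screening example: per 100 people, 1 has the disease
# and tests positive; about 10 of the other 99 also test positive because
# the test is 10% inaccurate.
sick_positives = 1
false_positives = 99 * 0.10          # roughly 10 healthy people who test positive

p = sick_positives / (sick_positives + false_positives)
print(f"{p:.0%}")   # about 9%, nowhere near the physicians' 75%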

In their report on climate change, the IPCC stated the consensus view that "most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations." Yet some criticized this consensus, saying scientists are still uncertain. Who's right? Naomi Oreskes took it upon herself to resolve the question, by counting. She searched a scientific database using the keywords "climate change" and scrutinized the 928 abstracts that matched. She found that 25% of the articles did not address the consensus (because, for example, they were about regional climate rather than global), but that none of the 928 rejected the consensus. This is a powerful form of counting. What's more, I don't need to take Prof. Oreskes's word for it: I did my own experiment, sampling 25 abstracts (I didn't have the patience to do 928) and I too found that none of them rejected the consensus.

This is a powerful tool. When faced with a complex issue, you can resolve the question, not by examining your political predispositions and arguing for whatever agrees with them, but by examining the evidence, counting the number of cases that are favorable, and comparing them to those that aren't. You don't need to be a mathematical wizard; just apply what you learned as a toddler: count!

john_naughton's picture
Senior Research Fellow, Centre for Research in the Arts, Social Sciences and Humanities, University of Cambridge; Director, Wolfson College Press Fellowship Programme; Columnist, the Observer; Author, From Gutenberg to Zuckerberg

W. Ross Ashby was a British cybernetician working in the 1950s who became interested in the phenomenon of homeostasis—the way in which complex systems operating in changing environments succeed in maintaining critical variables (for example, internal body temperature in biological systems) within tightly-defined limits. Ashby came up with the concept of variety as a measurement of the number of possible states of a system. His "Law" of Requisite Variety stated that for a system to be stable, the number of states that its control mechanism is capable of attaining (its variety) must be greater than or equal to the number of states in the system being controlled.

Ashby’s Law was framed in the context of his interest in self-regulating biological systems, but it was rapidly seen as having a relevance for other kinds of systems. The British cybernetician and operational research practitioner Stafford Beer, for example, used it as the basis for his concept of a viable system in organizational design. In colloquial terms Ashby’s Law has come to be understood as a simple proposition: if a system is to be able to deal successfully with the diversity of challenges that its environment produces, then it needs to have a repertoire of responses which is (at least) as nuanced as the problems thrown up by the environment. So a viable system is one that can handle the variability of its environment. Or, as Ashby put it, only variety can absorb variety.
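A toy simulation makes the law concrete: a regulator with fewer kinds of response than there are kinds of disturbance necessarily lets some disturbances through unabsorbed. The disturbances, responses, and counts below are invented purely for illustration.

# Toy illustration of Ashby's Law: a regulator can hold the outcome steady
# only if it has at least one distinct response per kind of disturbance.
import random

disturbances = ["heat", "cold", "wind", "rain"]      # 4 states in the environment
responses    = {"heat": "cool", "cold": "warm"}      # a regulator with only 2 responses

unabsorbed = 0
for _ in range(1000):
    d = random.choice(disturbances)
    if d not in responses:       # no matching response: the disturbance passes through
        unabsorbed += 1

print(f"{unabsorbed} of 1000 disturbances were not absorbed")
# With variety 2 against variety 4, roughly half the disturbances get through;
# only a repertoire of at least 4 responses could absorb them all.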

Until comparatively recently, organizations coped with environmental challenges mainly by measures to reduce the variety with which they had to cope. Mass production, for example, reduced the variety of its environment by limiting the range of choice available to consumers: product standardization was essentially an extrapolation of Henry Ford’s slogan that customers could have the Model T in any color so long as it was black. But the rise of the Internet has made variety-reduction increasingly difficult. By any metric that one chooses—numbers of users and publishers, density of interactions between agents, pace of change, to name just three—our contemporary information ecosystem is orders of magnitude more complex than it was forty years ago. And its variety, in Ashby’s terms, has increased in proportion to its complexity. Given that variety reduction seems unfeasible in this new situation, the implication is that many of our organizations and social systems—ones that evolved to cope with much lower levels of variety—are no longer viable. For them, the path back to viability requires that they find ways of increasing their variety. And the big question is whether—and how—they can do it.

antony_garrett_lisi's picture
Theoretical physicist

Thought, passion, love... this internal world we experience, including all the meaning and purpose in our lives, arises naturally from the interactions of elementary particles. This sounds absurd, but it is so. Scientists have attained a full understanding of all fundamental interactions that can happen at scales ranging from subatomic particles to the size of our solar system. There is magic in our world, but it is not from external forces that act on us or through us. Our fates are not guided by mystical energies or the motions of the planets against the stars. We know better now. We know that the magic of life comes from emergence.

It is the unimaginably large numbers of interactions that make this magic possible. To describe romantic love as the timely mutual squirt of oxytocin trivializes the concerted dance of more molecules than there are stars in the observable universe. The numbers are beyond astronomical. There are approximately 100 trillion atoms in each human cell, and about 100 trillion cells in each human. And the number of possible interactions rises exponentially with the number of atoms. It is the emergent qualities of this vast cosmos of interacting entities that make us us. In principle, it would be possible to use a sufficiently powerful computer to simulate the interactions of this myriad of atoms and reproduce all our perceptions, experiences, and emotions. But to simulate something does not mean you understand the thing—it only means you understand a thing’s parts and their interactions well enough to simulate it. This is the triumph and tragedy of our most ancient and powerful method of science: analysis—understanding a thing as the sum of its parts and their actions. We have learned and benefitted from this method, but we have also learned its limits. When the number of parts becomes huge, such as for atoms making up a human, analysis is practically useless for understanding the system—even though the system does emerge from its parts and their interactions. We can more effectively understand an entity using principles deduced from experiments at or near its own level of distance scale—its own stratum.

The emergent strata of the world are roughly recapitulated by the hierarchy of our major scientific subjects. Atomic physics emerges from particle physics and quantum field theory, chemistry emerges from atomic physics, biochemistry from chemistry, biology from biochemistry, neuroscience from biology, cognitive science from neuroscience, psychology from cognitive science, sociology from psychology, economics from sociology, and so on. This hierarchical sequence of strata, from low to high, is not exact or linear—other fields, such as computer science and environmental science, branch in and out depending on their relevance, and mathematics and the constraints of physics apply throughout. But the general pattern of emergence in a sequence is clear: at each higher level, new behavior and properties appear which are not obvious from the interactions of the constituent entities in the level below, but do arise from them. The chemical properties of collections of molecules, such as acidity, can be described and modeled, inefficiently, using particle physics (two levels below), but it is much more practical to describe chemistry, including acidity, using principles derived within its own contextual level, and perhaps one level down, with principles of atomic physics. One would almost never think about acidity in terms of particle physics, because it is too far removed. And emergence is not just the converse of reduction. With each climb up the ladder of emergence to a higher level in the hierarchy, it is the cumulative side-effects of interactions of large numbers of constituents that result in qualitatively new properties that are best understood within the context of the new level.

Every step up the ladder to a new stratum is usually associated with an increase in complexity. And the complexities compound. Thermodynamically, this compounding of complexity—and activity at a higher level—requires a readily available source of energy to drive it, and a place to dump the resulting heat. If the energy source disappears, or if the heat cannot be expelled, complexity necessarily decays into entropy. Within a viable environment, at every high level of emergence, complexity and behavior are shaped by evolution through natural selection. For example, human goals, meaning, and purposes exist as emergent aspects in psychology favored by natural selection. The ladder of emergence precludes the necessity for any supernatural influence in our world; natural emergence is all it takes to create all the magic of life from building blocks of simple inanimate matter. Once we think we understand things at a high level in the hierarchy of emergence, we often ignore the ladder we used to get there from much lower levels. But we should never forget the ladder is there—that we and everything in our inner and outer world are emergent structures arising in many strata from a comprehensible scientific foundation. And we also should not forget an important question this raises: is there an ultimate fundamental level of this hierarchy, and are we close to knowing it, or is it emergence all the way down?

tom_griffiths's picture
Henry R. Luce Professor of Information Technology, Consciousness and Culture, Director of the Computational Cognitive Science Lab, Princeton University; Co-author (with Brian Christian), Algorithms to Live By

How are we supposed to act? To reason, to make decisions, to learn? The classic answer to this question, hammered out over hundreds of years and burnished to a fine luster in the middle of the last century, is simple: update your beliefs in accordance with probability theory and choose the action that maximizes your expected utility. There’s only one problem with this answer: it doesn’t work.

There are two ways in which it doesn’t work. First, it doesn’t describe how people actually act. People systematically deviate from the prescriptions of probability and expected utility. Those deviations are often taken as evidence of irrationality—of our human foibles getting in the way of our aspirations to intelligent action. However, human beings remain the best examples we have of systems that are capable of anything like intelligent action in many domains. Another interpretation of these deviations is thus that we are comparing people to the wrong standard.

The second way in which the classic notion of rationality falls short is that it is unattainable for real agents. Updating beliefs in accordance with probability theory and choosing the action that maximizes expected utility can quickly turn into intractable computational problems. If you want to design an agent that is actually capable of intelligent action in the real world, you need to take into account not just the quality of the chosen action but also how long it took to choose that action. Deciding that you should pull a pedestrian out of the path of an oncoming car isn’t very useful if it takes more than a few seconds to make the decision.

What we need is a better standard of rational action for real agents. Fortunately, artificial intelligence researchers have developed one: bounded optimality. The bounded-optimal agent navigates the tradeoff between efficiency and error, optimizing not the action that is taken but the algorithm that is used to choose that action. Taking into account the computational resources available to the agent and the cost of using those resources to think rather than act, bounded optimality is about thinking just the right amount before acting.
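A toy decision problem shows the tradeoff. When deliberation itself carries a cost, the best procedure is not the one that finds the best answer but the one that maximizes answer quality minus thinking cost; the two algorithms, their utilities, times, and the cost rate in this sketch are all hypothetical.

# Toy sketch of bounded optimality: score each decision procedure by the
# expected utility of its answer minus the cost of the time it takes to think.
TIME_COST = 2.0   # utility lost per second of deliberation (hypothetical)

algorithms = {
    # name: (expected utility of the action it picks, seconds of thinking it needs)
    "exhaustive search": (10.0, 6.0),
    "quick heuristic":   (8.0, 0.5),
}

def bounded_score(utility, seconds):
    return utility - TIME_COST * seconds

best = max(algorithms, key=lambda name: bounded_score(*algorithms[name]))
print(best)   # "quick heuristic": 8 - 1 = 7 beats 10 - 12 = -2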

Bounded optimality deserves to be more widely known because of its implications for both machines and people. As artificial intelligence systems play larger roles in our lives, understanding the tradeoffs that inform their design is critical to understanding the actions that they take—machines are already making decisions that affect the lives of pedestrians. But understanding the same tradeoffs is just as important to thinking about the design of those pedestrians. Human cognition is finely tuned to make the most of limited on-board computational resources. With a more nuanced notion of what constitutes rational action, we might be better able to understand human behavior that would otherwise seem irrational.

brian_eno's picture
Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon; Recording Artist

The great promise of the Internet was that more information would automatically yield better decisions. The great disappointment is that more information actually yields more possibilities to confirm what you already believed anyway. 

scott_draves's picture
Software Artist

In 1620 Sir Francis Bacon published Novum Organum and kicked off the scientific revolution by defining its basic method: hypothesis, experiment, and result. By 1687 we had Newton’s Principia, and the rest is history.

Today, public primary schools teach the Scientific Method. It’s well known.

It turns out that following the method is not so simple. People, including scientists, are not perfectly rational. People have biases and even when we try to be good, sometimes, unconsciously, we do wrong. When the outcome of an experiment has career implications, things start to get complicated. And when the outcome has financial implications for a powerful institution, people have been known to actively game the system. For example, starting in 1953 the Tobacco Industry Research Committee waged a war on truth, until it was dissolved as part of the master settlement in 1998.

The stakes are high. Tobacco killed 100 million people in the 20th century. Climate change threatens our very way of life.

In the years since the 17th century, science has developed a much more detailed playbook for the scientific method, in order to defend itself against bias. Double blind experiments are an essential part of the modern gold-standard scientific method.

What is a double blind experiment? Consider this scenario:

A pharmaceutical company develops a new arthritis pill, and hires you to prove its efficacy. The obvious experiment is to give the drug to a group of people, and ask them if it relieved their pain. What’s not so obvious is that you should also have a control group. These subjects get placebos (inactive pills), and this is done without the subjects’ knowledge of which kind of pill they get. That’s a single blind experiment, and the idea is that it prevents the subjects’ expectations or desires (the hope that the pill will do something) from influencing their reported results.

There is a remaining problem, however. You, the experimenter, also have expectations and desires, and those could be communicated to the subjects, or influence how the data is recorded. Another layer of blindness can be introduced, so the experimenter does not know which pills are which as they are administered, the subjects are surveyed, and the data is collected. Normally this is done by using a third party to randomly assign the subjects to the groups, and keeping the assignment secret from even the researchers, until after the experiment is complete. The result is a double blind experiment: Both the subjects and experimenters are unaware of who got what.
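A minimal sketch of that blinding step, with a third party holding the sealed key until the data are collected; the group sizes and labels are arbitrary.

# Minimal sketch of third-party randomization for a double blind trial:
# subjects and experimenters see only coded labels; the key stays with a
# third party until after data collection. Sizes and labels are arbitrary.
import random

subjects = [f"subject_{i:02d}" for i in range(1, 21)]
assignments = ["drug"] * 10 + ["placebo"] * 10
random.shuffle(assignments)

key = dict(zip(subjects, assignments))       # held sealed by the third party
blinded_labels = {s: f"pill_{i:02d}" for i, s in enumerate(subjects)}  # all anyone else sees

# ... run the trial using only blinded_labels ...
# Only after the last measurement is recorded is `key` opened and matched up.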

The first single blind experiment was performed in 1784 by Benjamin Franklin and Antoine Lavoisier. They were commissioned by the French Academy of Sciences to investigate Franz Mesmer’s claims of animal magnetism. The claims were debunked.

The first recorded double blind experiment was done in 1835 in Nürnberg, in what was then Bavaria. Friedrich Wilhelm von Hoven, a public health official and hospital administrator, got into a public dispute with Johann Jacob Reuter, who claimed, at odds of 10 to 1, that a single grain of salt dissolved in 100 drops of snow-melt, and then diluted 30 times by a factor of 100 each time, would produce “extraordinary sensations” in one who drank it. Twenty-five samples of homeopathic salt-water and 25 samples of plain distilled water were randomly assigned to the subjects. The assignment was sealed, and the water was administered. In the end, 8 subjects did report feeling something, but 3 of those had actually had plain water, the placebo. Reuter lost the “bet” by the rules they had agreed on in advance.

That was huge progress, and science and medicine have come a long way as a result.

Invaluable though double blind experiments are, the process is still imperfect. The existence of one double blind study cannot be considered conclusive. If the question is of consequence and you do your research, you will likely find many competing studies, authors, and institutions. Reputations and careers come into play. Research labs get long-term funding from corporations and governments who have skin in the game, and over time their influence works its magic.

Truth can be lost in the hall of mirrors of the Internet.

There is a constant struggle in science to distinguish signal from noise, to discern a pattern in experimental data, and to come up with a theory that explains it all. That’s hard enough! But there is another, deeper struggle, as well, against bias and influence, both in ourselves and in society.

This struggle is not just against our ignorance, but also against intentional adversaries to the process. Over and over again, blinding has proven itself as a key to fighting biases, and discovering truth more quickly.

It’s ironic that controlled blindness is an engine of insight, and even perhaps a cornerstone of our civilization.

joshua_bongard's picture
Cyril G. Veinott Green and Gold Professor, Department of Computer Science, University of Vermont; Author, How the Body Shapes the Way We Think

I.J. Good, a collaborator of Alan Turing’s during World War II, once wrote that a machine more intelligent than humans would be the “last invention” we make. Many believe that our society is about to reach this point, as smart machines increasingly invade our daily lives and imaginations. This invasion is proceeding along two fronts. Such machines are increasingly to blame for economic change, leading some people to question the necessity of work altogether. But, more deeply, we wonder whether such ultraintelligent machines will be “like us”: they may think, but will they feel?

To help anchor these latter kinds of questions, Stevan Harnad formulated the symbol grounding problem. For a machine, it’s problematic to learn what the symbols that make up human languages mean by simply transforming heard words into spoken words. It’s like trying to learn Egyptian hieroglyphs using a dictionary in which each hieroglyph is defined using other hieroglyphs. Consider this exchange I just had with the most high-powered chatbot on the web:

Me: Hi there.
It: Hello, how are you?
Me: Not too bad. You?
It: Fine thank you.
Me: I've been thinking lately.
It: Thinking about what?
Me: The symbol grounding problem.
It: Can you tell me a happy story?

The solution to the symbol grounding problem is to ground symbols in the rich soil of physical experience. For example, a robot may learn that the symbol “jump” correlates with sudden loss of pressure of the soles of its feet. This robot may begin to empathize with humans by asking one of them to jump, observing her feet leaving the ground, and inferring that she feels the same things that it feels when it jumps. Finally, such a machine may recursively ground increasingly abstract symbols in these already-grounded symbols, until eventually it understands the English idiom “don’t jump to conclusions” the first time it hears it.
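A toy version of that grounding step: correlate occurrences of a word token with candidate body signals and attach the word to the signal it tracks best. The time series and sensor names below are invented for illustration.

# Toy sketch of grounding a symbol in sensor data: the word "jump" gets
# attached to whichever body signal its occurrences track most closely.
heard_jump         = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # 1 = the word "jump" was heard
foot_pressure_drop = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # 1 = sudden loss of sole pressure
arm_motion         = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # an unrelated signal

def match_score(word, signal):
    # fraction of time steps on which the word and the signal agree
    return sum(w == s for w, s in zip(word, signal)) / len(word)

signals = {"foot_pressure_drop": foot_pressure_drop, "arm_motion": arm_motion}
grounding = max(signals, key=lambda name: match_score(heard_jump, signals[name]))
print(grounding)   # prints "foot_pressure_drop": the symbol is grounded in that experience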

The public should care about this seemingly esoteric corner of cognitive science because machines that do not ground symbols in this way will be dangerous. Consider my exchange with the chatbot. Not only did it not know that its final comment was “wrong,” but it failed to predict that I would be frustrated or amused by the comment. Similarly, another machine may fail to predict my terrified response to its actions.

Current machines can now, after receiving a million photographs containing a human and another million that do not, tell you whether or not a new photograph contains a human, without having to ground symbols in experience. But consider another data set, composed of two million conversations: In the first million, the speakers are discussing how best to help Bob; in the second million, they are conspiring to harm him. Current state-of-the-art machines cannot tell you whether the speakers in a new conversation intend to help or harm Bob.

Most humans can listen to a conversation and predict whether the person being discussed is in danger. It may be that we can do so because we have heard enough discussions in real life, books, and movies to be able to generalize to the current conversation, not unlike computers that recognize humans in previously-unseen photographs. However, we can also empathize by connecting words, images, and physical experience: We can put ourselves in the shoes of the people talking about Bob, or into the shoes of Bob himself. If one speaker says “one good turn deserves another” and follows it with a sarcastic sneer, we can take those verbal symbols (“one,” “good,” …), combine them with the visual cue, and do some mental simulation.

First, we can go back in time to inhabit Bob’s body mentally, and imagine him/us acting in a way that lessens the speaker’s hunger or assuages another of her physical or emotional pains. We can then return to the present as ourselves and imagine saying what she said. We will not follow up the statement with a sneer, as she did. Our prediction has failed.

So, our brain will return to the past, inhabit Bob’s body again, but this time mentally simulate hurting the speaker in some way. During the act we transfer into the speaker’s body and suffer her pain. Back in the present, we would imagine ourselves saying the same words. Also, feelings of anticipated revenge would be bubbling up inside us, bringing a sneer to our lips, thus matching the speaker’s sneer. So: we predict that the speakers wish to harm Bob.

Growing evidence from neuroscience indicates that heard words light up most parts of the brain, not just some localized language module. Could this indicate a person twisting words, actions, their own former felt experiences, and mental body snatching into sensory/action/experiential braided cables? Might these cables support a bridge from the actions and feelings of others to our own actions and feelings, and back again?

These machines may be useful and even empathetic. But would they be conscious? Consciousness is currently beyond the reach of science, but one can wonder. If I “feel” your pain, the subject and the object are clear: I am the subject and you are the object. But if I feel the pain of my own stubbed toe, the subject and object are not as obvious. Or are they? If two humans can connect by empathizing with each other, cannot two parts of my brain empathize with each other when I hurt myself? Perhaps feelings are verbs instead of nouns: they may be specific exchanges between cell clusters. May consciousness then not simply be a fractal arrangement of ever smaller sensory/motor/experiential braids grounding the ones above them? If myths tell us that the Earth is flat and rests on the back of a giant turtle, we might ask what holds up the turtle. The answer, of course, is that it’s turtles all the way down. Perhaps consciousness is simply empathy between cell clusters, all the way down.

lisa_feldman_barrett's picture
University Distinguished Professor of Psychology, Northeastern University; Research Neuroscientist, Massachusetts General Hospital; Lecturer in Psychiatry, Harvard Medical School; Author, Seven and a Half Lessons About the Brain

Right now, as your eyes glide across this text, you are effortlessly understanding letters and words. How does your brain accomplish this remarkable feat, converting blobs of ink (or patterns of tiny pixels) into full-fledged ideas? Your brain uses concepts that you’ve accumulated throughout your lifetime. Each letter of the alphabet, each word, and each sequence of words that stands for an idea is represented in your brain by concepts. Even more remarkably, you can often comprehend things you’ve never seen before, like a brand new word in the middle of a sentence. You can see an unfamiliar breed of dog and still instantly know it’s a dog. How does your brain achieve these everyday marvels? The answer is: concepts in combination.

Most scientists will tell you that your brain contains a storehouse of concepts to categorize the objects and events around you. In this view, concepts are like dictionary definitions stored in your brain, such as “A pet is an animal companion that lives with you.” Each concept is said to have an unchanging core that’s shared by all individuals. Decades of research, however, show this is not the case. A concept is a dynamic pattern of neural activity. Your brain does not store and retrieve concepts—it makes concepts on the fly, as needed, in its network of billions of communicating neurons. Each time you construct the “same” concept, such as “Dog,” the neural pattern is different. This means a concept is a population of variable instances, not a single static instance, and your mind is a computational moment within a constantly predicting brain.

Whenever your brain encounters any sensory inputs, whether familiar or novel, it tries to produce an answer to the question, “What is this like?” In doing so, your brain constructs a concept out of bits and pieces of past experience. This process is called conceptual combination. Without it, you’d be experientially blind to anything you hadn’t encountered before.

Conceptual combination occurs every time your brain makes a concept for use, but it’s easiest to imagine when the combination is explicit, like the concept “Purple Elephant with Wings.” As another example, consider the science fiction movie The Matrix, when the shocking secret is revealed that the matrix is powered by electrical hookups to live human bodies. You need conceptual combination to construct the novel concept “Person as a Battery” in order to experience the horror.

The more familiar a concept—that is, the more frequently you’ve constructed it—the more efficiently your brain can make it by conceptual combination. Your brain requires less energy to construct the concept “Dog” than the combination, “Hairy, friendly, loyal animal with two eyes, four legs, and a slobbering tongue, who makes a barking sound, eats processed food from a bowl, and rescues children from danger in Disney movies.” That sort of combination is what your brain would have to do if it created the concept “Dog” for the first time. The word “dog” then helps your brain create the concept efficiently in the future. That’s what happened in recent years with the concept “Hangry,” which began as a combination of “Hungry” and “Angry” and “Irritable” but is now more efficiently constructed in many American brains.

You experience the effort of conceptual combination when you venture to a new culture full of unfamiliar concepts. Some concepts are universally known—a face is a face in any culture—but plenty are culture-specific, such as social concepts that serve as the glue for civilization. For example, in the United States, we have a concept, ”a thumbs-up gesture indicates that all is well.” Some other cultures, however, don’t have this concept; to them, the same hand gesture is an insult. These kinds of conceptual differences are a major reason why culture-switching is stressful and communication across cultures can be perilous.

Conceptual combination can also be fun. Anytime you’ve laughed at a stand-up comic who juxtaposed two unrelated ideas in a humorous way, you’re combining concepts. Innovation, the holy grail of business success, is effectively conceptual combination for profit.

Some brains are unable to do conceptual combination. Temple Grandin, one of the most eloquent writers with autism, describes her difficulties with conceptual combination in How Does Visual Thinking Work in the Mind of a Person with Autism: “When I was a child, I categorized dogs from cats by sorting the animals by size. All the dogs in our neighborhood were large until our neighbors got a Dachshund. I remember looking at the small dog and trying to figure out why she was not a cat.” Naoki Higashida, a teenager with autism, answers the question “What is this like?” by deliberately searching his memory, rather than automatically constructing the best fitting instance as most people’s brains do. “First, I scan my memory to find an experience closest to what’s happening now,” he writes in The Reason I Jump. “When I’ve found a good close match, my next step is to try to recall what I said the last time. If I’m lucky, I hit upon a usable experience and all is well.” If Naoki is unlucky, he becomes flustered, unable to communicate.

Scientists consider conceptual combination to be one of the most powerful abilities of the human brain. It’s not just for making novel concepts on the fly. It is the normal process by which your brain constructs concepts. Conceptual combination is the basis for most perception and action.

cristine_h_legare's picture
Associate Professor, Department of Psychology, The University of Texas at Austin; Director, Cognition, Culture, and Development Lab

Over the 7 million years since humans and chimpanzees shared a common ancestor, the inventory of human tools has gone from a handful of stone implements to a technological repertoire capable of replicating DNA, splitting atoms, and traveling between planets. In the same evolutionary timespan, the chimpanzee toolkit has remained relatively rudimentary. It was "tool innovation"—constructing new tools or using old tools in new ways—that proved crucial in driving increasing technological complexity over the course of human history.

How can we explain this wide divergence in technological complexity between such closely related primate species? One possibility is that humans are unique among primate species in our capacity to innovate. If so, we might expect that innovation would be early-developing like walking or language acquisition. And yet there is little evidence for precocious innovation in early childhood. Although young children are inquisitive and keen to explore the world around them, they are astonishingly poor at solitary tool innovation. New Caledonian crows and great apes outperform young children in tool innovation tasks. This is particularly striking given the dazzling technological and social innovations associated with human culture. How does a species with offspring so bad at innovation become so good at it? 

Technological complexity is the outcome of our species’ remarkable capacity for cumulative culture; innovations build on each other and are progressively incorporated into a population's stock of skills and knowledge, generating ever more sophisticated repertoires. Innovation is necessary to ensure cultural and individual adaptation to novel and changing challenges, as humans spread to every corner of the planet. Cultural evolution makes individuals more innovative by allowing for the accumulation of prefabricated solutions to problems that can be recombined to create new technologies. The subcomponents of technology are typically too complex for individuals to develop from scratch. The cultural inheritance of the technologies of previous generations allows for the explosive growth of cultural complexity.

Children are cultural novices. Much of their time is spent trying to become like those around them—to do what they do, speak like they do, play and reason like they do. Their motivation to learn from, and imitate, others allows children to benefit from and build upon cumulative cultural transmission. Cumulative culture requires the high-fidelity transmission of two qualitatively different abilities—instrumental skills (e.g., how to keep warm during winter) and social conventions (e.g., how to perform a ceremonial dance). Children acquire these skills through high-fidelity imitation and behavioral conformity. These abilities afford the rapid acquisition of behavior more complex than could ever otherwise be learned exclusively through individual discovery or trial-and-error learning.

Children often copy when uncertain. This proclivity is enormously useful given that a vast amount of behavior that we engage in is opaque from the perspective of physical causality. High-fidelity imitation is an adaptive human strategy facilitating more rapid social learning of instrumental skills than would be possible if copying required a full causal representation of an event. So adaptive, in fact, that it is often employed at the expense of efficiency, as seen when kids "over-imitate" behavior that is not causally relevant to accomplishing a particular task.

The unique demands of acquiring instrumental skills and social conventions such as rituals provide insight into when children imitate, when they innovate, and to what degree. Instrumental behavior is outcome-oriented. Innovation often improves the efficiency of solving defined problems. When learning instrumental skills, with an increase in experience, high-fidelity imitation decreases. In contrast, conventional behavior is process-oriented. The goals are affiliation and group inclusion. When learning social conventions, imitative fidelity stays high, regardless of experience, and innovation stays low. Indeed, innovation impedes learning well-prescribed social conventions. Imitation and innovation work in tandem, deployed at different times for different purposes, to support learning group-specific skills and practices. The distinct goals of instrumental skills and social conventions drive cumulative culture and provide insight into human cognitive architecture.

Cumulative culture allows the collective insights of previous generations to be harnessed for future discoveries in ways that are more powerful than the solitary brainpower of even the most intelligent individuals. Our capacity to build upon the innovations of others within and across generations drives our technological success. The capacity for cumulative culture has set our genus Homo on an evolutionary pathway remarkably distinct from the one traversed by all other species.

Ernst Pöppel
Head of Research Group Systems, Neuroscience and Cognitive Research, Ludwig-Maximilians-University Munich, Germany; Guest Professor, Peking University, China

Modern biology is guided by a principle that Theodosius Dobzhansky summarized in 1973 with the memorable sentence: “Nothing in biology makes sense except in the light of evolution.” On the basis of this conceptual frame I both generalize and suggest more specifically that nothing in neurobiology, psychology, the social sciences (or cognitive science in general) makes sense except in the light of synchronization, i.e., the creation of common time for temporally and spatially distributed sources of information or events. Without synchronization, neural information processing, cognitive control, emotional relations, or social interactions would be either impossible or severely disrupted; without synchronization we would be surrounded by informational chaos, desynchronized activities, unrelated events, or misunderstandings.

Synchronization as a fundamental principle is implemented on different operating levels by temporal windows with different time constants, from the sub-second range to seconds up to days, as reflected in circadian rhythms and even annual cycles. Temporal windows are both the basis for the creation of perceptual, conceptual and social identity, and they provide the necessary building blocks for the construction of experiential sequences, or for behavioral organisation. Thus, it can be said: “Nothing in cognitive science makes sense except in the light of time windows.”

To speak of time windows necessarily implies that information processing is discontinuous, or discrete. Although of fundamental importance, the question of whether information processing on the neural or cognitive level is continuous or discrete has been neglected in this domain of scientific endeavor; usually, continuous processing of information is uncritically taken for granted. The implicit assumption of a continuous processing mode may be due to an unquestioned orientation of cognitive research and psychological reasoning towards classical physics. This is how Isaac Newton in 1686 defines time: “Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external.” With this concept of an “equal flow,” time serves as a uni-dimensional and continuous “container” within which events happen at one time or another. Does this theoretical concept of temporal continuity provide a solid conceptual background when we want to understand neural and cognitive processes? The answer is “no.” The concept of time windows speaks against such a frame of reference. Temporal processing must be discrete to allow for efficient complexity reduction of information on different operating levels.

David DeSteno
Professor of Psychology at Northeastern University; Author, How God Works

If I offered to give you $20 today or $100 in a year, which would you choose? It’s a pretty straightforward question, and, truth be told, one that has a logical answer. Unless you need that $20 to ensure your near-term survival, why not wait for the bigger prize, if it were sure to come? After all, when was the last time that any reputable financial institution offered an investment vehicle guaranteed to quintuple your money in 365 days? Yet, if you pose this question to the average person, you’ll be surprised to find that most will opt to take the $20 and run. Why? To understand that, and the implications it holds for decisions in many domains of life, we first have to put the framework of the decision in context.

This type of decision—one where the consequences of choices change over time—is known as an “intertemporal choice.” It’s a type of dilemma well-studied by economists and psychologists, who often facepalm at the seemingly irrational decisions people make when it comes to investing for the future, but it’s a less familiar one, at least in name, to many outside those fields. The “irrational” part of intertemporal decisions derives from the fact that humans tend to discount the value of future rewards excessively, making it difficult to part with money that could offer pleasure in the moment in order to allow it to grow and, thereby, secure greater satisfaction and prosperity in the future.
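
To make the discounting concrete, here is a minimal numerical sketch (mine, not the essay's): it compares $20 now with $100 in a year under two textbook discount curves, exponential and hyperbolic, with discount parameters chosen purely for illustration.

```python
# Minimal sketch (illustrative parameters, not empirical estimates): how steeply
# the future is discounted decides whether $20 now beats $100 in a year.

def exponential_value(amount, delay_years, annual_rate=0.10):
    """Present value under exponential discounting: V = A / (1 + r)^t."""
    return amount / (1.0 + annual_rate) ** delay_years

def hyperbolic_value(amount, delay_years, k=6.0):
    """Present value under hyperbolic discounting: V = A / (1 + k*t)."""
    return amount / (1.0 + k * delay_years)

now, later = 20.0, 100.0
print("exponential:", exponential_value(now, 0), "vs", round(exponential_value(later, 1), 2))
print("hyperbolic: ", hyperbolic_value(now, 0), "vs", round(hyperbolic_value(later, 1), 2))
```

Under the mild exponential curve the delayed $100 is still worth about $91 today, so waiting wins; under the steep hyperbolic curve it is worth only about $14, and grabbing the $20 suddenly looks sensible.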

Troubling as this situation might be for your 401(k), it’s essential to recognize that the origin of intertemporal choices, and as a result, the domains to which this framework can profitably be applied, aren’t limited to financial ones. In truth, much of human social life—our morality, our relationships—revolves around challenges posed by intertemporal choice. Do I pay back a favor I owe? If not, I’m certainly ahead in the moment, but over time I’ll likely lose any future opportunities for cooperation, not only with the person I left hanging but also with any others who learn of my reputation. Should you cheat on your spouse? Although it might lead to pleasure in the short term, the long-term losses to satisfaction, assuming your marriage was a good one, are likely to be devastating. Should you spend long hours to hone a skill that would make you valuable to your team or group rather than spending a summer’s day enjoying the weather? Here again, it’s the sacrifice of pleasure in the short term that can pave the way for greater success in the long one.

It’s challenges like these—ones involving cooperation, honesty, loyalty, perseverance, and the like—that were the original intertemporal dilemmas our ancestors faced. If they weren’t willing to accept smaller benefits in the short term by being less selfish, they weren’t going to have many friends or partners with whom to cooperate and sustain themselves in the long one. To thrive, they needed to demonstrate good character, and that meant they needed self-control to put off immediate gratification.

Today, when we think about self-control, we think about marshmallows. But what is the marshmallow test really? In point of fact, it’s just a child-friendly version of a dilemma of intertemporal choice: one sweet now, or two later? And as Walter Mischel’s work showed, an ability to have patience—to solve an intertemporal choice by being future-oriented—predicted success in realms ranging from investing, to academics, to health, to social life. In short, being able to delay gratification is a marker of character. People who can do so will be more loyal, more generous, more diligent, and more fair. In truth, it’s because dilemmas of intertemporal choice underlie social living that they can so easily be applied to economics, not the other way around. After all, self-control didn’t evolve to help us manage economic capital; it came about to help us manage social capital.

Recognition of this fact offers two important benefits. First, it provides a framework with which to unify the study of many types of decisions. For example, the dynamics and, as a consequence, the psychological mechanisms that underlie cheating and compassion will overlap with those that underlie saving and investing. After all, sacrificing time or energy to help another will build long-term capital just as does saving money for retirement. What’s more, this decision framework is a scalable one. For example, the dilemmas posed by climate change, overfishing, and related problems of sustainability are nothing if not intertemporal at base. Solving them requires a collective willingness to forgo immediate profits (or to pay higher prices) in the short term to reap larger, communal gains in the long term.

The second benefit that comes from recognizing the broad reach of intertemporal choice is the expansion of the tool set that can be used to solve its associated dilemmas. While economists and self-control researchers traditionally emphasize using reason, willpower, and the like to overcome our inherent impatience for pleasure, realizing the intertemporal nature of many moral dilemmas suggests an alternate route: the moral emotions. Gratitude makes us repay our debts. Compassion makes us willing to help others. Guilt prevents us from acting in selfish ways. These moral emotions—ones intrinsically linked to social living—lead people, directly or indirectly, to value the future. They enhance our character, which, when translated into behavior, means they help us to share, to persevere, to be patient, and to be diligent.

So as the new year dawns, remember that most resolutions people will make for the next 365 days and beyond will have an intertemporal aspect. Whether it’s to save more, to eat less, to be kind, or to reduce a carbon footprint, it will likely require some forbearance. And helping people to keep that forbearance going will require all of us—scientists and nonscientists alike—to continue exploring, in a multidisciplinary manner, the mind’s inclination toward selfish, short-term temptations and its many mechanisms for overcoming them.

Bruce Hood
Chair of Developmental Psychology in Society, University of Bristol; Author, The Self-Illusion; Founder, Speakezee

The media are constantly looking for significant new discoveries to feed to a general public that wants to know what controls our lives and how to better them: cut down on salt, eat more vegetables, avoid social networking sites, and so on—factors that have been reported to have significant effects on the health and welfare of individuals.

Scientists also seek significance—though it is a technical term whose meaning differs from its common usage. In science, a highly significant discovery is one that is more likely to reflect a real state of Nature than a random fluctuation. However, when society hears that something is significant, this is interpreted as an important finding with major impact. The problem is that patterns can be highly significant but not very meaningful to the individual. This is where effect size comes into play—a concept that ought to be more widely known.

Calculating effect size (e.g., “Cohen's d”) involves mathematics beyond the scope of this piece, but suffice it to say that it takes population distributions into account when estimating how strong an effect is. As such, effect size, rather than significance, is a more meaningful measure of just how important a pattern really is in relation to all the other patterns that influence our lives.
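
For readers who want to see the arithmetic, here is a minimal sketch of one common form of Cohen's d (the difference between two group means divided by their pooled standard deviation), computed on invented numbers purely for illustration.

```python
# Minimal sketch (invented numbers, not real data): Cohen's d as the gap between
# two group means divided by their pooled standard deviation.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Two made-up samples whose means differ only slightly relative to their spread:
group_a = [52, 48, 55, 50, 47, 53, 49, 51]
group_b = [51, 49, 54, 50, 46, 52, 48, 50]
print(round(cohens_d(group_a, group_b), 2))  # a small d, however "significant" a huge sample might make the p-value
```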

Effect size is best calculated from a number of independent studies, to avoid the problems inherent in limited observations. The more scientists studying a phenomenon, the better, as there is less opportunity for errors or mendacious manipulation. A meta-analysis is a study of all the studies that have sought to measure a phenomenon, and it is the best way to calculate effect sizes. As there are so many factors that can influence observations—population differences, sampling errors, methodological differences, and so on—it makes sense to gather together as much evidence as possible when estimating the effect size for a phenomenon.

Consider the reputed difference between the mathematical ability of boys and girls—a highly contentious debate that has even been showcased in the pages of this publication. Meta-analyses of over 3 million children have found significant differences between boys and girls in elementary school, but the effect sizes are so small (d less than 0.05) as to be meaningless. The male advantage that does emerge later in schooling is due to factors other than gender that increasingly play a role in mathematical ability.

Humans are complex biological systems affected by a plethora of mechanisms from genetic inheritance to environmental changes that interact in ways that vary from one individual to another and are too complex to map or predict. In an effort to isolate mechanisms, scientists often strip away extraneous variables to reveal the core factors under investigation, but in doing so, create a false impression of the true influence of the mechanism in relation to all the others that play a role. What they find may be significant, but effect size tells you whether it is meaningful.

Lee Smolin
Physicist, Perimeter Institute; Author, Einstein's Unfinished Revolution

Leibniz was famously satirized by Voltaire, in his novella Candide, as saying that ours is the best of all possible worlds. While that caricature played well, what Leibniz actually wrote, in 1714, in his Monadology, was a good deal more interesting. He did argue that God chose the one real world from an infinitude of possible worlds, by requiring it to have “as much perfection as possible.” But what is often missed is how he characterized degrees of perfection. Leibniz defined a world with “as much perfection as possible” to be one having “as much variety as possible, but with the greatest order possible.”

I believe that Leibniz’s insight of a world that optimizes variety, subject to “the greatest order possible”, is a powerful concept that could be helpful for current work in biology, computer science, neuroscience, physics and numerous other domains including social and political theory and urban planning.

To explain why, I have to define variety. I believe we ought to see variety as a measure of complexity which applies to systems of relationships. These are systems of individual units, which each have a unique set of interactions or relationships with the other units in the system. Leibniz saw the universe as just such a system of relationships. In a Leibnizian world, an object’s properties are not intrinsic to it; rather, they reflect the relationships or interactions that object has with other objects.

Systems of relationships are often visualized as graphs or networks. Each element is represented by a node and two nodes are related when they are connected by a line. We know of a great many systems, natural and artificial, that can be represented by such a network. These include ecosystems, economies, the internet, social networks etc.

What does it mean for such a relational world to have high variety? I would argue that variety is a measure of how unique each element’s role in the network is. In a Leibnizian world, each element has a view of the rest which summarizes how it is related to the other elements. An element’s view tells us what the whole system looks like from its point of view. Variety is a measure of how distinct these different views are.

Leibniz expressed this almost poetically.

"This interconnection (or accommodation) of all created things to each other, brings it about that each simple substance has relations that express all the others, and consequently, that each simple substance is a perpetual, living mirror of the universe.”

He then reaches for a striking metaphor, of a city.

“Just as the same city viewed from different directions appears entirely different and, as it were, multiplied perspectively, in just the same way it happens that, because of the infinite multitude of simple substances, there are, as it were, just as many different universes, which are, nevertheless, only perspectives on a single one.”

One can almost hear Jane Jacobs in this, when she praises a good city as one with many eyes on the street.

A system of relations can, I believe, be said to have its maximal variety when the different views are maximally distinct from each other.

A city has low variety if the views from many of the houses are similar. A city has high variety if it is easy to tell, just by looking out the window, which street you are on.

An ecosystem is a system of relations, such as who eats who. A niche is a situation characterized by what you eat and who eats you. An economy is a system of relations including who buys what from who. The variety of an ecosystem is a measure of the extent which each species has a unique niche. The variety of an economy measures the uniqueness of each firm’s role in the market.

It is commonly asserted that ecosystems and economies evolve to higher degrees of complexity. But to develop these ideas we need a precise notion of complexity. Negative entropy does not suffice to measure complexity because the network of chemical reactions in our bodies and a regular lattice both have low entropy. We want a measure of complexity that recognizes that chemical reaction networks are far more complex than either random graphs or regular lattices.

I would suggest that variety is a very helpful notion of complexity, because it distinguishes the truly complex from the regular. In a high-variety network it is easy to know where you are just by looking around at your neighbourhood. In other words, the less information you need about the neighbourhoods to distinguish each node from the rest, the higher the variety. This captures a notion of complexity distinct from and, perhaps, more useful than, negative entropy.
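
As a toy illustration (my own operationalization, not Smolin's formal definition), one might summarize each node's "view" by the degrees of its immediate neighbours and score variety as the fraction of nodes whose views are unique:

```python
# Toy sketch (an illustrative operationalization, not Smolin's definition):
# each node's "view" is the sorted list of its neighbours' degrees, and variety
# is the fraction of nodes whose view is shared by no other node.

def views(adjacency):
    degree = {node: len(nbrs) for node, nbrs in adjacency.items()}
    return {node: tuple(sorted(degree[m] for m in nbrs)) for node, nbrs in adjacency.items()}

def variety(adjacency):
    vs = list(views(adjacency).values())
    return sum(vs.count(v) == 1 for v in vs) / len(vs)

# A four-node ring (a regular lattice): every view is identical.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
# An irregular graph: every neighbourhood looks different.
irregular = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
print(variety(ring), variety(irregular))  # 0.0 versus 1.0
```

The ring scores zero because every node sees exactly the same thing; the irregular graph scores one because each node could tell where it is just by looking around.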

Having defined variety, we can go back and try to imagine what Leibniz meant by maximizing variety but “with the greatest order possible.” Order can mean subject to law. Can there then be a law of maximal variety?

Such a law might be emergent in a complex system such as an economy or ecology. This might arise as follows: when the variety is maximal, the network is most efficient because there is a maximum amount of cooperation and a minimum of redundancy. Entities compete not to dominate a single niche, but to invent new ways to cooperate by inventing new niches, which have a novel interrelation to the rest.

Jason Wilkes
Graduate student in Psychology, UC Santa Barbara; Author, Burn Math Class

Where does mathematics come from? I'm not talking about the philosophical question of whether mathematical truths have an existence independent of human minds. I mean concretely. What on earth is this field? When a mathematician makes cryptic pronouncements like, "We define the entropy of a probability distribution to be such-and-such," who or what led them to explore that definition over any other? Are they accepting someone else's definition? Are they making up the definition themselves? Where exactly does this whole dance begin? 

Mathematics textbooks contain a mixture of the timeless and the accidental, and it isn't always easy to tell exactly which bits are necessary, inevitable truths, and which are accidental social constructs that could have easily turned out differently. An anthropologist studying mathematicians might notice that, for some unspecified reason, this species seems to prefer the concept of "even numbers" over the barely distinguishable concept of "(s?)even numbers," where a "(s?)even number" is defined to be a number that's either (a) even, or (b) seven. Now, when it comes to ad hoc definitions, this example is admittedly an extreme case. But in practice not all cases are quite this clear cut, and our anthropologist still has an unanswered question: What is it that draws the mathematicians to one definition and not the other? How do mathematicians decide which of the infinity of possible mathematical concepts to define and study in the first place?

The secret is a piece of common unspoken folk-knowledge among mathematicians, but being an outsider, our anthropologist had no straightforward way of discovering it, since for some reason the mathematicians don't often mention it in their textbooks.

The secret is that, although it is legal in mathematics to arbitrarily choose any definitions we like, the best definitions aren't just chosen: they're derived.

Definitions are supposed to be the starting point of a mathematical exploration, not the result. But behind the scenes, the distinction isn't always so sharp. Mathematicians derive definitions all the time. How do you derive a definition? There's no single answer, but in a surprising number of cases, the answer turns out to involve an odd construct known as a functional equation. To see how this happens, let's start from the beginning.

Equations are mathematical sentences that describe the behaviors and properties of some (often unknown) quantity. A functional equation is just a mathematical sentence that says something about the behaviors—not of an unknown number, but of an entire unknown function. This idea seems pretty mundane at first glance. But its significance becomes clearer when we realize that, in a sense, what a mathematical sentence of this form gives us is a quantitative representation of qualitative information. And it is exactly that kind of representation that is needed to create a mathematical concept in the first place.

When Claude Shannon invented information theory, he needed a mathematical definition of uncertainty. What's the right definition? There isn't one. But, whatever definition we choose, it should act somewhat like our everyday idea of uncertainty. Shannon decided he wanted his version of uncertainty to have three behaviors. Paraphrasing heavily: (1) Small changes to our state of knowledge only cause small changes to our uncertainty (whatever we mean by "small"), (2) Dice with more sides are harder to guess (and the dice don't actually have to be dice), and (3) If you stick two unrelated questions together (e.g., "What's your name?" and "Is it raining?") your uncertainty about the whole thing should just be the first one's uncertainty plus the second one (i.e., independent uncertainties add). These all seem pretty reasonable.

In fact, even though we're allowed to define uncertainty however we want, any definition that didn't have Shannon's three properties would have to be at least a bit weird. So Shannon's version of the idea is a pretty honest reflection of how our everyday concept behaves. However, it turns out that just those three behaviors above are enough to force the mathematical definition of uncertainty to look a particular way.
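
As a sketch of how this works (my rendering of the author's paraphrase, not Shannon's exact axioms), write f(n) for the uncertainty of a fair n-sided die and turn the three behaviors into conditions on f:

```latex
% Sketch: the three qualitative behaviors as conditions on f(n), the uncertainty
% of a fair n-sided die (a paraphrase, not Shannon's exact axioms).
\begin{align*}
  &f \ \text{is continuous}   && \text{(small changes, small effects)} \\
  &f(n) < f(n+1)              && \text{(more sides, harder to guess)} \\
  &f(mn) = f(m) + f(n)        && \text{(independent uncertainties add)}
\end{align*}
% Together these force $f(n) = K \log n$; extending the argument to unequal
% probabilities (via the full grouping form of behavior 3) gives
% $H(p_1, \dots, p_n) = -K \sum_i p_i \log p_i$.
```

The qualitative behaviors leave essentially no freedom in the quantitative definition, which is exactly the point of the next paragraph.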

Our vague, qualitative concept directly shapes the precise, quantitative one. (And it does so because the three English sentences above are really easy to turn into three functional equations. Basically just abbreviate the English until it looks like mathematical symbols.) This is, in a very real sense, how mathematical concepts are created. It's not the only way. But it's a pretty common one, and it shows up in the foundational definitions of other fields too.

It's the method Richard Cox used to prove that (assuming degrees of certainty can be represented by real numbers) the formalism we call "Bayesian Probability theory" isn't just one ad hoc method among many, but is in fact the only method of inference under uncertainty that reduces to standard deductive logic in the special case of complete information, while obeying a few basic qualitative criteria of rationality and internal consistency.

It's the method behind the mysterious assertions you might have heard if you've ever spent any time eavesdropping on economists: A "preference" is a binary relation that satisfies such-and-such. An "economy" is an n-tuple behaving like etcetera. These statements aren't as crazy as they might seem from the outside. The economists are doing essentially the same thing Shannon was. It may involve functional equations, or it may take some other form, but in every case it involves the same translation from qualitative to quantitative that functional equations so elegantly embody.

The pre-mathematical use of functional equations to derive and motivate our definitions exists on a curious boundary between vague intuition and mathematical precision. It is the DMZ where mathematics meets psychology. And although the term "functional equation" isn't nearly as attention-grabbing as the underlying concept deserves, it offers valuable and useful insights into where mathematical ideas come from.

Simon Baron-Cohen
Professor of Developmental Psychopathology, University of Cambridge; Fellow, Trinity College, Cambridge; Director, Autism Research Centre, Cambridge; Author, The Pattern Seekers

George Boole, the son of a shoemaker, left school at sixteen and ended up Professor of Mathematics at Queen's College, Cork, in Ireland. As the only breadwinner in his family, he became a teacher at that age, opened his own school in Lincoln, England, by nineteen, and fifteen years later was a mathematician and philosopher of logic. Why did Google celebrate his 200th birthday on the 2nd of November 2015? And why should we remember his contributions?

In 1854 he wrote a book called An Investigation of the Laws of Thought, which distilled the essence of logical thought down to terms like AND (e.g., x AND y), OR (e.g., x OR y), and NOT (e.g., NOT x), or combinations of these. This took Aristotelian logic (the syllogism, expressed in words) and insisted that it be formulated as equations, which was a revolutionary step. Equally important, Boolean logic is today seen as the foundation of the "information age," or what we also call the "computer age." This is because each "value" in these logical statements or equations reduces to being either true or false, with zero ambiguity. The logic is binary. No wonder these equations could be applied, more than a century later, in the design of electronic circuits in computers.
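
A minimal sketch of that zero-ambiguity character: enumerate Boole's three operations as a truth table, and every compound statement collapses to an unambiguous true or false.

```python
# Minimal sketch: Boole's AND, OR and NOT enumerated as a truth table.
# Every combination of inputs yields an unambiguous true/false output.
from itertools import product

for x, y in product([False, True], repeat=2):
    print(x, y, "| x AND y:", x and y, "| x OR y:", x or y, "| NOT x:", not x)
```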

For me, the importance of Boolean logic even goes beyond its contribution to the logic of computers. Each of his terms is an "operation," like the familiar mathematical ones of addition and multiplication. These terms describe what happens if you take something as input, and perform an operation on it. You end up with an output. This is at the core of what I call "systemizing": take input, perform an operation, and observe the output. Boole, without realizing it, was describing how the human mind systemizes. In this way, he anticipated how to describe a uniquely human aspect of cognition, one that enables humans to do engineering (designing a system) and to innovate (changing a system).

Consider a much discussed example: Vinton Cerf, co-inventor of the Internet, was pouring peppercorns into a funnel and found that if he dropped handfuls of peppercorns into it, the funnel got blocked or congested. Nothing came out. But if he poured the peppercorns in one at a time, they didn’t get stuck; they flowed out smoothly. In system 1, the input is a handful of peppercorns, the operation is pouring them into a funnel, and the output is disappointingly nothing! In system 2, the input is one-peppercorn-at-a-time, the operation remains the same, but now the output is a pleasing flow of peppercorns. There are lessons here for how we as humans design not just pepper grinders, but also traffic systems (which either cause or avoid congestion), or the post office, which must cope with a volume of letters. Indeed, Boolean logic allows us to describe not just how an engineer designs any system, but how we humans systemize.

Here’s just one more legacy of Boolean logic. We know that people with autism have very logical minds and a strong drive to systemize. If we can generalize, they have a preference for information that can be systemized (such as factual information, or repeating, lawful patterns that don’t change unlawfully). They don’t cope well with information that is hard to systemize, because it contains ambiguity or because it changes unexpectedly (such as social interaction, where what people do or say is rarely the same, except in highly ritualized contexts). Our modern understanding of autism as a hyper-systemizing mind owes a huge debt to Boolean logic, which enables us to characterize the beauty and the extraordinary power of binary thinking, and also where such thinking is best used.

Jaeweon Cho
Professor of Environmental Engineering, UNIST; Director, Science Walden Center

It is said in the Doctrine of the Mean, written by the grandson of Confucius, that the greatest knowledge, including both scientific concepts and human realizations, comes only from everyday life approached with an empty mind. The title of the book, Doctrine of the Mean, does not mean the middle way between two extremes but the emptiness of the mind in everyday life. We call it Moderation. This teaching of the Doctrine of the Mean connects its essence to Nietzsche's Zarathustra, as Heidegger explains in his book What Is Called Thinking? They all stressed that we cannot think without emptying our minds in everyday life. If one can think, one can do everything: Confucius' grandson called such a person a scholar or scientist, while Nietzsche called him Zarathustra.

After crossing the river on a raft, we have to abandon the raft to climb the mountain. Existing knowledge helps, but it guides us so much that it hinders us from forming the new representations that could become new concepts. When we look at a red flower, we may remind ourselves of the refraction of its color rather than think about the role the color red plays in the flower's survival in nature. When we find a natural phenomenon involving microorganisms, we may rely on genetic information, such as functional genes, rather than stick to the phenomenon as it is. However, when we observe an object or experience an event, we can create the representation that brings new knowledge only when our mind is empty.

How do we empty a mind filled with knowledge? The discourses of Confucius advise two ways. First, we can empty our mind by empathizing with the object (whether nature or human) that we observe. When we meet a person who suffers, we may bring our mind into the mind of the suffering person, which is called empathy, instead of reminding ourselves of a social welfare system or related knowledge. When we visit and observe a river polluted with algae, we may bring in the knowledge of eutrophication by nitrogen and phosphorus to explain it. Ironically, however, it is not easy to go beyond the thoughts of a scientist armed with this knowledge. Only when we bring ourselves into the polluted river do we have a way to obtain new knowledge beyond the existing kind.

Second, we can empty our mind by thinking about justice whenever we have a chance to benefit ourselves, which surely applies to the scientist. We may understand this through the issue of climate change. Confucius asks us whether we want to solve the emergent problems of climate change or to profit from them. Confucius notes that the consequences of empathy and of acts of justice, in response to an event, can become knowledge. But we have to recognize that this knowledge is ephemeral, as it arises only in relation to the event. That is why Confucius never defines any knowledge but only gives examples from which we can understand the related knowledge.

Robert Sapolsky
Neuroscientist, Stanford University; Author, Behave

I’m voting for this concept, one that is so central to the scientific process, so much a given, that hardly any scientist ever actually speaks those words.

Scientists present their work—say, “We manipulated Variable X, and observed that this caused Z to happen,” or “We measured this and found that it takes Z amount of time to happen.” And when they do, most of the time what they’re actually saying is, “We manipulated Variable X, and observed that, on the average, this caused Z to happen.” “We measured this and found that, on the average, it takes Z amount of time to happen.” Everyone knows this.

Of course. Everyone in a population doesn’t have the exact same levels of something or other in their bloodstream. A causes B to happen most of the time, but not every single time. There’s variability.

When scientists present their data, they typically display the average—the mean, the X on a graph, the bar of a particular height in the figure. And it always comes with an “error term”—a measure of how much variability there was around that average, a measure of how much confidence there is in saying “on the average.” Measure something or other in three people and observe values of 99, 100, and 101, producing an average of 100; measure that something in three other people and observe values of 50, 100, and 150, also producing an average of 100. These are two very different circumstances; “the average was 100” tells you a lot more about how some sliver of the universe works in the first case than in the second.
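
Running the two three-person samples from this paragraph through a standard calculation makes the point: identical averages, wildly different error terms.

```python
# The two samples from the paragraph above: same average, very different spread.
from statistics import mean, stdev

tight = [99, 100, 101]
wide = [50, 100, 150]

for sample in (tight, wide):
    print(f"values = {sample}, mean = {mean(sample)}, standard deviation = {stdev(sample):.1f}")
```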

This is how scientists in most disciplines go about their business, with the recognition that you’re always seeing how things work on the average. So why is it important that this be more widely known? I can think of at least three reasons, of increasing importance.

First, this should constitute a big shout-out to scientists and the scientific process. Perhaps counter to the general perception, scientists don’t pronounce upon some fact that they have discovered; they pronounce upon the temporary way station of statistical confidence that they have discovered. “On the average” means that there’s stuff you can’t explain yet, and there’s even the possibility that you’re entirely wrong. It’s a badge of the humility that defines science, when things are working right. And it sure couldn’t hurt if that sort of humility became more commonplace in lots of other settings.

The second reason is that the variability around an average is usually much more interesting than the average itself. “More interesting” in the scientific sense; if variability means that there’s stuff you can’t explain yet, it’s also the guide to where to look to understand things more, to identifying previously unappreciated factors that give rise to the variability.

For example, “On the average, having a particular variant of a gene produces a particular behavior in people….unless, as it then turned out with additional research, someone had a particular type of childhood.” Pursuing the question of why there were exceptions revealed all sorts of things about environmental regulation of gene transcription, child development, gene/environment interactions, and so on.

Moreover, variability is often more interesting in the human sense as well. All things considered, it’s not that exciting that humans average a score of 100 on this thing called an IQ test. It’s the variation that interests us. Or that, on the average, adult human males can run 100 meters in, say, 25 seconds. It’s Usain Bolt that gets our attention. Or that there’s an average life expectancy—it’s what you and your loved ones are destined for that matters. Crowds of protestors don’t gather in some nation’s capital because of the average income in that country; it’s because of the magnitude of the variance, the extent of inequality.

The third reason is the most important and subtle, and ultimately has little to do with science. Take a population of people. Figure out their average height. Their average weight. Average IQ, shoe size, number of friends, hip/waist ratio, radiant brightness of smile, symmetry of face, athletic skill, sex drive, scores on psych instruments that measure perkiness or optimism or gumption. Define your average human across these parameters. And then good luck trying to find such a person. Because they don’t exist. Even if someone seems to be, say, the average weight, they won’t really be if you look closely enough, measuring things out to the level of grams, or milligrams, or micrograms, or…. Nothing and no one is precisely average, because “averageness” is an emergent property of populations, an artificial construct. It’s like a strange attractor in chaotic systems, which oscillates around a singular point, a hypothetical average that, no matter how closely you look, is never actually achieved. Oh, it does “on the average,” but never in reality.

This matters because, psychologically, we tend to morph “average” into “the norm” and then into “normal” or “ideal.” And what that means is that we all always come up short in achieving what we have labeled as normal and ideal; we’re all a little too heavy, or too tall, with a nose that’s a little too much this, a personality that’s a bit too little that. We all deviate from the norm, from something that is an artificial, statistical construct that does not really exist. We all are “abnormal,” in a sense that’s more pejorative than statistical. And thus feel badly about who we are. What “on the average” truly means in populations is liberating.

Terrence J. Sejnowski
Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain

In the 20th century, we gained a deep understanding of the physical world using equations and the mathematics of continuous variables as the chief source of insights. A continuous variable varies smoothly across space and time. But unlike a rocket, which follows Newton’s laws of motion, a tree has no simple description. In the 21st century, we are making progress understanding the nature of complexity in computer science and biology based on the mathematics of algorithms, which often have discrete rather than continuous variables. An algorithm is a step-by-step recipe that you follow to achieve a goal, not unlike baking a cake.

Self-similar fractals grow out of simple recursive algorithms that create patterns resembling bushes and trees. The construction of a real tree is also an algorithm, driven by a sequence of decisions that turn genes on and off as cells divide. The construction of brains, perhaps the most demanding construction project in the universe, is also guided by algorithms embedded in the DNA, which orchestrate the development of connections between thousands of different types of neurons in hundreds of different parts of the brain. 

Learning and memory in brains are governed by algorithms that change the strengths of synapses between neurons according to the history of their activity. Learning algorithms have also been used recently to train deep neural network models to recognize speech, translate between languages, caption photographs, and play the game of Go at championship levels. These are surprising capabilities that emerge from applying the same simple learning algorithms to different types of data.

How common are algorithms that generate complexity? The Game of Life is a cellular automaton that generates objects that seem to have lives of their own. Stephen Wolfram wanted to know the simplest cellular automaton rule that could lead to complex behaviors, and so he set out to search through all of them. The first twenty-nine rules produced patterns that would always revert to boring behaviors: all the nodes would end up with the same value, fall into an infinitely repeating sequence, or lapse into endless chaotic change. But rule thirty dazzled with continually evolving complex patterns. A related rule, rule 110, was later even proved capable of universal computation; that is, it has the power of a Turing machine that can compute any computable function.
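
For the curious, here is a minimal sketch of the rule-thirty update itself (my illustration; the grid width, number of steps, and periodic boundary are arbitrary choices): each cell's next value is looked up from the triple of cells above it, using the binary digits of the number 30 as the lookup table.

```python
# Minimal sketch of rule 30: each cell's next value is read off from the
# (left, centre, right) triple above it, encoded in the binary digits of 30.
# Starting from a single "on" cell, the rows quickly become an irregular,
# seemingly chaotic triangle. (Periodic boundary, for simplicity.)

RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # a single black cell in the middle
for _ in range(16):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```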

One of the implications of this discovery is that the remarkable complexity we find in nature could have evolved by sampling the simplest space of chemical interactions between molecules. Complex molecules should be expected to emerge from evolution and not be considered a miracle. However, cellular automata may not be a good model for early life, and it remains an open question to explore what simple chemical systems are capable of creating complex molecules. It might be that only special biochemical systems have this property, and this could help narrow the possible set of interactions from which life could have originated. Francis Crick and Leslie Orgel suggested that RNA might have these properties, which led to the concept of an RNA world before DNA appeared early in evolution.

How many algorithms are there? Imagine the space of all possible algorithms. Every point in the space is an algorithm that does something. Some are amazingly useful and productive. In the past these useful algorithms were hand-crafted by mathematicians and computer scientists working as artisans. In contrast, Wolfram found cellular automata that produced highly complex patterns by automated search. Wolfram’s law states that you don’t have to travel far in the space of algorithms to find one that solves an interesting class of problems. This is like sending bots onto the Internet to play games like StarCraft, trying all possible strategies. According to Wolfram’s law, there should be a way to find algorithms somewhere in the universe of algorithms that can win the game.

Wolfram focused on the simplest algorithms in the space of cellular automata, a small subspace in the space of all possible algorithms. We now have confirmation of Wolfram’s law in the space of neural networks, which are some of the most complex algorithms ever devised. Each deep learning network is a point in the space of all possible algorithms and was found by automated search. For a large network and a large set of data, learning from different starting points can generate an infinite number of networks roughly equally good at solving the problem. Each data set generates its own galaxy of algorithms, and data sets are proliferating.

Who knows what the universe of algorithms holds for us? There may be whole galaxies of useful algorithms that humans have not yet discovered but can be found by automated discovery. The 21st century has just begun.

Howard Gardner
Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; Author, A Synthesizing Mind

Over the centuries, reflective individuals have speculated about the causes of differences among individuals (who is talented and why, what makes certain persons influential) and among societies (why have certain cultures thrived and others vanished; which societies are bellicose and why). In the 19th century, scholars began systematic study of such issues; work in this vein has continued and expanded. This line of study has been dubbed historiometrics or, alternatively, cliometrics. These terms, while literally and etymologically appropriate, are hardly transparent or snappy—among the reasons that historiometric scholars and approaches have yet to receive due credit.

Among continental scholars, the Belgian statistician Adolphe Quetelet is often credited with opening up this line of work. But in the Anglo-American scholarly community, the British polymath Francis Galton is generally seen as the patron saint of historiometric studies. As a member of the distinguished Darwin family, Galton was particularly interested in the nature and incidence of genius. Using statistical methods, he demonstrated how genius thrived in certain families and attributed this distribution to hereditary factors. Yet Galton was also sensitive to the possible confounds between hereditary and environmental contributions, and so he pioneered comparisons of identical twins, fraternal twins, and other members of the same family. On both sides of the English Channel, historiometric studies had been launched!

In the 20th century, these lines of work, whether or not officially labeled as historiometry, continued in many locales. Deserving special mention is the American psychologist Dean Keith Simonton, who has devoted several decades to historiometric study—carrying out dozens of intriguing studies as well as explicitly laying out the methods available in the armamentarium of the historiometric scholar. If you are speculating about the kinds of issues alluded to above—for example, during which decade of life do scholars in specific domains carry out their most influential research, during which decade of life do artists create their most enduring works—chances are that Simonton has already carried out relevant research; and, if not, he will readily suggest how such studies might be conducted and their results interpreted.

I’ve often wondered why Simonton’s work is not more widely known and appreciated. I suspect it’s because the work spans standard social science (psychology, sociology) and humanistic studies (history, the arts)—and most scholars work comfortably within, rather than across, these two cultures. It’s also notable that Simonton has worked largely alone—with neither a big staff nor a large research budget—and that is an unusual research profile in our time.

Enter big data. Neither Quetelet nor Galton had significant computational aids—pencils and paper on one’s desk were the media of choice. When Simonton began his studies in the 1970s, we were in the era of large mainframes, punch cards, and limited computing power. To be sure, Simonton (and other self-styled cliometricians like Charles Murray) has kept up with advances in technology. But only in the last decade or so has it become possible, indeed easy, to pursue historiometric puzzles, drawing on vast amounts of data that are sitting on one’s lap, or, more precisely, on one’s laptop.

Occasionally, historiometric findings have found their way into the mainstream media. A few months ago, a team of researchers led by Roberta Sinatra introduced the Q phenomenon. Curious about the distribution of influential work over the investigative lifespan of productive scientists, the researchers examined publication records of scientists drawn from seven disciplines. And they discovered—presumably to their and others’ surprise—that “the highest-impact work in a scientist’s career is randomly distributed within her body of work.” Many commentators, prominent among them Simonton, have reacted to this claim and it’s safe to say that we have not heard the last of the Q phenomenon.

Has the moment for historiometry finally arrived? Given the fascination of historiometric questions, and the relative ease these days of researching them, what was once an exotic exercise of eccentric European savants can now become a regular part of the disciplinary terrain.

I am not quite persuaded that the moment has arrived. To be sure, the availability of vast sources of data and powerful data-mining techniques has greatly enhanced the "metric" part of historiometrics. Yet the "historio" part is equally important. Historians should be judged by the quality of the questions that they raise, and the sense that they are able to make of what they have uncovered. These are issues of judgment, not merely issues of measurement. (As has long been quipped, "garbage in, garbage out.") And so, to return to the Q phenomenon, the unexpected findings of the Sinatra team open up a slew of possible explanations and interpretations. But the available data themselves will never tell us which issues to pursue next—one needs a solid historical sense as well as a dollop of historical humility. And whether the Sinatra team—or some other team or individual—will itself significantly raise the stock of historiometry depends on its historical wisdom as well as its data-analytic prowess.

There may be broader lessons here. Scholarship—whether scientific or humanistic—always entails a dialectic between issues worth pursuing and the methods available for pursuing them.

Historiometric curiosity dates back to Classical times; but it has been the advent of measurement techniques (statistics, data analytics) that has permitted this curiosity to be pursued with increasing power and elegance. It’s desirable to maintain a balance between questions/curiosity/judgment, on the one hand, and analytic measures, on the other: when either becomes dominant, the pursuit itself can be compromised.

And for extra credit: when you want to explain to others what you are up to, find a succinct and memorable descriptor!

Bart Kosko
Information Scientist and Professor of Electrical Engineering and Law, University of Southern California; Author, Noise, Fuzzy Thinking

Negative evidence is a concept that deserves greater currency in the intellectual trades and popular culture. Negative evidence helps prove that something did not occur. University registrars routinely use negative evidence when they run a transcript check to prove that someone never got a degree at their university.

Negative evidence is the epistemic dual to positive evidence. It is simply evidence that tends to prove a negative. So it collides headfirst with the popular claim that you cannot prove a negative. A more sophisticated version of the same claim is that absence of evidence is not evidence of absence.

Both claims are false in general.

It may well be hard to prove a negative to a high degree. That does not diminish the probative value of doing so. It took the invasion and occupation of Iraq to prove that the country did not have weapons of mass destruction. The weapons may still turn up someday. But the probability of finding any has long since passed from unlikely to highly unlikely. The search has simply been too thorough.

Absence of evidence can likewise give some evidence of absence. A chest CAT scan can give good image-based negative evidence of the absence of lung cancer. The scan does not prove absence to a logical certainty. No factual test can do that. But the scan may well prove absence to a medical certainty. That depends on the accuracy of the scan and how well it searches the total volume of lung tissue.

The CAT-scan example shows the asymmetry between positive and negative evidence. The accurate scan of a single speck of malignant lung tissue is positive evidence of lung cancer. It may even be conclusive evidence. But it takes a much larger scan of tissue to count as good negative evidence for the absence of cancer. The same holds for finding just one weapon of mass destruction compared with searching for years and finding none.

Simple probability models tend to ignore this asymmetry. They have the same form whether the evidence supports the hypothesis or its negation.

The most common example is Bayes theorem. It computes the conditional probability of a converse in terms of a hypothetical condition and observed evidence. The probability that you have a blood clot given a high score on a D-dimer blood test differs from the probability that you would observe such a test score if you in fact had a blood clot. Studies have shown that physicians sometimes confuse these converses. Social psychologists have even called this the fallacy of the inverse or converse. Bayes theorem uses the second conditional probability (of observing the evidence given the hypothetical condition) to help compute the first probability (of having the hypothetical condition given the observed evidence). The simple ratio form of the Bayes calculation ensures that a symmetric ratio gives the probability that the hypothesis is false given the same observed evidence. The numerical values may differ but the ratio form does not.
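
A toy calculation makes both points at once: the two converses differ, and a negative result from an accurate test really is evidence of absence. The sensitivity, specificity, and base rate below are illustrative assumptions, not clinical values.

```python
# Toy Bayes calculation (made-up numbers, not clinical values) for a D-dimer-style test.
p_clot = 0.05        # prior probability of a blood clot
sensitivity = 0.96   # P(positive test | clot)
specificity = 0.60   # P(negative test | no clot)

p_pos = sensitivity * p_clot + (1 - specificity) * (1 - p_clot)
p_neg = (1 - sensitivity) * p_clot + specificity * (1 - p_clot)

p_clot_given_pos = sensitivity * p_clot / p_pos          # the converse of the sensitivity
p_noclot_given_neg = specificity * (1 - p_clot) / p_neg  # a negative result as negative evidence

print(f"P(positive | clot)    = {sensitivity:.2f}")
print(f"P(clot | positive)    = {p_clot_given_pos:.2f}")    # far lower: the converses differ
print(f"P(no clot | negative) = {p_noclot_given_neg:.3f}")  # absence of evidence as evidence of absence
```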

The law is more discerning with negative evidence.

Courts often allow negative evidence if the proponent lays a proper foundation for it. The opponent must also not have shown that the lack of evidence involved something improper or untrustworthy. A convenience store’s videotapes of a robbery can give compelling negative evidence that a suspect did not take an active part in the robbery. But the video recordings would have to cover a sufficient portion of the store and its parking lot. The cameras must also have run continuously throughout the robbery.

Federal Rule of Evidence 803 uses a telling phrase to describe when a proponent can use a public record as negative evidence that a prior conviction or other event did not occur. The rule demands a “diligent search” of the public records.

Diligent search is the key to negative evidence.

It is easy to conduct a diligent search to prove the negative that there is not a five-carat diamond in your pocket. It takes far more effort to conduct a diligent search for the diamond in a room or building or in an entire city.

The strength of negative evidence tends to fall off quickly with the size of the search area. That is why we cannot yet technically rule out the existence in the deep seas of kraken-like creatures that can drag ships down to their doom. The ocean contains more than three hundred million cubic miles of water. High-resolution sonar has mapped only a fraction of it. That mapping itself has only been a snapshot and not an ongoing movie. It is far from diligent search of the whole volume.

Search size also justifies patience in the search for extra-terrestrial intelligence. Satellites have only recently mapped the surface of Mars. Radio telescopes have searched only a minuscule fraction of the expanding universe for some form of structured energy in interstellar radio signals.

Good negative evidence that we are alone could take thousands or millions of years of diligent search. Positive evidence to the contrary can come at any second.

martin_rees's picture
Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Fellow, Trinity College; Author, From Here to Infinity

An astonishing concept has entered mainstream cosmological thought: it deserves to be more widely known. Physical reality could be hugely more extensive than the patch of space and time traditionally called "the universe." Our cosmic environment could be richly textured, but on scales so vast that our astronomical purview is restricted to a tiny fraction: we’re not aware of the "grand design," any more than a plankton whose "universe" was a spoonful of water would be aware of the world’s topography and biosphere. We may inhabit a "multiverse."

However powerful our telescopes are, our vision is bounded by a horizon: a shell around us, delineating the distance light can have travelled since the big bang. But this shell has no more physical significance than the circle that delineates your horizon if you're in the middle of the ocean. There are billions of galaxies within our horizon. But we expect far more galaxies lying beyond. We can’t tell just how many. If space stretched far enough, then all combinatorial possibilities would be repeated. Far beyond the horizon, we could all have avatars. And it may be some consolation that when we make a bad decision, there’s another one of us, far beyond our horizon, who has made a better one.

So the aftermath of "our" big bang could encompass a stupendous volume. But that’s not all. "Our" big bang could be just one island of space-time in an unbounded cosmic archipelago. A challenge for 21st-century physics is to answer two questions. First, are there many "big bangs" rather than just one? Second—and this is even more interesting—if there are many, are they all governed by the same physics? Or is there a huge number of different vacuum states—each the arena for different microphysics, and therefore offering differing propensities for spawning life?

If the answer to this latter question is "yes" there will still be overarching laws governing the multiverse—maybe a version of string theory. But what we’ve traditionally called the laws of nature will be just local bylaws. Even though it makes some physicists foam at the mouth, we then can’t avoid the A-word—“anthropic." Many domains could be still-born or sterile: the laws prevailing in them might not allow any kind of complexity. We therefore wouldn’t expect to find ourselves in a typical universe. Ours would belong to the unusual subset where there was a "lucky draw" of cosmic numbers conducive to the emergence of complexity and consciousness. Its seemingly designed or fine-tuned features wouldn't be surprising.

Some claim that unobservable entities aren’t part of science. But it’s hard to defend that hard-line view. For instance, (unless we are in some special central position and our universe has an "edge" just beyond the present horizon) there will be some galaxies lying beyond our horizon—and if the cosmic acceleration continues they will remain beyond forever. Not even the most conservative astronomer would deny that these never-observable galaxies (which, as I’ve already mentioned, could hugely outnumber those we can see) are part of physical reality. These galaxies are part of the aftermath of our own big bang. But why should they be accorded higher epistemological status than unobservable objects that are the aftermath of other big bangs? So it’s surely a genuine scientific question to ask whether there’s one big bang or many.

Fifty years ago we weren’t sure whether there was a big bang at all (Fred Hoyle and other "steady statesmen" still contested the idea). Now we can confidently describe cosmic history back to the ultra-dense first nanosecond. So in fifty more years, it’s not overoptimistic to hope that we may have a "unified" physical theory, corroborated by experiment and observation in the everyday world, that tells us what happened in the first trillionth of a trillionth of a trillionth of a second, when inflation is postulated to have occurred. If that theory predicts multiple big bangs we should take that prediction seriously even though it can’t be directly verified (just as we take seriously general relativity’s predictions for the unobservable insides of black holes, because the theory has survived many tests in domains we can observe).

Some physicists don’t like the multiverse: they’d be disappointed if some of the key numbers they are trying to explain turn out to be mere environmental contingencies governing our local space-time patch—no more truly "fundamental" than the parameters of the Earth’s orbit round the Sun. But that disappointment would surely be outweighed by the revelation that physical reality was grander and richer than hitherto envisioned. In any case our preferences are irrelevant to the way physical reality actually is—so we should surely be open-minded.

Indeed, there’s an intellectual and aesthetic upside. If we’re in a multiverse, it would imply a fourth and grandest Copernican revolution; we’ve had the Copernican revolution itself, then the realization that there are billions of planetary systems in our galaxy; then that there are billions of galaxies in our observable universe. But now that’s not all. The entire panorama that astronomers can observe could be a tiny part of the aftermath of "our" big bang, which is itself just one bang among a perhaps-infinite ensemble.

We may, by the end of this century, be able to say, with confidence, whether we live in a multiverse, and how much variety its constituent "universes" display. The answer to this question will determine how we should interpret the "bio friendly" universe in which we live.

daniel_lieberman's picture
Professor of Human Evolutionary Biology, Harvard University; Author, Exercised

Assuming that you fear getting sick and dying, you really ought to think more about mismatch conditions. Beyond their role in disease, they are also a fundamental evolutionary process.

Mismatch conditions are problems, including illnesses, that are caused by organisms being imperfectly or inadequately adapted to novel environmental conditions. As extreme examples, a chimpanzee adapted to the rainforests of Africa would be hopelessly mismatched in Siberia or the Sahara, and a hyena would be mismatched to a diet of grass or shrubs. Such radical mismatches almost always cause death and sometimes cause extinction. 

Mismatches, however, are typically more subtle and most commonly occur when climate change, dispersal or migration alters a species’ environment, including its diet, predators, and more. Natural selection occurs when heritable variations to these sorts of mismatches affect offspring survival and reproduction. For instance, when tropically adapted humans who evolved in Africa dispersed to temperate habitats such as Europe about 40,000 years ago, selection acted rapidly in these populations to favor shifts in body shape, skin pigmentation and immune systems that lessened any resulting mismatches.

Although mismatches have been going on since life first began, the rate and intensity of mismatches that humans now face have been magnified thanks to cultural evolution, arguably now a more rapid and powerful force than natural selection. Just think how radically our bodies’ environments have been transformed because of the agricultural, industrial and post-industrial revolutions in terms of diet, physical activity, medicine, sanitation, even shoes. While most of these shifts have been beneficial in terms of survival and reproduction, everything comes with costs, including several waves of mismatch diseases.

The first great wave of mismatches was triggered by the origins of farming. As people transitioned from hunting and gathering to farming, they settled down in large, permanent communities with high population densities, not to mention lots of sewage, farm animals and various other sources of filth and contagion. Farmers also became dependent on a few cereal crops that yield more calories but less nutrition than what hunter-gatherers can obtain. The resulting mismatches included all sorts of nasty infectious diseases, more malnutrition, and a greater chance of famine.

A second great wave of mismatch, still ongoing, occurred from the industrial and then post-industrial revolutions. The standard description of this shift, generally known as the epidemiological transition, is that advances in medicine, sanitation, transportation, and government vastly decreased the incidence of the communicable diseases and starvation, thus increasing longevity and resulting in a concomitant increase in chronic non-infectious diseases. According to this logic, as people became less likely to die young from pneumonia, tuberculosis or the plague, they became more likely to die in old age from heart disease and cancer—now the cause of two out of three deaths in the developed world. The epidemiological transition is also thought to be responsible for other diseases of aging such as osteoporosis, osteoarthritis and type 2 diabetes.

The problem with this explanation is that aging is not a cause of mismatch, and we too often confuse diseases that occur more commonly with age with diseases that are actually caused by aging. To be sure, some diseases like cancers are caused by mutations that accrue over time, but the most common age of death among hunter-gatherers who survive childhood is between sixty-eight and seventy-eight, and studies of aging among hunter-gatherers and subsistence farmers routinely find little to no evidence of so-called diseases of aging such as hypertension, coronary heart disease, osteoporosis, diabetes, and more. Instead, these diseases are mostly caused by recent environmental changes such as physical inactivity, highly processed diets and smoking. In other words, they are primarily novel mismatch diseases caused by industrial and post-industrial conditions.

In short, there are three reasons you should pay attention to the concept of mismatch. First, mismatches are a powerful evolutionary force that always has driven and always will drive much selection. Second, you are most likely to get sick and then die from a mismatch condition. And, most importantly, mismatches are by nature partly or largely preventable if you can alter the environments that promote them.

scott_aaronson's picture
David J. Bruton Centennial Professor of Computer Science, University of Texas at Austin; Author, Quantum Computing Since Democritus

In physics, math, and computer science, the state of a system is an encapsulation of all the information you'd ever need to predict what it will do, or at least its probabilities to do one thing versus another, in response to any possible prodding of it. In a sense, then, the state is a system’s “hidden reality,” which determines its behavior beneath surface appearances. But in another sense, there’s nothing hidden about a state—for any part of the state that never mattered for observations could be sliced off with Occam’s Razor, to yield a simpler and better description.

When put that way, the notion of “state” seems obvious. So then why did Einstein, Turing, and others struggle for years with the notion, along the way to some of humankind’s hardest-won intellectual triumphs?

Consider a few puzzles:

  1. To add two numbers, a computer clearly needs an adding unit, with instructions for how to add. But then it also needs instructions for how to interpret the instructions. And it needs instructions for interpreting those instructions, and so on ad infinitum. We conclude that adding numbers is impossible by any finite machine.
  2. According to modern ideas about quantum gravity, space might not be fundamental, but rather emergent from networks of qubits describing degrees of freedom at the Planck scale. I was once asked: if the universe is a network of qubits, then where are these qubits? Isn’t it meaningless, for example, for two qubits to be “neighbors,” if there’s no pre-existing space for the qubits to be neighboring in?
  3. According to special relativity, nothing can travel faster than light. But suppose I flip a coin, write the outcome in two identical envelopes, then put one envelope on earth and the other on Pluto. I open the envelope on earth. Instantaneously, I’ve changed the state of the envelope on Pluto, from “equally likely to say heads or tails” to “definitely heads” or “definitely tails”! (This is normally phrased in terms of quantum entanglement, but as we see, there’s even a puzzle classically.)

The puzzle about the computer is a stand-in for countless debates I’ve gotten into with non-scientist intellectuals. The resolution, I think, is to specify a state for the computer, involving the numbers to be added (encoded, say, in binary), and a finite control unit that moves across the digits adding and carrying, governed by Boolean logic operations, and ultimately by the laws of physics.  It might be asked: what underlies the laws of physics themselves? And whatever the answer, what underlies that? But those are questions for us. In the meantime, the computer works; everything it needs is contained in its state.
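As a minimal illustration of that resolution (a sketch of mine, not Aaronson's own example), here is a finite adder whose entire state is the two binary strings, a position, and a carry bit; its fixed control rules need no further tower of instructions.

    # A finite control unit that sweeps across the digits, adding and carrying.
    # The machine's whole "state" is the two strings, the position, and the carry.

    def add_binary(a: str, b: str) -> str:
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry = 0
        digits = []
        # Sweep from the least significant digit to the most significant.
        for i in range(width - 1, -1, -1):
            total = int(a[i]) + int(b[i]) + carry
            digits.append(str(total % 2))   # write the sum digit
            carry = total // 2              # carry into the next column
        if carry:
            digits.append("1")
        return "".join(reversed(digits))

    print(add_binary("1011", "0110"))  # prints 10001, i.e. 11 + 6 = 17

The rules here are finite and need no interpreter of interpreters; the regress in the puzzle bottoms out in whatever physically implements them.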

This question about the qubits is a cousin of many others: for example, if the universe is expanding, then what’s it expanding into? These aren’t necessarily bad questions. But from a scientific standpoint, one is perfectly justified to respond: “you’re proposing we tack something new onto the state of the world, such as a second space for ‘our’ space to live in or expand into. So would this second space make a difference to observation? If it never would, why not cut it out?”

The question about the envelopes can be resolved by noticing that your decision on earth, to open your envelope or not, doesn’t affect the probability distribution over envelope contents that would be perceived by an observer on Pluto. One can prove a theorem stating that an analogous fact holds even in the quantum case, and even if there’s quantum entanglement between earth and Pluto: nothing you choose to do here changes the local quantum state (the so-called density matrix) over there. This is why, contrary to Einstein’s worries, quantum mechanics is consistent with special relativity.
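The quantum version of that claim can be checked numerically. The following is a small sketch of mine using numpy (not code from the essay): it builds an entangled pair shared between Earth and Pluto and confirms that Earth's decision to measure leaves Pluto's local density matrix untouched.

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # An entangled Bell state (|00> + |11>) / sqrt(2); first qubit on Earth,
    # second qubit on Pluto.
    bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
    rho = np.outer(bell, bell.conj())

    def pluto_state(rho4):
        """Partial trace over Earth's qubit, leaving Pluto's 2x2 density matrix."""
        r = rho4.reshape(2, 2, 2, 2)
        return np.einsum('ijik->jk', r)

    # Case 1: Earth does nothing.
    before = pluto_state(rho)

    # Case 2: Earth measures her qubit in the {|0>, |1>} basis but sends no
    # message; the joint state becomes a mixture over her outcomes.
    P0 = np.kron(np.outer(ket0, ket0.conj()), np.eye(2))
    P1 = np.kron(np.outer(ket1, ket1.conj()), np.eye(2))
    after = pluto_state(P0 @ rho @ P0 + P1 @ rho @ P1)

    print(np.allclose(before, after))  # True: both are the maximally mixed state
    print(before.real)                 # [[0.5, 0.], [0., 0.5]]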

The central insight here—of equal importance to relativity, quantum mechanics, gauge theory, cryptography, artificial intelligence, and probably 500 other fields—could be summarized as “a difference that makes no difference is not a difference at all.” This slogan might remind some readers of the early 20th-century doctrine of logical positivism, or of Popper’s insistence that a theory that never ventures any falsifiable prediction is unscientific. For our purposes, though, there’s no need to venture into the complicated debates about what exactly the positivists or Popper got right or wrong (or whether positivism is itself positivistic, or falsifiability falsifiable).

It suffices to concentrate on a simpler lesson: that yes, there’s a real world, external to our sense impressions, but we don’t get to dictate from our armchairs what its state consists of. Our job is to craft an ontology around our best scientific theories, rather than the opposite. That is, our conception of “what’s actually out there” always has to be open to revision, both to include new things that we’ve discovered can be distinguished by observation and to exclude things that we’ve realized can’t be.

Some people seem to find it impoverishing to restrict their ontology to the state, to that which suffices to explain observations. But consider the alternatives. Charlatans, racists, and bigots of every persuasion are constantly urging us to look beyond a system’s state to its hidden essences, to make distinctions where none are called for.

Lack of clarity about the notion of “state” is even behind many confusions over free will.  Many people stress the fact that, according to physics, your future choices are “determined” by the current state of the universe.  But this ignores the fact that, regardless of what physics had to say on the subject, the universe’s current state could always be defined in such a way that it secretly determined future choices—and indeed, that’s exactly what so-called hidden-variable interpretations of quantum mechanics, such as Bohmian mechanics, do.  To me, this makes “determination” an almost vacuous concept in these discussions, and actual predictability much more important.

State is my choice for a scientific concept that should be more widely known, because buried inside it, I think, is the whole scientific worldview.

max_tegmark's picture
Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute; President, Future of Life Institute; Author, Life 3.0

What do waves, computations and conscious experiences have in common, that provides crucial clues about the future of intelligence? They all share an intriguing ability to take on a life of their own that’s rather independent of their physical substrate. 

Waves have properties such as speed, wavelength and frequency, and we physicists can study the equations they obey without even needing to know what substance they are waves in. When you hear something, you're detecting sound waves caused by molecules bouncing around in the mixture of gases we call air, and we can calculate all sorts of interesting things about these waves—how their intensity fades as the square of the distance, how they bend when they pass through open doors, how they reflect off of walls and cause echoes, etc.—without knowing what air is made of.

We can ignore all details about oxygen, nitrogen, carbon dioxide, etc., because the only property of the wave's substrate that matters and enters into the famous wave equation is a single number that we can measure: the wave speed, which in this case is about 340 meters per second. Indeed, this wave equation that MIT students are now studying was first discovered and put to great use long before physicists had even established that atoms and molecules existed! 
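For reference, the equation in question is compact; in one dimension, for a disturbance u(x, t), it reads

    \frac{\partial^2 u}{\partial t^2} = v^2 \frac{\partial^2 u}{\partial x^2}

and the medium enters only through that single measured number v, the wave speed.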

Alan Turing famously proved that computations are substrate-independent as well: There’s a vast variety of different computer architectures that are “universal” in the sense that they can all perform the exact same computations. So if you're a conscious superintelligent character in a future computer game, you'd have no way of knowing whether you ran on a desktop, a tablet or a phone, because you would be substrate-independent.

Nor could you tell whether the logic gates of the computer were made of transistors, optical circuits or other hardware, or even what the fundamental laws of physics were. Because of this substrate-independence, shrewd engineers have been able to repeatedly replace the technologies inside our computers with dramatically better ones without changing the software, making computation twice as cheap roughly every couple of years for over a century, cutting the computer cost a whopping million million million times since my grandmothers were born. It’s precisely this substrate-independence of computation that implies that artificial intelligence is possible: Intelligence doesn't require flesh, blood or carbon atoms. 

This example illustrates three important points.

First, substrate-independence doesn't mean that a substrate is unnecessary, but that most details of it don't matter. You obviously can't have sound waves in a gas if there's no gas, but any gas whatsoever will suffice. Similarly, you obviously can't have computation without matter, but any matter will do as long as it can be arranged into logic gates, connected neurons or some other building block enabling universal computation.

Second, the substrate-independent phenomenon takes on a life of its own, independent of its substrate. A wave can travel across a lake, even though none of its water molecules do—they mostly bob up and down.

Third, it's often only the substrate-independent aspect that we're interested in: A surfer usually cares more about the position and height of a wave than about its detailed molecular composition, and if two programmers are jointly hunting a bug in their code, they're probably not discussing transistors.
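To make the first of these points concrete, here is a small sketch of mine (not Tegmark's): the same one-bit adder, wired entirely out of an abstract NAND gate, runs identically whichever "substrate" supplies that gate.

    # A full adder built only from a supplied nand() gate. Swap in any
    # implementation of NAND and the computation is unchanged.

    def adder_from_nand(nand, a, b, carry_in):
        def xor(x, y):
            t = nand(x, y)
            return nand(nand(x, t), nand(y, t))
        def and_(x, y):
            return nand(nand(x, y), nand(x, y))
        def or_(x, y):
            return nand(nand(x, x), nand(y, y))
        s = xor(a, b)
        total = xor(s, carry_in)
        carry_out = or_(and_(a, b), and_(s, carry_in))
        return total, carry_out

    # Two different "substrates" for the same abstract gate:
    nand_boolean = lambda x, y: 0 if (x and y) else 1   # logical implementation
    nand_arith = lambda x, y: 1 - x * y                 # arithmetic implementation

    for nand in (nand_boolean, nand_arith):
        print(adder_from_nand(nand, 1, 1, 1))  # (1, 1) from both substrates

Only the pattern of gates matters; what the gates are made of does not.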

Since childhood, I’ve wondered how tangible physical stuff such as flesh and blood can give rise to something that feels as intangible, abstract and ethereal as intelligence and consciousness. We’ve now arrived at the answer: these phenomena feel so non-physical because they're substrate-independent, taking on a life of their own that doesn't depend on or reflect the physical details. We still don’t understand intelligence to the point of building machines that can match all human abilities, but AI researchers are striking ever more abilities from their can’t-do list, from image classification to Go-playing, speech recognition, translation and driving.

But what about consciousness, by which I mean simply "subjective experience"? When you’re driving a car, you’re having a conscious experience of colors, sounds, emotions, etc. But why are you experiencing anything at all? Does it feel like anything to be a self-driving car? This is what David Chalmers calls the "hard problem," and it’s distinct from merely asking how intelligence works. 

I've been arguing for decades that consciousness is the way information feels when being processed in certain complex ways. This leads to a radical idea that I really like: If consciousness is the way that information feels when it’s processed in certain ways, then it must be substrate-independent; it's only the structure of the information processing that matters, not the structure of the matter doing the information processing. In other words, consciousness is substrate-independent twice over!

We know that when particles move around in spacetime in patterns obeying certain principles, they give rise to substrate-independent phenomena—e.g. waves and computations. We've now taken this idea to another level: If the information processing itself obeys certain principles, it can give rise to the higher level substrate-independent phenomenon that we call consciousness. This places your conscious experience not one but two levels up from the matter. No wonder your mind feels non-physical! We don’t yet know what principles information processing needs to obey to be conscious, but concrete proposals have been made that neuroscientists are trying to test experimentally.

However, one lesson from substrate-independence is already clear: we should reject carbon-chauvinism and the common view that our intelligent machines will always be our unconscious slaves. Computation, intelligence and consciousness are patterns in the spacetime arrangement of particles that take on a life of their own, and it's not the particles but the patterns that really matter! Matter doesn't matter.

peter_lee's picture
Corporate Vice President, Microsoft Research

You can never understand one language until you understand at least two.

This statement by the English writer Geoffrey Willans feels intuitive to anyone who has studied a second language. The idea is that learning to speak a foreign language inescapably conveys deeper understanding of one’s native language. Goethe, in fact, found this such a powerful concept that he felt moved to make a similar, but more extreme, assertion:

He who does not know foreign languages does not know anything about his own.

As compelling as this may be, what is perhaps surprising is that the essence of this idea—that learning or improvement in one skill or mental function can positively influence another one—is present not only in human intelligence, but also in machine intelligence. The effect is called transfer learning, and besides being an area of fundamental research in machine learning, it has potentially wide-ranging practical applications.

Today, the field of machine learning, which is the scientific study of algorithms whose capabilities improve with experience, has been making startling advances. Some of these advances have led to computing systems that are competent in skills that are associated with human intelligence, sometimes to levels that not only approach human capabilities but, in some cases, exceed them. This includes, for example, the ability to understand, process, and even translate languages. In recent years, much of the research in machine learning has focused on the algorithmic concept of deep neural networks, or DNNs, which learn essentially by inferring patterns—often patterns of remarkable complexity—from large amounts of data. For example, a DNN-based machine can be fed many thousands of snippets of recorded English utterances, each one paired with its text transcription, and from this discern the patterns of correlation between the speech recordings and the paired transcriptions. These inferred correlation patterns get precise enough that, eventually, the system can “understand” English speech. In fact, today’s DNNs are so good that, when given enough training examples and a powerful enough computer, they can listen to a person speaking and make fewer transcription errors than would any human.

What may be surprising to some is that computerized learning machines exhibit transfer learning. For example, let’s consider an experiment involving two machine-learning systems, which for the sake of simplicity we’ll refer to as machines A and B. Machine A uses a brand-new DNN, whereas machine B uses a DNN that has been trained previously to understand English. Now, suppose we train both A and B on identical sets of recorded Mandarin utterances, along with their transcriptions. What happens? Remarkably, machine B (the previously English-trained one) ends up with better Mandarin capabilities than machine A. In effect, the system’s prior training on English ends up transferring capabilities to the related task of understanding Mandarin.

But there is an even more astonishing outcome of this experiment. Machine B not only ends up better on Mandarin, but B’s ability to understand English is also improved! It seems that Willans and Goethe were onto something—learning a second language enables deeper learning about both languages, even for a machine.
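For readers who want to see the shape of such an experiment, here is a toy sketch in Python using PyTorch. Everything in it is invented for illustration: the data is synthetic, the network is tiny, and the numbers it prints say nothing about real speech systems; it only mirrors the recipe of pre-training machine B before both machines see the "Mandarin" data.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_task(n=512, dim=20):
        """Fake 'utterance' features and labels standing in for one language."""
        x = torch.randn(n, dim)
        y = (x[:, :5].sum(dim=1) > 0).long()
        return x, y

    def new_encoder(dim=20, hidden=32):
        return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())

    def train(encoder, head, x, y, epochs):
        opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(head(encoder(x)), y)
            loss.backward()
            opt.step()
        return loss.item()

    english_x, english_y = make_task()
    mandarin_x, mandarin_y = make_task()

    # Machine A: a brand-new network trained only on the "Mandarin" data.
    enc_a = new_encoder()
    loss_a = train(enc_a, nn.Linear(32, 2), mandarin_x, mandarin_y, epochs=30)

    # Machine B: pre-trained on "English", then given a fresh output head and
    # trained on the same "Mandarin" data -- the transfer-learning recipe.
    enc_b = new_encoder()
    train(enc_b, nn.Linear(32, 2), english_x, english_y, epochs=200)
    loss_b = train(enc_b, nn.Linear(32, 2), mandarin_x, mandarin_y, epochs=30)

    # On this toy setup the pre-trained encoder typically gives B a head start.
    print(f"Machine A final loss: {loss_a:.3f}")
    print(f"Machine B final loss: {loss_b:.3f}")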

The idea of transfer learning is still the subject of basic research, and as such, many fundamental questions remain open. For example, not all “transfers” are useful because, at a minimum, for transfer to work well, there appears to be a need for the learned tasks to be “related” in ways that still elude precise definition or scientific analysis. There are connections to related concepts in other fields, such as cognitive science and learning theory, still to be elucidated. And while it is intellectually dangerous for any computer scientist to engage in “anthropomorphizing” computer systems, we cannot avoid acknowledging that transfer learning creates a powerful, alluring analogy between learning in humans and machines; surely, if general artificial intelligence is ever to become real, transfer learning would seem likely to be one of the fundamental factors in its creation. For the more philosophically minded, formal models of transfer learning may contribute to new insights and taxonomies for knowledge and knowledge transfer.

There is also exceptionally high potential for applications of transfer learning. So much of the practical value of machine learning, for example in search and information retrieval, has traditionally focused on systems that learn from the massive datasets and people available on the World-Wide Web. But what can web-trained systems learn about smaller communities, organizations, or even individuals? Can we foresee a future where intelligent machines are able to learn useful tasks that are highly specialized to a specific individual or small organization? Transfer learning opens the possibility that all the intelligence of the web can form the foundation of machine-learned systems, from which more individualized intelligence is learned, through transfer learning. Achieving this would amount to another step towards the democratization of machine intelligence.

andy_clark's picture
Professor of Cognitive Philosophy, Department of Philosophy and Department of Informatics, University of Sussex, Brighton, UK; Author, Surfing Uncertainty: Prediction, Action, and the Embodied Mind

Prediction error minimization sounds dry and technical. But it may well be the key thing that brains do that enables us to experience a world populated by things and events that matter to us. If so, it is a major part of the solution to the mind-body problem itself. It is also a concept that can change how we feel about our own daily experience. Brains like ours, if recent waves of scientific work using this concept are on track, are fundamentally trying to minimize errors concerning their own predictions of the incoming sensory stream.

Consider something as commonplace as it is potentially extremely puzzling—the capacity of humans and many other animals to find specific absences salient. A repeated series of notes, followed by an omitted note, results in a distinctive experience—it is an experience that presents a world in which that very note is strikingly absent. How can a very specific absence make such a strong impression on the mind?

The best explanation is that the incoming sensory stream is processed relative to a set of predictions about what should be happening at our sensory peripheries right now. These, mostly unconscious, expectations prepare us to deal rapidly and efficiently with the stream of signals coming from the world. If the sensory signal is as expected, we can launch responses that we have already started to prepare. If it is not as expected, then a distinctive signal results: a so-called "prediction-error" signal. These signals, calculated in every area and at every level of neuronal processing, highlight what we got wrong, and invite the brain to try again. Brains like this are forever trying to guess the shape and evolution of the current sensory signal, using what we know about the world.

Human experience here involves a delicate combination of what the brain expects and what the current waves of sensory evidence suggest. Thus, in the case of the unexpectedly omitted sound, there is evidence that the brain briefly starts to respond as if the missing sound were present, before the absence of the expected sensory "evidence" generates a large prediction error. We thus faintly hallucinate the onset of the expected sound, before suddenly becoming strikingly aware of its absence. And what we thus become aware of is not just any old absence but the absence of that specific sound. This explains why our experiential world often seems full of real absences.
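A minimal sketch of that loop (mine, not the author's): carry a prediction of the next sensory sample, compare it with what arrives, report the mismatch, and update the guess. Feed it a repeated note followed by silence, and the omission alone produces a large error signal.

    # Predict, compare, report the error, update: the bare bones of the idea.
    def run(signal, expectation=1.0, learning_rate=0.5):
        prediction = expectation                    # the listener already expects the note
        for observed in signal:
            error = observed - prediction           # the "prediction error" signal
            print(f"predicted {prediction:5.2f}  heard {observed:5.2f}  error {error:+.2f}")
            prediction += learning_rate * error     # try again: reduce future error

    # Four notes as expected, then silence where a fifth note should have been.
    run([1.0, 1.0, 1.0, 1.0, 0.0])

The picture in the essay stacks many such loops, one at every level of processing; this sketch shows only the basic move.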

When things go wrong, attention to prediction and prediction error can be illuminating too. Schizophrenic individuals have been shown to rely less heavily on their own sensory predictions than neuro-typical folk. Schizophrenic subjects outperformed neuro-typical ones in tests that involved tracking a dot that unexpectedly changed direction, but were worse at dealing with predictable effects. Autistic individuals are also impaired in the use of "top-down" predictions, so that the sensory flow seems continually surprising and hard to manage. Placebo and nocebo effects are also grist for the mill. For predictions can look inwards too, allowing how we expect to feel to make a strong and perfectly real contribution to how we actually do feel.

By seeing experience as a construct that merges prediction and sensory evidence, we begin to see how minds like ours reveal a world of human-relevant stuff. For the patterns of sensory stimulation that we most strongly predict are the patterns that matter most to us, both as a species and as individuals. The world revealed to the predictive brain is a world permeated with human mattering.

stephen_m_kosslyn's picture
Founding Dean, Minerva Schools at the Keck Graduate Institute

When he was sixteen years old, Albert Einstein imagined that he was chasing after a beam of light and observed what he “saw.” And this vision launched him on the path to developing his theory of special relativity. Einstein often engaged in such thinking; he reported: "…The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined... this combinatory play seems to be the essential feature in productive thought before there is any connection with logical construction in words or other kinds of signs which can be communicated to others..."

Einstein was relying on mental emulation, a kind of thought that many of us use and we all probably could use more frequently—and more productively—if we become aware of it. Mental emulations range from the sublime to the ordinary, such as imagining currently impossible events (including chasing after a beam of light), or imagining the best route to take when climbing up a hill, or visualizing the best way to pack your refrigerator before starting to take cans, boxes and jugs out of your shopping bag.

A mental emulation is a way to simulate what you would expect to happen in a specific situation. Just as Einstein reported, most mental emulations appear to involve mental imagery. By definition, mental imagery is an internal representation that mimics perception. You can visualize “running” an event in your head, and “seeing” how it unfolds. Mental emulations are partly a way of allowing you to access knowledge that you have only implicitly, to allow such knowledge to affect your mental images.

My colleague Sam Moulton and I characterize mental emulation as follows: Each step of what you imagine represents the corresponding step in the real world, and the transitions between steps represent the corresponding transitions between steps in the event. Moreover, the processes that transition from state to state in the representation mimic the actual processes that transition from state-to-state in the world. If you imagine kicking a ball, you will actually activate motor circuits in the brain, and neural states will specify the transitional states of how your leg is positioned as you kick. And as you imagine tracking the flying ball, your mental images will transition through the intermediate states in ways analogous to what would happen in the real world.

Mental emulation is a fundamental form of thought, and should be recognized as such.

anthony_aguirre's picture
Professor of Physics, University of California, Santa Cruz; Author, Cosmological Koans

“Information” is a term with many meanings, but in physics or information theory we can quantify information as the specificity of a subset of many possibilities. Consider an 8-digit binary string like 00101010. There are 256 such strings, but if we specify that the first digit is “0” we define a subset of 128 strings with that property; this reduction corresponds (via a base-2 logarithm in the formula) to one bit of information. The string 00101010, one out of 256, has 8 bits of information. This information, which is created by pointing to particular instances out of many possibilities, can be called indexical information. (A closely related concept is statistical information, in which each possibility is assigned a probability.)

This notion of information has interesting implications. You might imagine that combining many strings like 00101010 would necessarily represent more information, but it doesn’t! Throw two such strings in a bag, and your bag now contains less than 8 bits of information: two of the 256 possibilities are there, which is less specific than either one.        
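In this simplest equal-probability setting, the counting above fits in a few lines (a sketch of the definition, not anything from the essay):

    from math import log2

    # Indexical information as specificity: pointing to a subset of size k
    # out of N equally likely possibilities carries log2(N / k) bits.
    N = 2 ** 8                  # all 8-digit binary strings

    print(log2(N / 1))          # 8.0 bits: naming one string, e.g. 00101010
    print(log2(N / 128))        # 1.0 bit:  "the first digit is 0"
    print(log2(N / 2))          # 7.0 bits: the bag holding two of the strings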

This paradoxical situation is the basis for Borges’ story of the Library of Babel, which contains all possible 410-page books. In having all possibilities, it is information-free, and completely useless. Yet suppose one had an index, which pointed to all the books composed of actual words, then among those the books in sensible English, and of those, the books of competent philosophy. Suddenly, there is information—and the index has created it! This fable also makes clear that more information is not necessarily better. Pointing to any single book creates the maximal amount of indexical information, but a good index would create a smaller quantity of more useful information.       

This creation of indexical information by pointing to what is important to us underlies many creative endeavors. One could write a computer program to spit out all possible sequences of musical notes, or all possible mathematical theorems provable from some set of axioms. But it would not be writing music, nor doing mathematics—those endeavors select the tiny subset of possibilities that are interesting and beautiful.        

What are most people most interested in? Themselves! Of all the people ever to live, you are you, and this creates indexical information particular to you. For example, indexical information results from “our” position in space-time. Why are there no dinosaurs about? Why is the ocean water rather than methane? Those exist in other times and places—but we are here, and now. Without here and now, there is of course no fact of the matter about whether dinosaurs exist; but we are accustomed to considering here, and especially now, to be objective facts of the world. Modern physics indicates this is unlikely.        

Indeed, in some modern physical theories, such as the "many worlds" view of quantum mechanics or that of an infinite universe, this indexical information takes on fundamental importance. In both scenarios there are many or infinitely many copies of "you" in existence that are indistinguishable in their experiences but that are embedded in different larger environments. Through time, as you make observations—say of whether it starts to rain—you narrow down the subset of "yous" compatible with what you have seen, and the information you have gained is indexical.

In a sufficiently large Universe we can indeed even ask if there is anything but indexical information. The information about no-dinosaurs, yes-water, and what-weather is missing from the World, which may contain little or no information, but it is present in our World, which contains an immense amount. This information is forged by our point of view, and created by us, as our individual view takes a simple, objective world with little or no content, and draws from it a rich, interesting information structure we call reality.

gary_klein's picture
Senior Scientist, MacroCognition LLC; Author, Seeing What Others Don't: The Remarkable Ways We Gain Insights

You may have worked so closely with a partner that you reached a point where each of you could finish the other’s sentences. You had a pretty good idea of how your partner would respond to an event. What’s behind such a skill?  

Decentering is the activity of taking the perspective of another entity. When we look into the past, we try to explain why that entity behaved in a way that might have surprised us. Peering into the future, we decenter in order to anticipate what that entity is likely to do.

Skilled decentering comes into play, not just with partners, but also with strangers and even with adversaries. It gives us an edge in combat, to get ready to ward off an adversary’s attack. It helps authors write more clearly as they anticipate what might confuse a reader or what a reader expects to find out next. Police officers who are good at decentering can de-escalate a tense situation by moving an encounter into less volatile directions. Teachers rely on decentering to keep the attention of students by posing questions or creating suspense, anticipating what will catch the attention of their students. Team members who can quickly decenter can predict how each will react to unexpected changes in conditions, increasing their coordination.

Decentering is not about empathy—intuiting how others might be feeling. Rather, it is about intuiting what others are thinking. It is about imagining what is going through another person’s mind. It is about getting inside someone else’s head.

Despite its importance, particularly for social interaction, decentering has received very little attention. Military researchers have struggled to find ways to study decentering and to train commanders to take the perspective of an adversary.  Social psychologists have not made much progress in unpacking decentering. Part of the problem might be that researchers examine the average decentering accuracy of observers whereas they should be investigating those observers whose accuracy is consistently above average. What are their secrets? What is behind their expertise?

It could be even more valuable to study decentering outside the laboratory—in natural settings. I suspect that some people don’t even try to decenter, others may try but aren’t particularly effective, and still others may be endowed with an uncanny ability to take another person’s perspective.

Decentering also comes into play when we interact with inanimate objects such as intelligent technologies that are intended to help us make decisions. That’s why the definition at the beginning of this essay refers to entities rather than people. The human factors psychologist Earl Wiener once described the three typical questions people ask when interacting with information technology: What is it doing? Why is it doing that? What will it do next? We are trying to take the perspective of the decision aid.

It would be nice if the decision aid could help us decenter, the way a teammate could. After all, when we interact with a person, it is natural to ask that person to explain his or her choice. But intelligent systems struggle to explain their reasons. Sometimes they just recite all the factors that went into the choice, which isn’t particularly helpful. We want to hear less, not more. We want the minimum necessary information. And the minimum necessary information depends on decentering.

For example, if you are using a GPS system to drive to a location, and the device tells you to turn left whereas you expected to turn right, you would NOT want the system to explain itself by showing the logic it followed. Rather, the system should understand why you expected a right-hand turn at that juncture and should provide the central data element (e.g., an accident up ahead is creating thirty-minute delays on that route) you need to understand the choice.

If you were driving with a human navigator who unexpectedly advised you to turn left, all you would have to do is say, “Left?” Most navigators instantly determine from your intonation that you want an explanation, not a louder repetition. Good navigators grasp that you, the driver, didn’t expect a left-hand turn and would quickly (remember, there is traffic getting ready to honk at you) explain, “Heavy traffic ahead.” Your navigator would convey the minimum necessary information, the gist of the explanation. But the gist depends on your own beliefs and expectations—it depends on the navigator’s ability to decenter and get inside your head. Smart technology will never be really smart until it can decenter and anticipate. That’s when it will become a truly smart partner.

When we are uncomfortable with the recommendations offered by a decision aid we don’t have any easy ways to enter into a dialog. In contrast, when we dislike a person’s suggestions we can examine his or her reasons. Being able to take someone else’s perspective lets people disagree without escalating into conflicts. It allows people to be more decent in their interactions with each other. If they can decenter, then they can become decenter.

ziyad_marar's picture
President of Global Publishing, SAGE; Author, Judged: The Value of Being Misunderstood

We know we are ultra-social animals and yet have a consistent blind spot about how truly social we are. Our naïve realism leans us toward a self-image as individual, atomistic rational agents experiencing life as though peering out on the world through a window. And like the fish that swims unaware of the water in which it is suspended, we struggle to see the social reality in which our actions are meaningfully conducted.

Contrary to this view, psychology has shown repeatedly how deeply permeated each of us is by a social identity. This is an important corrective to our illusory self-image and gives us a better insight into our social natures. But even where our social identity has entered the picture it is often crucially misunderstood.

Social identity has been explored in earnest ever since the Second World War in an attempt to understand how it was possible for ordinary people to commit, or at least to allow, genocidal horrors. Much of this work, such as the Milgram experiments on obedience, has suggested that if you dial up the social you dial down the thinking individual. Henri Tajfel’s minimal group experiments divided boys into two arbitrary groups (each group was affiliated with a painter they had never heard of) and showed how quickly they started to discriminate against boys in the other group and favor their own. All it took was the creation of a meaningless boundary to create an ingroup, an outgroup, and the conditions for favoritism and conflict. But this important insight, explored in many other contexts over the decades, has led to a partial understanding of social identity and to unfortunate misinterpretations. Phrases like the bystander effect, diffusion of responsibility, groupthink, herd- or mob-mentality, and so on have encouraged the thought that as we become parts of groups we lose our minds and become highly malleable toward irrational or regrettable actions.

But this view gets it backwards to some extent. To introduce the social is not to add distortion to otherwise clear thinking. For good and for ill, our social identities are minded, not mindless. Two social psychologists, Steve Reicher and Mark Levine, looked at when British football fans were willing to help an injured fan of the opposing team. They found that if the subjects thought in terms of being fans of Manchester United they would not be inclined to help a Liverpool fan, but if they were put in the category of football fans generally they would stop and help. Contrary to the stereotype of fans as mindless thugs, they are highly minded, depending on which group they see themselves as belonging to.

The important point here is that social identities can change, and as they do the logic of who is seen as "one of us" changes too. My sense of myself as a father, a publisher, a Londoner, a manager or as someone with Arabic heritage and family shapes the decision space around what it is rational for me to think and do quite profoundly. My allegiances, self-esteem, prejudices, willingness to be led or influenced, sense of fairness, sense of solidarity, biases about "people like me," all are to an extent shaped by the collective self that is salient to me at the time. This is not to deny my individuality, it is to recognise how it is irreducibly expressed through a social lens, and that my social identity changes the way it makes sense for me to engage with the world.

This matters because when we see ourselves purely as rational, individual actors we miss the fact that the social is not just providing the context in which we act. It is deeply constitutive of who we are. But if we turn to the collective view and merely see irrational action, whether "mad" rioters, "crazy" extremists, or "evil" people who have different ideological commitments to our own, we are condemned to judging others without any chance of comprehending them. A better understanding of our truly social identities would equip us not only with the tools to understand better those whom we might ordinarily dismiss as irrational, but also with a better understanding of our ultra-social selves.

adam_waytz's picture
Psychologist; Associate Professor of Management and Organizations, Kellogg School of Management at Northwestern University; Author, The Power of Human

If you asked one hundred people on the street whether they understand how a refrigerator works, most would respond that yes, they do. But ask them to then produce a detailed, step-by-step explanation of how exactly a refrigerator works and you would likely hear silence or stammering. This powerful but inaccurate feeling of knowing is what Leonid Rozenblit and Frank Keil in 2002 termed the "illusion of explanatory depth" (IOED), stating, “Most people feel they understand the world with far greater detail, coherence, and depth than they really do.”

Rozenblit and Keil initially demonstrated the IOED through multi-phase studies. In a first phase, they asked participants to rate how well they understood artifacts such as a sewing machine, crossbow, or cell phone. In a second phase, they asked participants to write a detailed explanation of how each artifact works, and afterwards asked them to re-rate how well they understood each one. Study after study showed that ratings of self-knowledge dropped dramatically from phase one to phase two, after participants were faced with their inability to explain how the artifact in question operates. Of course, the IOED extends well beyond artifacts, to how we think about scientific fields, mental illnesses, economic markets and virtually anything we are capable of (mis)understanding.

At present, the IOED is profoundly pervasive given that we have infinite access to information, but consume information in a largely superficial fashion. A 2014 survey found that approximately six in ten Americans read news headlines and nothing more. Major geopolitical issues from civil wars in the Middle East to the latest climate change research advances are distilled into tweets, viral videos, memes, “explainer” websites, soundbites on comedy news shows, and daily e-newsletters that get inadvertently re-routed to the spam folder. We consume knowledge widely, but not deeply.

Understanding the IOED allows us to combat political extremism. In 2013, Philip Fernbach and colleagues demonstrated that the IOED underlies people’s policy positions on issues like single-payer health care, a national flat tax, and a cap-and-trade system for carbon emissions. As in Rozenblit and Keil’s studies, Fernbach and colleagues first asked people to rate how well they understood these issues, and then asked them to explain how each issue works and subsequently re-rate their understanding of each issue. In addition, participants rated the extremity of their attitudes on these issues both before and after offering an explanation. Both self-reported understanding of the issue and attitude extremity dropped significantly after explaining the issue—people who strongly supported or opposed an issue became more moderate. What is more, reduced extremity also reduced willingness to donate money to a group advocating for the issue. These studies suggest the IOED is a powerful tool for cooling off heated political disagreements.

The IOED provides us much-needed humility. In any domain of knowledge, often the most ignorant are the most overconfident in their understanding of that domain. Justin Kruger and David Dunning famously showed that the lowest performers on tests of logical reasoning, grammar, and humor are most likely to overestimate their test scores. Only through gaining expertise in a topic do people recognize its complexity and calibrate their confidence accordingly. Having to explain a phenomenon forces us to confront this complexity and realize our ignorance. At a time when political polarization, income inequality, and urban-rural separation have deeply fractured us over social and economic issues, recognizing our only modest understanding of these issues is a first step to bridging these divides.

neil_gershenfeld's picture
Physicist, Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality

"Ansatz" is a fancy way to say that scientists make stuff up.

The most common formulation of physics is based on what are called differential equations, which are formulas relating quantities to the rates at which they change. Some of these are easy to solve, some are hard to solve, and some can’t be solved. It turns out that there’s a deep reason why there’s no universal way to find these solutions: if one existed, it would let you answer questions that we know to be uncomputable (thanks to Alan Turing).

But differential equations do have a very handy property: for a given set of initial or boundary conditions, their solutions are unique. That means that if you find a solution, it’s the solution. You can guess a solution, try it out, and fiddle with it to see if you can make it work. If it does, your guess is justified after the fact. That’s what an ansatz is: a guess that you test. It’s a German word that could be translated as initial placement, starting point, approach, or attempt.
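A textbook illustration of the move, offered only as a sketch: to solve the oscillator equation

    \ddot{y} + \omega^2 y = 0

guess the ansatz y = e^{\lambda t}. Substituting gives \lambda^2 + \omega^2 = 0, so \lambda = \pm i\omega, and the guess becomes the familiar solution

    y(t) = A\cos(\omega t) + B\sin(\omega t)

which, once it also matches the initial conditions, uniqueness certifies as the solution.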

Hans Bethe famously did this in 1931 with an ansatz for the behavior of a chain of interacting particles. His solution has since been used to study systems ranging from electrons in a superconducting wire that can carry current without resistance, to trapped atoms in a quantum computer.

There's a similar concept in probability, called a prior. This is a guess that you make before you have any evidence. Once you do make observations, the prior gets updated to become what's called a posterior. It's initially equally plausible for the universe to be explained by a Flying Spaghetti Monster or the Feynman Lectures on Physics; the latter becomes more probable once its predictions are tested.

Finding an ansatz or a prior is a creative rather than rigorous process—they can come from hunches, or whims, or rumors. The rigor then comes in how you evaluate them. You could call this a hypothesis, but the way that term is taught misses both how these can start without justification, and how you initially expect them to be wrong but then patch them up.

My favorite approach to research management is "ready fire aim." You have to get ready by doing your homework in an area, then do something without thinking too much about it, then think carefully about what you just did. The problem with the more familiar "ready aim fire" is that if you aim first you can't hit anything unexpected. There's a sense in which everything I've ever done in the lab has failed at what I set out to do, but as a result something better has consistently happened.

Research progress is commonly expected to meet milestones. But a milestone is a marker that measures distance along a highway. To find something that's not already on the map, you need to leave the road and wander about in the woods beside it. The technical term for that is a biased random walk, which is how bacteria search for gradients in chemical concentrations. The historical lesson is just how reliable that random process of discovery is.

The essential misunderstanding between scientists and non-scientists is the perception that scientific knowledge emerges in a stately procession of received knowledge. As a result, ambiguity isn't tolerated, and changing conclusions are seen as a sign of weakness. Conversely, scientists shouldn't defend their beliefs as privileged; what matters is not where they come from, but how they're tested.

Science only appears to be goal-directed after the fact; while it's unfolding it's more like a chaotic dance of improvisation than a victory march. Fire away with your guesses, then be sure to aim.

stuart_firestein's picture
Professor and Chair, Department of Biological Sciences, Columbia University; Fellow, AAAS

It is said that Charles Darwin left on the Beagle as a Natural Philosopher and returned as a Scientist. Not because of anything he did while on the voyage, although he did plenty, but because in 1833 the Cambridge professor and Master of Trinity College, the polymath William Whewell (pronounced “hewell”), invented the word scientist. It was not the only word he coined (he also came up with ion, cathode and anode for Michael Faraday), but it is perhaps the most influential. Until Whewell invented the word, all those people we would today call scientists—beginning with Aristotle and including Newton, Galileo, Mendel, Galen—were known as Natural Philosophers. The distinction is revealing. Among the purposes of Natural Philosophers was to understand the mind of the creator through the study of the natural world. The study of science was an intellectual pursuit not distinct from theological examination. But that was changing.

Whewell’s suggestion of the term scientist came in response to a challenge from the poet Samuel Taylor Coleridge at the 1833 meeting of the British Association for the Advancement of Science in Cambridge. Coleridge, old and frail, had dragged himself to Cambridge and was determined to make his point. He stood and insisted that men of science in the modern day should not be referred to as philosophers since they were typically digging, observing, mixing or electrifying—that is, they were empirical men of experimentation and not philosophers of ideas. The remark was intended to be both a compliment and a slight. Science was everyday labor and philosophy was lofty thought. There was much grumbling among those in attendance until Whewell masterfully suggested that in “analogy with artist we form scientist.” Curiously, this almost perfect linguistic accommodation of workmanship and inspiration, of the artisanal and the contemplative, of the everyday and the universal—was not readily accepted. The term scientist came into popular use in America before it was generally adopted in England—and indeed for a time it was erroneously thought to have originated among those crass Americans then ruining the English language. It took some thirty years for it to come into general usage.

The root word science has now been regularly co-opted to mean the supposed rigorous practice of any area that used to be considered “just” scholarship. Thus we have library science, political science, linguistic science, etc., etc. Of course there’s nothing wrong with rigor per se, only that appending the word science doesn’t necessarily make it so. On the other hand, the word scientist, the person who stands behind the concept of science, has not been so co-opted—yet. Thus the scientist is still recognizable as someone who does experiments, observes data, theorizes and does her best to explain phenomena. She is still someone who tries hard not to be fooled, knowing that she herself is the easiest person to fool (to paraphrase Richard Feynman). Most importantly she still knows that she knows far less than she doesn’t know.

The objections of so many 19th-century scientists to the word scientist are instructive, because we can now see that its coinage was the beginning of a revolution in scientific practice no less disruptive than the first scientific revolution had been. Those Natural Philosophers were wealthy men (with very few exceptions) who dabbled in scientific explorations because they were considered to be the highest form of intellectual pursuit. They were not workers or laborers, and they would never have considered their scientific enterprise as something so pedestrian as a job. None were paid for their work, there were no grants, and no one would have thought to patent their work. But that was about to change. Science was indeed to become a career, a position in society and in the academy. At least in theory anyone could become a scientist with sufficient training and intellect. Science was professionalized and the scientist was a professional.

But how unfortunate it is that we have lost Whewell’s brilliant consilience (also a word he invented) between art and science—that “in analogy with artist we form scientist.” The professionalization of science has overtaken, in the public mind and in the mindset of many scientific institutions, the importance of values like creativity, intuition, imagination and inspiration in the scientific process. Too often believing there is a simple recipe for producing cures and gadgets—the so-called Scientific Method (famously associated with Francis Bacon, two centuries before Whewell), we are disappointed when scientists say that they are uncertain or that there are changing opinions, i.e., new ideas, about this or that supposedly settled fact.

Coleridge, the poet whose quarrels goaded Whewell into inventing the scientist, was actually quite attracted to science (claiming it provided him with some of his best metaphors) and was a close friend and confidant of the famed chemist and head of the Royal Institution, Humphry Davy. In an especially notable correspondence with Davy, Coleridge likened science to art because it was “…necessarily an activity performed with the passion of Hope, it was poetical.” Perhaps the modern scientist meme should be updated to include more of the hopeful poet than the authoritarian demagogue.

rebecca_newberger_goldstein's picture
Philosopher, Novelist; Recipient, 2014 National Humanities Medal; Author, Plato at the Googleplex; 36 Arguments for the Existence of God: A Work of Fiction

Has science discovered the existence of protons and proteins, neurons and neutrinos? Have we learned that particles are excitations of underlying quantum fields and that the transmission of inherited characteristics is accomplished by way of information-encoding genes? Those who answer no (as opposed to dunno) probably aren’t unsophisticated science deniers. More likely they’re sophisticated deniers of scientific realism.

Scientific realism is the view that science expands upon—and sometimes radically confutes—the view of the world that we gain by means of our sense organs. Scientific theories, according to this view, extend our grasp of reality beyond what we can see and touch, pulling the curtain of our corporeal limitations aside to reveal the existence of whole orders of unobserved and perhaps unobservable things, hypothesized in order to explain observations and having their reference fixed by the laws governing their behavior. In order for theories to be true (or at any rate, approximations of the truth) these things must actually exist. Scientific theories are ontologically committed.

Those who oppose scientific realism are sometimes called scientific non-realists and sometimes, more descriptively, instrumentalists. Their view is that scientific theories are instruments for predictions that don’t extend our knowledge of what exists beyond what is already granted to us by way of observation. Sure, theories seem to make reference to new and exotic entities, but bosons and fermions don’t exist the way raindrops on roses and whiskers on kittens do. Quantum mechanics no more commits us to the existence of quantum fields than the phrase “for our country’s sake” commits us to the existence of sakes. The content of a theory is cashed out in observable terms. A theory is a way of correlating observable input with observable output, the latter known as predictions. Yes, between the report of what has been observed and the prediction of what will be observed there is a whole lot of theory, complete with theoretical terms that function grammatically as nouns. But don’t, as Wittgenstein warned, let language “go on holiday.” These theoretical nouns should be understood as convenient fictions, to be spelled out in operational definitions. Science leaves ontology exactly as it finds it.

Instrumentalism is so deflationary a view of science that one might think it was conceived in the bitter bowels of some humanities department, determined to take science down a notch. But in fact in the 20th century instrumentalism became standard in physics for a variety of reasons, including the difficulties in solving the stubborn measurement problem in quantum mechanics. Then, too, there was strong influence wafting from the direction of logical positivism, the program that, in an effort to keep meaningless metaphysical terms from infiltrating our discourse and turning it into fine-sounding gibberish, had proposed a criterion of meaningfulness that pared the meaning of a proposition down to its mode of verification.

The thrust of these pressures drove many of the most prominent scientists toward instrumentalism, which let them wash their hands of an unruly quantum reality, rife with seeming paradox, while also toeing the strict positivist line (as evidenced by the frequent use of the word “meaningless”). The Copenhagen Interpretation, which was accepted as standard, dismissed the question of whether the electron was really a particle or a wave as meaningless, and asserted that to ask where the electron was in between measurements was likewise meaningless.

There were, of course, scientists who resisted—Einstein, Schrödinger, Planck, de Broglie, and, later on, David Bohm and John Stewart Bell, staunch realists all. Said Einstein: “Reality is the business of physics,” which is about as simple and direct a statement of scientific realism as can be. But Einstein’s realism marginalized him.

The Copenhagen Interpretation is no longer the only game in town, and its main competitors—for example, the many-worlds interpretation, in which what quantum mechanics describes is a plethora of equally realized possibilities, albeit existing in other worlds, and Bohmian mechanics, in which the unobserved particles are nonetheless real particles having actual positions and actual trajectories—are realist interpretations. Though they differ in their descriptions of what reality is like, they unflinchingly commit themselves to there being a reality that they are attempting to describe. So far as their empirical content goes, all these interpretations are equivalent. They are, from the standpoint of instrumentalism, indistinguishable, but from the standpoint of scientific realism vastly different.

The questions that press up behind the concept of scientific realism are still very much in play, and how we answer them makes a world of difference as to what we see ourselves doing when we’re doing science. Are we employing a device that churns out predictions, or are we satisfying, in the most reliable way that we have, our basic ontological urge to figure out where we are and what we are? Are we never carried, by way of science, beyond the contents of experience, or does science permit us to extend our reach beyond our meager sensory apparatus, enabling us to grasp aspects of reality—the elusive thing in itself—that would be otherwise inaccessible?

What a different picture of science—and of us—these two viewpoints yield. What, then, could be more central to the scientific mindset than the questions that swirl around scientific realism, since without confronting them we can’t even begin to say what the scientific mindset amounts to?

victoria_wyatt's picture
Associate Professor of History in Art, University of Victoria

When scientific concepts become metaphors, nuances of meaning often get lost. Some such errors are benign. In the vernacular, “quantum leap” has come to mean tremendous change. This misrepresents the physics, but usually without serious implications. There is no philosophy embedded in the metaphor that lends its misuse a particular power.

Yet errors in translation are not always so neutral. As a popular metaphor, the concept “evolve” gets conflated with progress. Evolved means better, as if natural law normally dictates constant improvement over time. In translating progress from species evolution to the metaphor of evolve, the significance of dynamic relationship to a specific environment gets lost. Through natural selection, species become better equipped to survive in their distinct environment. In a different environment, they may find themselves vulnerable. Divorced from context, their measure of progress breaks down. The popular metaphor of evolve misses this crucial point. Evolve often connotes progress without reference to context. Friends playfully tease that they need to evolve. Businesses boast that they have evolved. In such usage, evolved means essentially better, more sophisticated, more developed. As evolving occurs over time, the past is by definition inferior, a lower rung on the linear ladder to the future. Rapid changes in technology magnify the disconnect between present and past. Even the recent past appears more primitive, bearing little relevance to contemporary life.

A keen awareness of history is vital to intelligent decision making today. Used accurately in its scientific sense, a metaphor of evolve would encourage historical acumen by emphasizing the significance of specific context. Evolve in its common usage actually obscures the importance of context, undermining interest in connections with the past. This erasure of relationships bears serious implications. Species evolve in complex non-linear ecosystems. Evolve as the metaphor extracts the measure of progress from a multifaceted historical environment, linking it instead to a simple position in linear time. Recognizing present global challenges demands a different vision, one that acknowledges history and context. Climate change denial does not arise from a deep appreciation of complex dynamic relationships. In simplistic linear paradigms, it thrives.

The popular usage of “evolve” is a symptom rather than a cause. The metaphor did not create disinterest in specific, complicated conditions. Rather, a widespread preference for simplicity and essentialism over complexity and connections shaped the metaphor. Now it perpetuates that outlook, even as all signs point to an urgent need for more accurate apprehensions of reality.

Today we face problems much more daunting than the distortion of scientific concepts in popular metaphors. It’s tempting to consider such misappropriations merely as annoyances. However, they reflect an issue informing many of our greater challenges: a failure to educate about how the sciences relate to humanities, social sciences and fine arts. Without such integrated education, scientific concepts dangerously mutate as non-specialists apply them outside the sciences. When these misunderstandings infiltrate popular language and thought, demand for realistic approaches to global problem-solving suffers.

jared_diamond's picture
Professor of Geography, University of California Los Angeles; Author, Upheaval

You’re much more likely to hear “common sense” invoked as a concept at a cocktail party than at a scientific discussion. In fact, common sense should be invoked more often in scientific discussions, where it is sometimes deficient and scorned. Scientists may string out a detailed argument that reaches an implausible conclusion contradicting common sense. But many other scientists nevertheless accept the implausible conclusion, because they get caught up in the details of the argument.

I was first exposed to this fallacy when my high school teacher of plane geometry, Mr. Bridgess, gave us students a test consisting of a 49-step proof on which we had to comment. The proof purported to demonstrate that all triangles are isosceles, i.e. have two equal sides. Of course that conclusion is wrong: most triangles have unequal sides, and only a tiny fraction has two equal sides. Yet there it was, a 49-step proof, couched in grammatically correct language of geometry, each step apparently impeccable, and leading inexorably to the patently false conclusion that all triangles are isosceles. How could that be?

None of us geometry students detected the reason. It turned out that, somewhere around step 37, the proof asked us to drop a perpendicular bisector from the triangle’s apex to its base, then to do further operations. The proof tacitly assumed that that perpendicular bisector did intersect the triangle’s base, as is true for isosceles and nearly-isosceles triangles. But for triangles whose sides are very unequal in length, the perpendicular bisector doesn’t intersect the base, and all of the proof’s steps from step 38 onwards were fictitious. Conclusion: don’t get bogged down in following the details of a proof, if it leads to an implausible conclusion.

Distinguished scientists who should know better still fall into equivalents of Mr. Bridgess’s trap. I’ll tell you two examples. My first example involves the famous Michelson-Morley experiment, one of the key experiments of modern physics. In experiments beginning in 1881, the American physicists A.A. Michelson and E.W. Morley showed that the speed of light in space did not depend on light’s direction with respect to the Earth’s direction of motion. This discovery was explained only two decades later by Albert Einstein’s theory of relativity, for which the Michelson-Morley experiment offered crucial support.

Another two decades later, though, another physicist carried out a complicated re-analysis of Michelson’s and Morley’s experiment. He concluded that their conclusion had been wrong. If so, that would have shaken the validity of Einstein’s formulation of relativity. Of course Einstein was asked his assessment of the re-analysis. His answer, in effect, was: “I don’t have to waste my time studying the details of that complex re-analysis to figure out what’s wrong with it. Its conclusion is obviously wrong.” That is, Einstein was relying on his common sense. Eventually, other physicists did waste their time on studying the re-analysis, and did discover where it had made a mistake.

That example of Mr. Bridgess’s fallacy comes from the field of physics over 80 years ago. My other example comes from the field of archaeology today. Throughout most of human pre-history, human evolution was confined to the Old World, and the Americas were uninhabited. Humans did eventually penetrate from Siberia over the Bering Strait land bridge into Alaska during the last Ice Age. But, for thousands of years thereafter, they were still prevented from spreading further south by the ice sheet that stretched uninterruptedly across Canada, from the Pacific Ocean to the Atlantic Ocean.

The first well-attested settlement of the Americas south of the Canada/U.S. border occurred around 13,000 years ago, as the ice sheets were melting. That settlement is attested by the sudden appearance of stone tools of the radiocarbon-dated Clovis culture, named after the town of Clovis, New Mexico, where the tools and their significance were first recognized. Clovis tools have now been found throughout the lower 48 U.S. states and south into Mexico. That sudden appearance of a culture abundantly filling up the entire landscape is what one expects and observes whenever humans first colonize fertile empty lands.

But any claim by an archaeologist to have discovered “the first X” is taken as a challenge by other archaeologists to discover an earlier X. In this case, archaeologists feel challenged to discover pre-Clovis sites, i.e. sites with different stone tools and dating to before 13,000 years ago. Every year nowadays, new claims of pre-Clovis sites in the U.S. and South America are advanced, and subjected to detailed scrutiny. Eventually, it turns out that most of those claims are invalidated by the equivalent of technical errors at step 37: e.g., the radiocarbon sample was contaminated with older carbon, or the radiocarbon-dated material really wasn’t associated with the stone tools. But, even after complicated analyses and objections and rebuttals, a few pre-Clovis claims have not yet been invalidated. At present, the most widely discussed such claims are for Chile’s Monte Verde site, Pennsylvania’s Meadowcroft site, and one site each in Texas and in Oregon. As a result, the majority of American archaeologists currently believe in the validity of pre-Clovis settlement.

To me, it seems instead that pre-Clovis believers have fallen into the archaeological equivalent of Mr. Bridgess’s fallacy. It’s absurd to suppose that the first human settlers south of the Canada/U.S. border could have been airlifted by non-stop flights to Chile, Pennsylvania, Oregon, and Texas, leaving no unequivocal signs of their presence at intermediate sites. If there really had been pre-Clovis settlement, we would already know it and would no longer be arguing about it. That’s because there would now be hundreds of undisputed pre-Clovis sites distributed everywhere from the Canada/U.S. border south to Chile.

As Mr. Bridgess told us plane geometry students, “Use common sense, and don’t be seduced by the details. Eventually, someone will discover the errors in those details.” That advice is as true in modern science as it is in plane geometry.

helen_fisher's picture
Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

What makes a happy marriage? Psychologists offer myriad suggestions, from active listening to arguing appropriately and avoiding contempt. But my brain-scanning partners and I have stumbled on what happens in the brain when you are in a long-term, happy partnership.

In research published in 2011 and 2012, we put seven American men and 10 American women (all in their 50s and 60s) into the brain scanner. Their average duration of marriage was 21.4 years; most had grown children; and all maintained that they were still madly in love with their spouse (not just loving but in love). All showed activity in several areas of the dopamine-rich reward system, including the ventral tegmental area and dorsal striatum—brain regions associated with feelings of intense romantic love. All also showed activity in several regions associated with feelings of attachment, as well as those linked with empathy and controlling your own stress and emotions.

These data show that you can remain in love with a partner for the long term.

More intriguing, we found reduced activity in a region of the cerebral cortex known as the ventromedial prefrontal cortex, which is associated with our human tendency to focus on the negative rather than the positive (among many other functions linked to social judgment). These brain functions may have evolved millions of years ago—perhaps primarily as an adaptive response to strangers who wandered into one’s neighborhood. Natural selection has long favored those who responded negatively to the one malevolent intruder, rather than positively to myriad friendly guests.

But reduced activity in this brain region suggests that our happily-in-love long-term partners were overlooking the negative to focus on the positive aspects of their marital relationships—something known to scientists as “positive illusions.” Looking at our brain-scanning results from other experiments, including long-term lovers in China, we found similar patterns. We humans are able to convince ourselves that the real is the ideal.

The neural roots of tolerance, mercy and pardon may live deep in the human psyche.

hans_ulrich_obrist's picture
Curator, Serpentine Gallery, London; Editor: A Brief History of Curating; Formulas for Now; Co-author (with Rem Koolhas), Project Japan: Metabolism Talks

According to James Lovelock's Gaia Hypothesis, the planet Earth is a self-regulating living being. In this captivating theory, the planet, in all its parts, remains in suitable conditions for life thanks to the behavior and action of living organisms.

Lovelock is an independent scientist, environmentalist, inventor, author and researcher whose early interest in science fiction led him to Olaf Stapledon's idea that the Earth itself may have consciousness. From Erwin Schrödinger's What Is Life, he picked up the theory of “order-from-disorder,” based on the second law of thermodynamics, according to which “entropy only increases in a closed system (such as the universe)” and thus “living matter evades the decay to thermodynamical equilibrium by homeostatically maintaining negative entropy … in an open system.”

As a researcher at NASA, he worked on developing instruments for the analysis of extraterrestrial atmospheres. This led to an interest in potential life forms on Mars. He came up with the idea that to establish whether or not there is life on Mars, all one has to do is to measure the composition of the gases present in the atmosphere.

When I visited Lovelock last year in his home on Chesil Beach, he told me that it was in September 1965 that he had his epiphany. He was in the Jet Propulsion Lab with the astronomer Carl Sagan and the philosopher Dian Hitchcock, who was employed by NASA to look at the logical consistency of the experiments conducted there. Another astronomer entered the office with the results of an analysis of the atmospheres of Venus and Mars. In both cases, the atmosphere was composed almost entirely of carbon dioxide, while Earth's atmosphere also contains oxygen and methane. Lovelock asked himself why the Earth's atmosphere was so different from that of its two sister planets. Where do the gases come from?

Reasoning that oxygen comes from plants and methane comes from bacteria—both living things—he suddenly understood that the Earth must be regulating its atmosphere. When Lovelock began to talk about his theory with Sagan, the astronomer's first response was, "Oh, Jim, it's nonsense to think that the Earth can regulate itself. Astronomical objects don't do that." But then Sagan said, “Hold on a minute, there is one thing that's been puzzling us astronomers, and that's the cool sun problem: At the Earth's birth the sun was 30 percent cooler than it is now, so why aren't we boiling?”

This brought Lovelock to the realization that “If the animal and plant life regulate the CO2, they can control the temperature.” And that was when Gaia entered the building. Though the idea has been criticized as New Age, the first major scientist to take it to heart was the hard-nosed, empirically driven biologist and evolutionary theorist Lynn Margulis. Because Lovelock was trained on the medical side, in bacteriology, he tended to think of bacteria as pathogens. He hadn’t previously thought of them as the great infrastructure that keeps the earth going. As he told me, “It was Lynn who drove that home.” Margulis understood that, contrary to so many interpretations, the Gaia hypothesis was not a vision of the earth as a single organism but as a jungle of interlacing and overlying entities, each of which generates its own environment.

Lovelock has tried to persuade humans that they are unwittingly no more than Gaia's disease. The challenge this time is not to protect humans against microbes, but to protect Gaia against those tiny microbes called humans. “Just as bacteria ran the earth for two billion years and ran it very well, keeping it stabilized,” he said, “we are now running the Earth. We’re stumbling a bit, but the future of the Earth depends on us as much as it depended on the bacteria.”

jonathan_b_losos's picture
Monique and Philip Lehner Professor for the Study of Latin America, Professor of Organismic and Evolutionary Biology, Harvard University; Curator in Herpetology, Museum of Comparative Zoology; Author, Improbable Destinies

It’s easy to think of natural selection as omnipotent. As Darwin said, “natural selection is daily and hourly scrutinizing…every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good.” And the end result? Through time, a population becomes better and better adapted. Given enough time, wouldn’t we expect natural selection to construct the ideal organism, optimally designed to meet the demands of its environment?

If natural selection worked like an engineer—starting with a blank slate and an unlimited range of materials, designing a blueprint in advance to produce the best possible structure—then the result might indeed be perfection. But that’s not a good analogy for how natural selection works. As Nobel laureate Francois Jacob suggested in 1977, the better metaphor is a tinkerer who “gives his materials unexpected functions to produce a new object. From an old bicycle wheel, he makes a roulette; from a broken chair the cabinet of a radio.” In just this way “evolution does not produce novelties from scratch. It works on what already exists, either transforming a system to give it new functions or combining several systems to produce a more elaborate one.”

What a tinkerer can produce is a function of the materials at hand, and the consequence is that two species facing the same environmental challenge may adapt in different ways. Consider the penguin and the dolphin. Both are speedy marine organisms descended from ancestors that lived on land. Although they live similar lifestyles, chasing down swift prey, they do so in different ways. Most fast-swimming marine predators propel themselves by powerful strokes of their tails, and the dolphin is no exception. But not the penguin—it literally flies through the water, its aquatic celerity propelled by its wings.

Why haven’t penguins evolved a powerful tail for swimming like so many other denizens of the sea? The answer is simple. Birds don’t have tails (they do have tail feathers, but no underlying bones). Natural selection, the tinkerer, had nothing to work with, no tail to modify for force production. What the penguin’s ancestor did have, however, were wings, already well-suited for moving through air. It didn’t take much tinkering to adapt them for locomotion in a different medium.

Sometimes, the tinkerer’s options are limited and the outcome far from perfect. Take, for example, the panda’s “thumb,” made famous by Stephen Jay Gould. As opposable digits go, the modified wrist bone is subpar, limited in flexibility and grasping capabilities. But it gets the job done, helping the panda grasp the bamboo stalks on which it feeds.

Or consider another example. The long and flexible neck of the swan is constructed of twenty-five vertebrae. Elasmosaurus, a giant marine reptile from the Age of Dinosaurs with a contortionist’s neck as long as its body, took this approach to the extreme, with seventy-two vertebrae. Pity, then, the poor giraffe, with only seven long, blocky vertebrae in its seven-foot bridge. Wouldn’t more bones have made the giraffe more graceful, better able to maneuver through branches to reach its leafy fare? Probably so, but the tinkerer didn’t have the materials. For some reason, mammals are almost universally constrained to have seven cervical vertebrae, no more, no less. Why this is, nobody knows for sure—some have suggested a link between neck vertebra number and childhood cancer. Whatever the reason, natural selection didn’t have the necessary materials—it couldn’t increase the number of vertebrae in the giraffe’s neck. So, it did the next best thing, increasing the size of the individual vertebrae to nearly a foot in length.

There are several messages to be taken from the realization that natural selection functions more like a tinkerer than an engineer. We shouldn’t expect optimality from natural selection—it just gets the job done, taking the easiest, most accessible route. And as a corollary: we humans are not perfection personified, just natural selection’s way of turning a quadrupedal ape into a big-brained biped. Had we not come along, some other species, from some other ancestral stock, might eventually have evolved hyper-intelligence. But from different starting blocks, that species probably would not have looked much like us.

laurence_c_smith's picture
Professor of Environmental Studies, Brown University; Author, Rivers of Power

Ocean acidification, a stealthy side effect of rising anthropogenic CO2 emissions, is a recently discovered, little-recognized global climate change threat that ought to be more widely known.

Unlike the warming effect of rising atmospheric CO2 levels on air temperatures—which scientists have understood theoretically since the late 1800s and began describing forcefully in the late 1970s—ocean acidification had its alarm bell rung only in 2003, in a brief scientific paper. That paper introduced the term “ocean acidification” to describe how some of the rising CO2 is absorbed and dissolved into the surface waters of the ocean. This has the benefit of slowing the pace of air temperature warming (thus far, oceans have absorbed at least a quarter of anthropogenic CO2 emissions) but the detriment of lowering the pH of the world’s oceans.

In chemistry notation, dissolving carbon dioxide in water yields carbonic acid (CO2 + H2O ↔ H2CO3), which quickly converts into bicarbonate (HCO3-) and hydrogen (H+) ions. Hydrogen ion concentration defines pH (hence the “H” part of pH). Already, the concentration of H+ ions in ocean water has increased nearly 30% relative to pre-industrial levels. The pH of the world’s oceans has correspondingly dropped about 0.1 pH unit, and is expected to fall another 0.1 to 0.3 units by the end of this century. These numbers may sound small, but the pH scale is logarithmic, so 1 unit of pH represents a ten-fold change in hydrogen ion concentration.
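
A quick way to see what the logarithm implies: a drop of x pH units multiplies the hydrogen ion concentration by 10 to the power x. The short Python sketch below checks the figures quoted above, the 0.1-unit drop already observed and the further 0.1 to 0.3 units projected, plus a full unit for scale.

    # Because pH is the negative base-10 logarithm of [H+], a drop of x pH units
    # multiplies [H+] by 10**x.
    for drop in (0.1, 0.3, 1.0):
        factor = 10 ** drop
        print(f"pH drop of {drop}: [H+] rises by a factor of {factor:.2f} "
              f"({(factor - 1) * 100:.0f}% increase)")
    # pH drop of 0.1: [H+] rises by a factor of 1.26 (26% increase)
    # pH drop of 0.3: [H+] rises by a factor of 2.00 (100% increase)
    # pH drop of 1.0: [H+] rises by a factor of 10.00 (900% increase)
    # The observed rise of nearly 30% corresponds to a drop of just over 0.1 unit.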

As the added hydrogen ions convert carbonate ions into bicarbonate, the availability of the carbonate minerals calcite and aragonite in ocean water decreases, depriving marine mollusks, crustaceans, and corals of the primary ingredients from which they build their protective shells and skeletons. Highly calcified mollusks, echinoderms, and reef-building corals are especially sensitive. Low-pH ocean water can also corrode shells directly.

Of the familiar organisms most impacted by this, oysters and mussels are especially vulnerable, but the detrimental effects of ocean acidification go far beyond shelled seafood. They threaten the viability of coral reefs and of smaller organisms, like foraminifera, that form the base of marine food webs. Nor does the problem stop at shell and skeleton building: new research shows that small changes in ocean water pH alter the behavior of fish, snails, and other mobile creatures. They become stunned and confused. They lose sensory responsiveness to odors and sounds, and can be more easily gobbled up by predators.

How these multiple, cascading impacts will play out for our planet’s marine ecosystem is unknown. What we do know is that as some species suffer, other, more tolerant species will replace them. Spiny sea urchins may do better (up to a point), for example. Jellyfish are especially well positioned to flourish in low pH. In a world of acidifying oceans, we can envision beaches awash not with pretty shells, but with the gelatinous, stinging corpses of dead jellyfish.

Scientists are now investigating novel ways to try to mitigate the ocean acidification problem. Certain species of sea grass, for example, may locally buffer pH. Planting or reintroducing sea grasses could provide some relief in protected estuaries and coves. Selective breeding experiments are underway to develop strains of aquatic plants and animals that are more tolerant of low pH waters. At the extreme end of scientific tinkering, perhaps new, genetically engineered marine organisms may be on the horizon.

While some of these ideas hold promise for particular locations and species, none can stabilize pH at the global scale. Genetic engineering raises a whole other set of ethical and ecological issues. Furthermore, some of the more hopeful “geoengineering” solutions proposed to combat air temperature warming work by increasing the earth’s reflectivity (e.g. blasting sulfate aerosols into the stratosphere to reflect incoming sunlight back into space). These strategies have problems of their own, like unknown impacts on global rainfall patterns, and while they might mitigate CO2-induced air temperature warming, they do nothing to reduce atmospheric CO2 levels and so have zero impact on ocean acidification. At the present time, the only viable way to slow ocean acidification at the global scale is to reduce human-induced CO2 emissions.

steven_r_quartz's picture
Neuroscientist; Professor of Philosophy, Caltech; Co-author, Cool

For at least fifty years, the rational self-interested agent of neoclassical economics, Homo economicus, has been questioned, rebutted, and in some cases disparaged as a model psychopath. A seminal critique appeared in 1968 with the publication of Garrett Hardin’s article, “The Tragedy of the Commons.” Hardin invited the reader to consider a pasture open to all neighboring herdsmen. If those herdsmen pursued their own rational self-interest, he reasoned, they would continue to add cows to their herds, ultimately leading to the pasture’s destruction. This has the structure of a multi-player Prisoner’s Dilemma, wherein the individual pursuit of rational self-interest inevitably leads to social catastrophe.

Contrary to Homo economicus, people appear to care about more than their own material payoffs. They care about fairness and appear to care about the positive welfare of others. They possess what economists refer to as social preferences. In Dictator games, for example, a person is given a monetary endowment that they can share with an anonymous partner. People often transfer some money to the other player, even though that player is merely a passive participant and cannot punish them for keeping everything. Such behavior has been interpreted as evidence for strong reciprocity, a form of altruism on which human cooperation may depend. At the level of psychological mechanism, emotions make the difference: the reported links between empathy and sharing in Dictator games, for example, mark the distinction between real people and Homo economicus.

As much as we may celebrate the fall of Homo economicus, he would never cut off his nose to spite his face. Far less well known is recent research probing the darker side of departures from rational self-interest. What emerges is a creature fueled by antisocial preferences, who creates a whole variety of social dilemmas. The common feature of antisocial preferences is a willingness to make others worse off even when it comes at a cost to oneself. Such behaviors are distinct from more prosocial ones, such as altruistic punishment, where we may punish someone for violating social norms. They are more like basic spite, envy, or malice. An emerging class of economic games, such as money-burning games and vendetta games, illustrates the difference. In a basic Joy of Destruction game, for example, two players would be given $10 each and then asked if they want to pay $1 to burn $5 of their partner’s income.
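
To make the incentive structure explicit, here is a minimal sketch of the payoff table for the game just described, using the dollar amounts given above; the code and names are mine, and real experimental designs vary in their details.

    # Payoffs in a simple Joy of Destruction game: each player starts with $10
    # and may pay $1 to burn $5 of the partner's endowment.
    ENDOWMENT, BURN_COST, BURN_DAMAGE = 10, 1, 5

    def payoffs(p1_burns, p2_burns):
        p1 = ENDOWMENT - (BURN_COST if p1_burns else 0) - (BURN_DAMAGE if p2_burns else 0)
        p2 = ENDOWMENT - (BURN_COST if p2_burns else 0) - (BURN_DAMAGE if p1_burns else 0)
        return p1, p2

    for p1_burns in (False, True):
        for p2_burns in (False, True):
            print(f"P1 burns={p1_burns}, P2 burns={p2_burns} -> payoffs {payoffs(p1_burns, p2_burns)}")
    # Burning always lowers the burner's own payoff, so a purely self-interested
    # Homo economicus never burns. The striking finding is that real people sometimes do.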

Why would someone pay money to inflict harm on another person who has done nothing against them? The expression and intensity of antisocial preferences appear linked to resource scarcity and competition pressures. Among pastoralists of southern Namibia, for example, Sebastian Prediger and colleagues found that 40% of pastoralists from low-yield rangelands burned their partner’s money, compared to about 23% of pastoralists from high-yield areas.

Antisocial preferences thus follow an evolutionary logic found across nature, rooted in behaviors as rudimentary as bacteria releasing toxins to kill closely related species: harming behaviors reduce competition and should thus covary with competition intensity. In humans, they underlie such real-world phenomena as the rate of “witch” murders in rural Tanzania. As Edward Miguel found there, these murders double during periods of crop failure. The so-called witches are typically elderly women killed by relatives; they are blamed for causing the crop failure, and their deaths, as the most unproductive members of a household, help alleviate economic hardship in times of extremely scarce resources.

Why should the concept of antisocial preferences be more widely known and used in the general culture? I think there are two main reasons. First, although we still tend to blame Homo economicus for many social dilemmas, many of them are better explained by antisocial preferences. Consider, for example, attitudes toward income redistribution. If these were based on rational self-interest, anyone earning less than the mean income should favor redistribution, since they stand to benefit from that policy. Since income inequality skews the income distribution rightward, with increasing inequality a larger share of the population has income below the mean, and so support for redistribution should rise. Yet empirically this is not the case. One reason is antisocial preferences. As Ilyana Kuziemko and colleagues found, people exhibit “last-place aversion” both in the lab and in everyday social contexts. That is, individuals near the bottom of the income distribution oppose redistribution because they fear it might result in people below them either catching up to them or overtaking them, leaving them at the bottom of the status hierarchy.

The second reason why antisocial preferences should be more widely known has to do with long-run trends in resource scarcity and competition pressures. A nearly 40-year trend of broad-based wage stagnation and projections of anemic long-term economic growth mean increasing resource scarcity and competition pressures for the foreseeable future. As a result, we should expect antisocial preferences to increasingly dominate prosocial ones as primary social attitudes. In the United States, for example, the poorest and unhealthiest states are the ones most opposed to Federal programs aimed at helping the poorest and unhealthiest. We can only make sense of such apparently paradoxical human behavior through a broader understanding of the irrational, spiteful and self-destructive behaviors rooted in antisocial preferences and the contexts that trigger them.

ian_bogost's picture
Ivan Allen College Distinguished Chair in Media Studies and Professor of Interactive Computing, Georgia Institute of Technology; Founding Partner, Persuasive Games LLC; Contributing Editor, The Atlantic

Some problems are easy, but most problems are hard. They exceed humans’ ability to grasp and reason about possible answers. That’s not just true of complex scientific and political problems, like making economic decisions or building models to address climate change. It’s also true of ordinary life. “Let’s get dinner tonight.” “Okay, but where?” Questions like these quickly descend into existential crisis without some structure. “Who am I, even?”

One way that mathematicians think about complex problems is by means of the possibility space of their solutions. (It’s also sometimes called a solution space, or probability space.) In mathematics, possibility spaces are used as a register or ledger of all the possible answers to a problem. For example, the possibility space for a toss of a coin is heads or tails. Of two coins: heads-heads, heads-tails, tails-heads, and tails-tails.
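
The coin example is small enough to write down exhaustively. Here is a minimal Python sketch of that ledger (the function name is mine):

    import itertools

    # The possibility space of n coin tosses: every way the tosses can come out.
    def coin_space(n):
        return list(itertools.product(("heads", "tails"), repeat=n))

    print(coin_space(1))        # [('heads',), ('tails',)]
    print(coin_space(2))        # the four two-coin outcomes listed above
    print(len(coin_space(20)))  # 1048576 -- already too many to inspect by hand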

That’s a simple enough example, because any given subset of the possibility space can be measured and recorded. But in other cases, the possibility space might be very large, or even infinite. The forms of possible life in the universe, for example, or the possible future branches of evolution. Or the possible games of Go. Or even all the things you might do with an evening.

In these cases, not only is it difficult or impossible to diagram the possibility space completely, but also it usually doesn’t even really make sense to try. An economist might build a model of possible evenings out from, say, the net marginal benefit of a movie or a bike ride or a beef wellington in relation to the cost of those benefits, but such a practice assumes a rationalism that doesn’t exist in ordinary life.

In game design, creators often think of their task as one of creating possibility spaces for their players. In the ancient Chinese folk game Go, a set of stones, stone-placement rules, and a board provide a very large possibility space for overall play. But each individual move is much more limited, reliant on the set of choices each player has made previously. Otherwise, it would be impossible for players ever to make a move. One never plays within the total mathematical possibility space of all games of Go, but within the much narrower set of possible, legal moves on a given board at a given time.

Some designers exalt the mathematical largesse of games like Go and Chess, hoping to produce the largest possibility space with the fewest components. But more often, what makes a game aesthetically unique is not how mathematically large or deep it is, but how interesting and distinctive its components and their possible arrangements are. Tetris has only seven different pieces, all of which operate the same way. The delight of Tetris comes from learning to identify and manipulate each of those pieces in various situations.

The exercise of actively and deliberately limiting a possibility space has utility well beyond science, mathematics, and game design. Every situation can be addressed more deliberately and productively by acknowledging or imposing limitations in order to produce a thinkable, actionable domain of possible action. That doesn’t mean you have to make a utility diagram every time you load the dishwasher or debate an evening out with friends. Rather, the first step in any problem is accepting that a wealth of existing limitations is already present, waiting to be acknowledged and activated.

When faced with large or infinite possibility spaces, scientists try to impose limits in order to create measurable, recordable work. An astrobiologist might build a possibility space of possible alien life by limiting inquiry to stars or planets of a certain size and composition, for example. When you debate a venue for an evening meal, you do likewise—even though you probably don’t think about it this way under normal circumstances: What kind of food do you feel like? How much do you want to spend? How far are you willing to travel? Fixing even one or two of them often produces a path toward progress. And it does so while avoiding the descent into an existential spiral, searching ever inward for who you really are, or what you really want as the ultimate source for human choices. In ordinary life, as much as in science, the answers are already there in the world, more than they are invented inside your head.

stuart_a_kauffman's picture
Professor of Biological Sciences, Physics, Astronomy, University of Calgary; Author, Reinventing the Sacred

“Non-ergodic” is a fundamental but too little known scientific concept. Non-ergodicity stands in contrast to “ergodicity.” “Ergodic” means that the system in question visits all its possible states. In statistical mechanics this is based on the famous “ergodic hypothesis,” which, mathematically, lets one give up integrating Newton’s equations of motion for the system. Ergodic systems have no deep sense of “history.” Non-ergodic systems do not visit all of their possible states. In physics perhaps the most familiar case of a non-ergodic system is a spin glass, which “breaks” ergodicity and visits only a tiny subset of its possible states, hence exhibits history in a deep sense.
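
A toy illustration of the difference, under invented transition rules: two random walks on four states, one whose rules connect every state and one whose rules split the states into two disconnected basins, a loose stand-in for a spin glass stuck in one valley.

    import random

    ERGODIC = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # every state reachable
    TRAPPED = {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}   # two disconnected basins

    def visited(transitions, start=0, steps=10_000):
        state, seen = start, {start}
        for _ in range(steps):
            state = random.choice(transitions[state])
            seen.add(state)
        return sorted(seen)

    random.seed(1)
    print(visited(ERGODIC))   # [0, 1, 2, 3]: the walk forgets where it started
    print(visited(TRAPPED))   # [0, 1]: which basin you started in matters forever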

Even more profoundly, the evolution of life in our biosphere is deeply non-ergodic and historical. The universe will not create all possible life forms. This non-ergodicity, together with heritable variation, is the substantive basis for Darwinian evolution, even without specifying the means of heritable variation, whose basis Darwin did not know. Non-ergodicity gives us history.

george_church's picture
Professor, Harvard University; Director, Personal Genome Project; Co-author (with Ed Regis), Regenesis
DNA

You might feel that “DNA” is already one of the most widely known scientific terms—with 392 million Google hits and an Ngram score rising swiftly since 1946 to surpass terms like bread, pen, bomb, surgery and oxygen. DNA even beats seemingly more general terms like genetics or inheritance. This super-geeky acronym for deoxyribonucleic acid (by the way, not an acid in nature, but a salt) has inspired vast numbers of clichés like “corporate DNA” and cultural tropes like crime-scene DNA. It is vital for all life on earth, is responsible for the presence of oxygen in our atmosphere, and is present in every tissue of every one of our bodies. Nevertheless, knowing even a tiny bit (which could save your life) about the DNA in your own body has probably lagged behind your literacy about sports, fictional characters, and the doings of celebrities.

The news is that you can now read all of your genes in detail for $499 and nearly your complete DNA genome for $999. The nines even make it sound like consumer pricing. (“Marked down from $2,999,999,999 to this low, low price. Hurry! Supplies are limited!”) But what do you get? If you are fertile, even if you have not yet started making babies or are already “done,” there is a chance that you will produce live human number 7,473,123,456. You and your mate could be healthy carriers of a serious disease causing early childhood pain and death, like Tay-Sachs, Walker-Warburg, Niemann-Pick type A or nemaline myopathy. Even if you have no example in your family history, you are still at risk. The cost of care can be $20 million for some genetic diseases, but the psychological impact on the sick child and the family goes far beyond economics. In addition, diagnosis of genetic diseases with adult onset can suggest drugs or surgeries that add many quality-adjusted life years (QALYs).
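
A back-of-the-envelope sketch, in Python, of why a clean family history is not reassuring for recessive diseases. The carrier frequency below is a hypothetical round number, not a figure for any of the disorders named above.

    carrier_frequency = 1 / 50      # assumed fraction of healthy carriers in the population

    both_carriers = carrier_frequency ** 2
    affected_child = both_carriers * 0.25   # recessive inheritance: 1-in-4 risk per child
    print(f"Chance both partners are carriers: {both_carriers:.4%}")
    print(f"Chance a given child is affected:  {affected_child:.5%}")
    # Carriers are themselves healthy, so most affected children are born to
    # couples with no family history of the disease at all.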

Would you take analogous chances on your current family (for example, refusing air bags)? If not, why avoid knowing your own DNA? As with other (non-genetic) diagnoses, you don’t need to learn anything that is not currently highly predictive and actionable—nor involves action that you’d be unwilling to take. You might get your DNA reading cost reimbursed via your health insurance or healthcare provider, but at $499 is this really an issue? Do we wait for a path to reimbursement before we buy a smart phone, a fancy meal, a new car (with airbags) or remodel the kitchen? Isn’t a healthy child or 10 QALYs for an adult worth it? And before or after you serve yourself, you might give a present to your friends, family or employees. Once we read human genome DNA number 7,473,123,456, then we will have a widely known scientific concept worth celebrating.

coco_krumme's picture
Applied Mathematician, UC Berkeley; Founder, Leeward Co.

"Like all men in Babylon, I have been procounsel; like all, a slave"
Jorge Luis Borges, Lottery in Babylon

The lottery in Babylon begins as a simple game of chance. Tickets are sold, winners are drawn, a prize awarded. Over time, the game evolves. Punishments are doled out alongside prizes. Eventually the lottery becomes compulsory. Its cadence increases, until the outcomes of its drawings come to underpin everything. Mundane events and life turns are subject to the lottery’s "intensification of chance." Or perhaps, as Borges alludes, it is the lottery's explanatory power that grows, as well as that of its shadowy operator the Company, until all occurrences are recast in light of its odds.

“Babylonian Lottery” is a term borrowed from literature for a phenomenon that has no scientific name: the slow encroachment of programmatic chance, or what we like to refer to today as "algorithms."

Today as in Babylon, we feel the weight of these algorithms. They amplify our product choices and news recommendations; they're embedded in our financial markets. While we may not have direct experience building algorithms or for that matter understand their reach—just as the Babylonians never saw the Company—we believe them to be all-encompassing.

Algorithms as rules for computation are nothing new. What is new is the sudden cognizance of their scope. At least three things can be said of the Babylonian lottery, and of our own:

First, the Babylonian lottery increases in complexity and reach over time. Similarly, our algorithms have evolved from deterministic to probabilistic, broadening in scope and incorporating randomness and noisy social signals. A probabilistic computation feels somehow mightier than a deterministic one; we can know it in expectation but not exactly.

Second, while in the beginning all Babylonians understand how the lottery works, over time fewer and fewer do. Similarly, today's algorithms are increasingly specialized. Few can both understand a computational system from first principles and make a meaningful contribution at its bleeding edge.

Third, the Babylonians for some time brushed under the rug the encroachment of the lottery. Because an “intensification of chance” conflicts with our mythologies of self-made meritocracy, we too ignore the impact of algorithms for as long as possible.

So here we are, with algorithms encroaching, few who understand them, and finally waking up. How do we avoid the fate of the Babylonians?

Unfortunately, those of us in the centers of algorithm-creation are barking up the wrong tree. That algorithms are not neutral, but in many cases codify bias or chance, isn’t news to anyone who’s worked with them. But as this codification becomes common knowledge, we look for a culprit.

Some point to the "lack of empathy" of algorithms and their creators. The solution, they suggest, is a set of more empathetic algorithms to subjugate the dispassionate ones. To combat digital distraction, they'd throttle email on Sundays and build apps for meditation. Instead of recommender systems that reveal what you most want to hear, they'd inject a set of countervailing views.

The irony is that these manufactured gestures only intensify the hold of a Babylonian lottery.

We can no more undo the complexity of such lotteries than we can step back in time. We've swept our lottery's impact under the rug of our mythologies for a good while. Its complexity means no one in a position to alter it also understands its workings in totality. And we've seen the futility of building new algorithms to subvert the old.

So what do we do? How could the Babylonians have short-circuited the lottery?

For Borges' Babylonian narrator there are only two ways out. The first is physical departure. We encounter the narrator, in shackles, aboard a ship bound out of town. Whether his escape is a sign of the lottery's shortcoming or a testament to its latitude remains ambiguous.

The second, less equivocal way out of Babylon is found, as we see, in storytelling. By recounting the lottery's evolution, the narrator replaces enumeration with description. A story, like a code of ethics, is unlike any algorithm. Algorithms are rules for determining outcomes. Stories are guides to decision-making along the way.

Telling the tale ensures that the next instantiation of the lottery isn’t merely a newly parameterized version of the old. A story teaches us to make new mistakes rather than recursively repeating the old. It reminds us that the reach of algorithms is perhaps more limited than we fear. By beginning with rather than arriving at meaning, a story can overcome the determinism of chance.

Babylon as a physical place, of course, fell apart. Babylon as a story endures.

dylan_evans's picture
Founder and CEO of Projection Point; Author, The Utopia Experiment

The poet John Keats coined the term negative capability to refer to the ability to remain content with half-knowledge “without any irritable reaching after fact and reason.” The opposite of negative capability is known by psychologists as the need for closure (NFC).  NFC refers to an aversion toward ambiguity and uncertainty, and the desire for a firm answer to a question. When NFC becomes overwhelming, any answer, even a wrong one, is preferable to remaining in a state of confusion and doubt.  

If we could represent the knowledge in any given brain as dry land, and ignorance as water, then even Einstein’s brain would contain just a few tiny islands scattered around in a vast ocean of ignorance. Yet most of us find it hard to admit how little we really know. How often, in the course of our everyday conversations, do we make assertions for which we have no evidence, or cite statistics that are really nothing but guesses?  Behind all these apparently innocuous confabulations lies NFC.

There is nothing wrong with wanting to know the answer to a question, or feeling disturbed by the extent of our ignorance. It is not the reaching after fact and reason that Keats condemns, but the irritable reaching after fact and reason. However great our desire for an answer may be, we must make sure that our desire for truth is even greater, with the result that we prefer to remain in a state of uncertainty rather than filling in the gaps in our knowledge with something we have made up.

Greater awareness of the dangers of NFC would lead to people saying “I don’t know” much more often. In fact, everyday conversation would overflow with admissions of ignorance. This would represent a huge leap toward the goal of greater rationality in everyday life.

ian_mcewan's picture
Novelist; Recipient, the Man Booker Prize for Fiction; Author, Sweet Tooth; Solar; On Chesil Beach; Nutshell; Machines Like Me; The Cockroach

The question puts us in danger of resembling the man who only looks for his dropped watch under the light of a street lamp: the scientific concept that has the widest impact in our lives may not necessarily be the simplest. The Navier-Stokes equations date from 1822 and apply Newton's Second Law of Motion to viscous fluids. The range of applications is vast—in weather prediction, aircraft and car design, pollution and flood control, hydro-electric architecture, in the study of climate change, blood flow, ocean currents, tides, turbulence, shock waves and the representation of water in video games or animations.
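
For readers who want to see what is being celebrated, here is a standard modern statement of the equations for an incompressible viscous fluid (not Navier's or Stokes's original notation):

    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
        = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
    \qquad \nabla\cdot\mathbf{u} = 0,

where u is the velocity field, p the pressure, ρ the density, μ the viscosity, and f any body force such as gravity. The left-hand side is mass times acceleration for a parcel of fluid; the right-hand side is the sum of the forces acting on it, which is Newton's Second Law applied to a viscous fluid, just as described above. The second equation simply says the fluid is incompressible.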

The name of Claude-Louis Navier is to be found inscribed on the Eiffel Tower, whereas the Irishman, George Stokes, once president of the Royal Society, is not well known outside of maths and physics. Among many other achievements, Stokes laid the foundations of spectroscopy. It needs a John Milton of mathematics to come down among us and metamorphose the equations into lyrical English (or French) so that we can properly celebrate their ingenuity and enduring use, and revive the reputations of these two giants of nineteenth century science.

robert_plomin's picture
Professor of Behavioral Genetics, King's College London; Author, Blueprint

Polygenic scores are beginning to deliver personal genomics from the front lines of the DNA revolution. They make it possible to predict genetic risk and resilience at the level of the individual rather than at the level of the family, which has far-reaching implications for science and society.

Polygenic means many genes. Classical genetic studies over the past century have consistently supported Ronald Fisher’s 1918 theory that the heritability of common disorders and complex traits is caused by many genes of small effect. What had not been realized until recently was just how many and how small these effects are. Systematic gene-hunting studies began a decade ago using hundreds of thousands of DNA differences throughout the genome, called genome-wide association (GWA). The early goal was to break the 1% barrier, that is, to achieve the power to detect DNA associations that account for less than 1% of the variance of common disorders and complex traits. Samples in the tens of thousands were needed to detect such tiny effects, after correcting for the multiple testing of hundreds of thousands of DNA differences in a GWA study. A great surprise was that these GWA studies, powered to detect DNA associations that account for 1% of the variance, came up empty-handed.

GWA studies needed to break a 0.1% barrier, not just a 1% barrier. This requires samples in the hundreds of thousands. As GWA studies break that barrier, they are scooping up many DNA differences that contribute to heritability. But what good are DNA associations that account for less than 0.1% of the variance? The answer is "not much" if you are a molecular biologist wanting to study pathways from genes to brain to behavior because this means that there is a welter of minuscule paths.

Associations that account for less than 0.1% of the variance are also of no use for prediction. This is where polygenic scores come in. When psychologists create a composite score, like an IQ score or a score on a personality test, they aggregate many items. They don’t worry about the significance or reliability of each item because the goal is to create a reliable composite. In the same way, polygenic scores aggregate many DNA differences to create a composite that predicts genetic propensities for individuals.
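
A minimal sketch of one standard way such a composite is built (the variant names and effect sizes below are invented for illustration; real scores sum tens of thousands of such terms): the score is a weighted sum of a person's allele counts, with the weights estimated in a GWA study.

    # Toy polygenic score: a weighted sum of allele counts (0, 1, or 2 copies of
    # the effect allele) across many variants. All numbers here are invented.
    gwa_effect_sizes = {"rs0001": 0.020, "rs0002": -0.010, "rs0003": 0.005}
    allele_counts    = {"rs0001": 2,     "rs0002": 1,      "rs0003": 0}

    polygenic_score = sum(gwa_effect_sizes[snp] * allele_counts[snp]
                          for snp in gwa_effect_sizes)
    print(polygenic_score)   # roughly 0.03 for this made-up individual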

A new development in the last year is to go beyond aggregating a few genome-wide significant "hits" from GWA studies. The predictive power of polygenic scores can be increased dramatically by aggregating many more associations, whether or not each reaches genome-wide significance, as long as the resulting polygenic score accounts for more variance in an independent sample. Polygenic scores now often include tens of thousands of associations, underlining the extremely polygenic architecture of common disorders and complex traits.

Polygenic scores derived from GWA studies with sample sizes in the hundreds of thousands can predict substantial amounts of variance. For example, polygenic scores can account for 20% of the variance of height and 10% of the variance in UK national exam scores at the end of compulsory education. This is still a long way from accounting for the entire heritability of 90% for height and 60% for educational achievement—this gap is called missing heritability. Nonetheless, these predictions from an individual’s DNA alone are substantial. For the sake of comparison, the polygenic score for educational achievement is a more powerful predictor than the socioeconomic status of students’ families or the quality of their schools.

Moreover, predictions from polygenic scores have unique causal status. Usually correlations do not imply causation, but correlations involving polygenic scores imply causation in the sense that these correlations are not subject to reverse causation because nothing changes inherited DNA sequence variation. For the same reason, polygenic scores are just as predictive at birth or even prenatally as they are later in life. 

Like all important findings, polygenic scores have potential for bad as well as for good. Polygenic scores deserve to be high on the list of scientific terms that ought to be more widely known so that this discussion can begin.  

sean_carroll's picture
Theoretical Physicist, Caltech; Author, Something Deeply Hidden

You’re worried that your friend is mad at you. You threw a dinner party and didn’t invite them; it’s just the kind of thing they’d be annoyed about. But you’re not really sure. So you send them a text: “Want to hang out tonight?” Twenty minutes later you receive a reply: “Can’t, busy.” How are we to interpret this new information?

Part of the answer comes down to human psychology, of course. But part of it is a bedrock principle of statistical reasoning, known as Bayes’s Theorem.

We turn to Bayes’s Theorem whenever we’re uncertain about the truth of some proposition, and new information comes to light that affects the probability of that proposition being true. The proposition could be our friend’s feelings, or the outcome of the World Cup, or a presidential election, or a particular theory about what happened in the early universe. In other words: we use Bayes’s Theorem literally all the time. We may or may not use it correctly, but it’s everywhere.

The theorem itself isn’t so hard: the probability that a proposition is true, given some new data, is proportional to the probability it was true before that data came in, times the likelihood of the new data if the proposition were true.

So there are two ingredients. First, the prior probability (or simply “the prior”) is the probability we assign to an idea before we gather any new information. Second, the likelihood is the probability of collecting some particular piece of data if the idea is correct. Bayes’s theorem says that the relative probabilities for different propositions after we collect some new data are just the prior probabilities times the likelihoods.

Scientists use Bayes’s Theorem in a precise, quantitative way all the time. But the theorem—or really, the idea of “Bayesian reasoning” that underlies it—is ubiquitous. Before you sent your friend a text, you had some idea of how likely it was they were mad at you or not. You had, in other words, a prior for the proposition “mad” and another one for “not mad.” When you received their response, you implicitly did a Bayesian updating on those probabilities. What was the likelihood they would send that response if they were mad, and if they weren’t? Multiply by the appropriate priors, and you can now figure out how likely it is that they’re annoyed with you, given your new information.
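
In symbols, the update just described is P(mad | reply) ∝ P(mad) × P(reply | mad), and likewise for “not mad.” Here is a minimal sketch of that calculation, with priors and likelihoods invented purely for illustration:

    # Bayesian update for the text-message example; all probabilities are invented.
    prior      = {"mad": 0.3, "not mad": 0.7}   # beliefs before the reply arrives
    likelihood = {"mad": 0.8, "not mad": 0.4}   # P("Can't, busy" | hypothesis)

    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    posterior = {h: round(p / total, 2) for h, p in unnormalized.items()}
    print(posterior)   # {'mad': 0.46, 'not mad': 0.54}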

Behind this bit of dry statistical logic lurk two enormous, profound, worldview-shaping ideas.

One is the very notion of a prior probability. Whether you admit it or not, no matter what data you have, you implicitly have a prior probability for just about every proposition you can think of. If you say, “I have no idea whether that’s true or not,” you’re really just saying, “My prior is 50%.” And there is no objective, cut-and-dried procedure for setting your priors. Different people can dramatically disagree. To one, a photograph that looks like a ghost is incontrovertible evidence for life after death; to another, it’s much more likely to be fake. Given an unlimited amount of evidence and perfect rationality, we should all converge to similar beliefs no matter what priors we start with—but neither evidence nor rationality are perfect or unlimited.

The other big idea is that your degree of belief in an idea should never go all the way to either zero or one. It’s never absolutely impossible to gather a certain bit of data, no matter what the truth is—even the most rigorous scientific experiment is prone to errors, and most of our daily data-collecting is far from rigorous. That’s why science never “proves” anything; we just increase our credences in certain ideas until they are almost (but never exactly) 100%. Bayes’s Theorem reminds us that we should always be open to changing our minds in the face of new information, and tells us exactly what kind of new information we would need.

keith_devlin's picture
Mathematician; Executive Director, H-STAR Institute, Stanford; Author, Finding Fibonacci

When I graduated with a bachelor's degree in mathematics from one of the most prestigious university mathematics programs in the world (King's College London) in 1968, I had acquired a set of skills that guaranteed full employment, wherever I chose to go, for the then-foreseeable future—a state of affairs that had been in existence ever since modern mathematics began some three thousand years earlier. By the turn of the new Millennium, however, just over thirty years later, those skills were essentially worthless, having been very effectively outsourced to machines that did the work faster and more reliably, and were made widely available with the onset of first desktop- and then cloud-computing. In a single lifetime, then, I experienced a dramatic change in the nature of mathematics and how it played a role in society.

The shift began with the introduction of the digital arithmetic calculator in the 1960s, which rendered obsolete the need for humans to master the ancient art of mental arithmetical calculation. Over the succeeding decades, the scope of algorithms developed to perform mathematical procedures steadily expanded, culminating in the creation of desktop and cloud-based mathematical computation systems that can execute pretty well any mathematical procedure, solving—accurately and in a fraction of a second—any mathematical problem formulated with sufficient precision (a bar that allows in all the exam questions I and any other math student faced throughout our entire school and university careers).

So what, then, remains in mathematics that people need to master? The answer is, the set of skills required to make effective use of those powerful new (procedural) mathematical tools we can access from our smartphone. Whereas it used to be the case that humans had to master the computational skills required to carry out various mathematical procedures (adding and multiplying numbers, inverting matrices, solving polynomial equations, differentiating analytic functions, solving differential equations, etc.), what is required today is a sufficiently deep understanding of all those procedures, and the underlying concepts they are built on, in order to know when, and how, to use those digitally-implemented tools effectively, productively, and safely.

The most basic of those new skills is number sense. (The other important one is mathematical thinking. But whereas the latter is important only for those going into STEM careers, number sense is a crucial 21st century life-skill for everyone.) Descriptions of the term “number sense” generally run along the lines of “fluidity and flexibility with numbers, a sense of what numbers mean, and an ability to use mental mathematics to negotiate the world and make comparisons.” The well-known mathematics educator Marilyn Burns, in her book, About Teaching Mathematics, describes students with a strong number sense like this: “[They] can think and reason flexibly with numbers, use numbers to solve problems, spot unreasonable answers, understand how numbers can be taken apart and put together in different ways, see connections among operations, figure mentally, and make reasonable estimates.”

In 1989, the US-based National Council of Teachers of Mathematics identified the following five components that characterize number sense: number meaning, number relationships, number magnitude, operations involving numbers, and referents for numbers and quantities.

Though to outsiders, mathematics teaching designed to develop number sense can seem “fuzzy” and “imprecise,” it has been well demonstrated that children who do not acquire number sense early in their mathematics education struggle throughout their entire subsequent school and college years, and generally find themselves cut off from any career that requires some mathematical ability.

That outsiders’ misperception is understandable. Compared to the rigid, rule-based, right-or-wrong precision of the math taught in my schooldays, number sense (and mathematical thinking) do seem fuzzy and imprecise. But that fuzziness and imprecision are precisely what make them such important aspects of mathematics in an era where the rule-based, precise part is done by machines. The human brain compares miserably with the digital computer when it comes to performing rule-based procedures. But the human mind can bring something that computers cannot begin to do, and maybe never will: understanding. Desktop-computer and cloud-based mathematics systems provide useful tools to solve the mathematical aspects of real-world problems. But without a human in the driving seat, those tools are totally useless. And high among the “driving abilities” required to do that is number sense.

If you are a parent of a child in the K-12 system, there is today just one thing you should ensure your offspring has mastered in the math class by the time they graduate: number sense. Once they have that, any specific concept or procedure that you or they will find listed in the K-12 curriculum can be mastered quickly and easily as and when required. An analogous state of affairs arises at the college level, with the much broader mathematical thinking in place of number sense.

jennifer_jacquet's picture
Associate Professor of Environmental Studies, NYU; Author, Is Shame Necessary?

To understand earthquakes in Oklahoma, the Earth’s sixth mass extinction, or the rapid melting of the Greenland Ice Sheet, we need the Anthropocene—an epoch that acknowledges humans as a global, geologic force. The Holocene, a cozier geologic epoch that began 11,700 years ago with climatic warming (giving us conditions that, among other things, led to farming), doesn’t cut it anymore. The Holocene is outdated because it cannot explain the recent changes to the planet: the now 400 parts per million of carbon dioxide in the atmosphere from burning fossil fuels, the radioactive elements present in the Earth’s strata from detonating nuclear weapons, or that one in five species in large ecosystems is considered invasive. Humans caused nearly all of the 907 earthquakes in Oklahoma in 2015 as a result of the extraction process for oil and gas, part of which involves injecting saltwater, a byproduct, into rock layers. The Anthropocene is defined by the combination of these large-scale human impacts, and the concept gives us a sense of both our power and our responsibility.

In 2016, the Anthropocene Working Group, made up of 35 individuals, mostly scientists, voted that the Anthropocene should be formalized. This means a proposal should be taken to the overlords of geologic epochs, the International Commission on Stratigraphy, which will decide whether to formally adopt the Anthropocene into the official geological time scale.

Any epoch needs a starting point, and the Anthropocene Working Group favors a mid-20th century start date, which corresponds to the advent of nuclear technology and a global reach of industrialization, but it won’t be that simple. Two geologists, one of whom is the Chair of the International Commission on Stratigraphy, pointed out that units of geologic time are not defined only by their start date, but also by their content. They argue that the Anthropocene is more a prediction about what could appear in the future rather than what is currently here because, in geologic terms, it is “difficult to distinguish the upper few centimeters of sediment from the underlying Holocene.” At the same time that hardcore geologists are pushing back against formalizing the Anthropocene, a recent article in Nature argued that social scientists should be involved in helping determine the start-date of the Anthropocene. Social scientists’ involvement in delineating the geologic time scale would be unprecedented, but then again, so is this new human-led era.

Whether the geologic experts anoint it as an official epoch, enough of society has already decided the Anthropocene is here. Humans are a planetary force. Not since cyanobacteria has a single taxonomic group been so in charge. Humans have proven we are capable of seismic influence, of depleting the ozone layer, of changing the biology of every continent, but not, at least so far, that we are capable of living on any other planet. The more interesting questions may not be about whether the Anthropocene exists or when it began, but about whether we are prepared for this kind of control.

frank_wilczek's picture
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fundamentals

Complementarity is the idea that there can be different ways of describing a system, each useful and internally consistent, which are mutually incompatible. It first emerged as a surprising feature of quantum theory, but I, following Niels Bohr, believe it contains wisdom that is much more widely applicable.  

Here's how it works in quantum theory: The ultimate description of a system is its wave function, but the wave function is not something we can observe directly. It is like a raw material, which must be processed and sculpted in order to build something usable (that is, observable). There are many things we might choose to build from a lode of iron ore—a sword or a plowshare, for example. Both might be useful, in different circumstances. But either uses up the raw material, and precludes building the other. Similarly, we can process the wave function of a particle to predict things about its position, or alternatively to predict things about its velocity, but not both at the same time (Heisenberg's uncertainty principle).  

Quantum theory is complicated, but reality as a whole is even more complicated. In dealing with it, people can and do take many different approaches: scientific, legal, moral, artistic, religious, and others. Each of those approaches can be useful or rewarding in different circumstances. But they involve processing the full complexity of reality in radically different ways, which are often deeply incompatible (for an example, see below). Complementarity is the wisdom to recognize that fact, and to welcome it.

Walt Whitman, at the apex of his Song of Myself, embraced the spirit of complementarity, crowing "Do I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes."

An important example of complementarity arises around the concept of legal responsibility. Generally speaking we do not hold children or insane people responsible for otherwise criminal acts, because they cannot control their behavior. Yet science, on the face of it, suggests that human beings are physical objects, whose behavior is fully determined by physical laws. And that is a very useful perspective to take, if we want to design reading glasses or drugs, for example. But from that perspective, nobody really controls his behavior. The point is that the scientific description of human beings, in terms of the wave function of the quarks, gluons, electrons, and photons that make them up, is not a useful way to describe the way they act. The perception that we exercise will and make choices is based on a coarser but more usable description of humans and their motivations, which comes naturally to us, and guides our legal and moral intuitions.

Understanding the importance of complementarity stimulates imagination, because it gives us license to think different. It also suggests engaged tolerance, as we try to appreciate apparently strange perspectives that other people have come up with. We can take them seriously without compromising our own understandings, scientific and otherwise.

christine_finn's picture
Archaeologist; Journalist; Author, Artifacts, Past Poetic

I am writing this on the winter solstice. At 5.44am Eastern Standard Time I stood on a porch in Vermont and watched the sky. Nothing to discern that was tangible. But somewhere at ancient sites light passed over etched rock. Over millennia, ancient people, such as those once here on the banks of the Connecticut River, marked a transition. The science is there for the shortest day. But what emerges for me is a search for absolute acuteness: that nano point where some thing changes, and every thing changes. So, I am excited by liminality.

At a time which celebrates fuzziness and mergings and convergence I am also intrigued by that absolute movement from one stage to another, one which finesses so acutely, it has a point.

My concept to celebrate comes out of social science, but is mediated by ethnography and anthropology, and by archaeology too in the tangible remainders of transition. My motherlode is "Rites of Passage," or "Les Rites de Passage," by the prehistorian and ethnographer Arnold Van Gennep, published in 1909. When it was translated from French in the 1960s, it unleashed a torrent of new thinking. The British anthropologist Victor Turner (1920 - 1983) seized on the idea of "liminality." In Van Gennep's better-known thesis, we are all engaged in a process of separation, transition, and reincorporation; it was the transitional stage that held Turner. So the potency of the edge of things, the not-quite-ness, appears to dwell in the poetry of ambiguity, but it chimes with so much of the science that dwells in the periphery and in the stunning space of almost-ness.

As Turner suggested: "Prophets and artists tend to be liminal and marginal people, "edgemen," who strive with a passionate sincerity to rid themselves of the clichés associated with status incumbency and role-playing and to enter into vital relations with other men in fact or imagination. In their productions we may catch glimpses of that unused evolutionary potential in mankind which has not yet been externalized and fixed in structure."

nick_enfield's picture
Professor and Chair, Department of Linguistics, University of Sydney; Author, How We Talk

Suppose that two people witness a crime: one describes in words what they saw, while the other does not. When tested later on their memories of the event, the person who verbally described the incident will be worse at later remembering or recognizing what actually happened. This is verbal overshadowing. Putting an experience into words can result in failures of memory about that experience, whether it be the memory of a person’s face, the color of an object, or the speed that a car was going. This effect was discovered by the psychologist Elizabeth Loftus and her students in experiments exploring witness testimony and the malleability of human memory. The stakes are high. As Loftus has shown, imperfect memories can—and often do—put innocent people behind bars. Our memories define much of what we take to be real. Anything that interferes with memory interferes, effectively, with the truth.

The idea that describing something in words can have a detrimental effect on our memory of it makes sense given that we use words to categorize. To put things in the same category is, by definition, to set aside any information that would distinguish the things from each other. So, if I say “She has three dogs,” my act of labeling the three animals with the single word “dog” treats them as the same, regardless of likely differences in color, size, or breed. There are obvious advantages to words’ power to group things in this way. Differences between things are often irrelevant, and glossing over those differences allows us to reduce effort in both speaking and understanding language. But the findings of Loftus and colleagues suggest that this discarding of distinguishing information by labeling something in a certain way has the effect of overwriting the mental representation of the experience. When words render experience, specific information is not just left out, it is deleted.

Through verbal overshadowing, your words can change your beliefs, and so choice of words is not merely a matter of style or emphasis. And note that the effect is not just an effect of language, but always of a specific language, be it Arabic, German, Japanese, Zulu, or any of the other 6000 languages spoken in the world today. The facts of linguistic diversity suggest a striking implication of verbal overshadowing: that not just different words, but different languages, are distinct filters for reality.

samuel_arbesman's picture
Complexity Scientist; Scientist in Residence at Lux Capital; Author, Overcomplicated

Within the infinite space of computer programs is a special subset of code: programs that, when executed, output their own source code. In other words, these are a kind of self-replicating program; when you run them, they yield themselves. These short programs are referred to as quines, a term coined in Douglas Hofstadter’s Gödel, Escher, Bach: an Eternal Golden Braid in honor of the philosopher Willard Van Orman Quine.

When you first hear about quines, they can seem magical, and doubly so if you have ever done any coding, because without knowing the trick for creating them, they seem devilishly difficult to construct. They are often elegant little things, and there are now examples of quines in a huge number of computer languages. They range from short and sweet ones to others that are far more inscrutable to the untrained eye.
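
Here, for instance, is one of the short and sweet ones: a standard two-line quine in Python (indented here only for display). Run the two lines on their own and the program prints exactly its own source:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

The trick generalizes to most languages: the program stores a template of itself as data, then prints that template filled in with its own quoted form.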

But why are they important? Quines are a distillation of numerous ideas from computer science, linguistics, and much more. On a simple level, quines can be thought of as a sort of fixed point, an idea from mathematics: the value of a mathematical function that yields itself unchanged (think along the lines of how the square root of 1 is still 1).
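
To make the analogy concrete (a toy check of my own, not from the essay): the square-root function leaves 0 and 1 exactly where they are, so they are its fixed points.

    # Fixed points of sqrt: inputs the function returns unchanged.
    from math import isclose, sqrt
    print([x for x in (0.0, 0.5, 1.0, 2.0, 4.0) if isclose(sqrt(x), x)])  # [0.0, 1.0]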

But we can take this further. Quines demonstrate that language, when refracted through the prism of computation, can be both operator and operand—the text of a quine is run, and through a process of feedback upon itself, yields the original code. Text can be both words with meaning and “words with meaning.” Think of the sentence “This sentence has five words.” We are delighted by this because the words are both describing (acting as an operator) and being described (acting as an operand). But this playfulness is also useful. This relationship between text and function is a building block of Kurt Gödel’s work on incompleteness in mathematics, which is in turn related to Alan Turing’s approach to the halting problem. These foundational ideas demonstrate certain limitations in both mathematics and computer science: There are certain statements we cannot prove to be either true or false within a given system, and there is no algorithm that can determine whether any given computer program will ever stop running.

More broadly, quines also demonstrate that reproduction is not the distinct domain of the biological. Just as a cell exploits the laws of physics and chemistry in order to persist and duplicate, a quine coopts the rules of a programming language to persist when executed. While it doesn’t quite duplicate itself, there are similar principles at work. You can even see further hints of this biological nature in a “radiation hardened” quine: a type of quine where any character can be deleted, and it will still replicate! A radiation-hardened quine looks like gibberish, which is no doubt what the DNA sequence of a gene looks like to many of us. Redundancy and robustness, hallmarks of biology, yield similar structures in both the organic and the computational.

John von Neumann, one of the pioneers of computing, gave a great deal of thought to self-replication in machines, binding together biology and technology from the dawn of computation. We see this still in the humble quine, tiny snippets of computer code that by their very existence stitch domain after domain together. 

leo_m_chalupa's picture
Neurobiologist; Professor of Pharmacology and Physiology, George Washington University

Epigenetics is a term that has been around for more than a century, but its usage in the public domain has increased markedly in recent years. In the last decade or so there have been dozens of articles in newspapers (notably the New York Times) and magazines, such as the New Yorker, devoted to this topic. Yet when I queried ten people working in the research office of a major university, only one had a general sense of what the term meant, stating that it deals with “how experience influences genes.” Close enough, but the other nine had no idea, despite the fact that all were college graduates, two were lawyers and four others had graduate degrees. Not satisfied by the results of this unscientific sample, I asked an associate dean of a leading medical school how many first-year medical students knew the meaning of this term. He guessed that the majority would know, but at a subsequent lecture when he polled fifty students, only about a dozen could provide a cogent definition. So there you have it, another example of a lack of knowledge by the educated public of a hot topic among the scientific establishment.

What makes epigenetics important, and why is it so much in vogue these days? Its importance stems from the fact that it provides a means by which biological entities, from plants to humans, can be modified by altering gene activity without changes in the genetic sequence. This means that the age-old “nature versus nurture” controversy has been effectively obviated because experience (as well as a host of other agents) can alter gene activity, so the “either/or” thinking mode no longer applies. Moreover, there is now some tantalizing, but still preliminary evidence that changes in gene activity (induced in this case by an insecticide) can endure for a number of subsequent generations. What happens to you today can affect your great, great, great grandchildren!

As to why epigenetics is a hot research topic, the answer is that major progress is being made in understanding the underlying mechanisms by which gene activity can be modified by specific events. Currently, more than a dozen means by which gene expression or gene repression occurs have been documented. Moreover, epigenetic processes have been linked with early development, normal and pathological aging, as well as several disease states, including cancer. The hope is that a fuller understanding of epigenetics will enable us to control and reverse the undesirable outcomes, while enhancing those that we deem beneficial to us and to future generations.

timothy_taylor's picture
Jan Eisner Professor of Archaeology, Comenius University in Bratislava; Author, The Artificial Ape

When is a wine glass not a wine glass? This question fascinated the archaeological theorist David Clarke in the late 1960s, but his elegant solution, critical for a correct conceptualization of artefacts and their evolution over time, is shamefully ignored. Understanding polythetic entitation opens the door to a richer view of the built world, from doors to computers, cars to chairs, torches to toothbrushes. It is a basic analytical tool for understanding absolutely anything that people make. It indicates limits to the idea of the meme, and signals that it may be reasonable to consider the intentional patterning of matter by Homo sapiens as a new, separate kind of ordering in the universe.

Celebrating the end of our archaeological excavation season, someone tops up my glass with wine . . . except it is not a glass. That is, it is not made of glass; it is a clear plastic disposable object, with a stem. Nevertheless, when I put it down on a table to get some food and then cannot find it, I say, "Who took my glass?" In this context, I am effectively blind to its plasticness. But, with the situation transposed to an expensive restaurant and a romantic evening with my wife, I should certainly expect my glass to be made of glass.

Clarke argued that the world of wine glasses was different to the world of biology, where a simple binary key could lead to the identification of a living creature (Does it have a backbone? If so, it is a vertebrate. Is it warm blooded? If so, it is a mammal or bird. Does it produce milk? . . . and so on). A wine glass is a polythetic entity, which means that none of its attributes, without exception, is simultaneously sufficient and necessary for group membership. Most wine glasses are made of clear glass, with a stem and no handle, but there are flower vases with all these, so they are not definitionally-sufficient attributes; and a wine glass may have none of these attributes—they are not absolutely necessary. It is necessary that the wine glass be able to hold liquid and be of a shape and size suitable for drinking from, but this is also true of a teacup. If someone offered me a glass of wine, and then filled me a fine ceramic goblet, I would not complain.
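
A toy sketch of the idea (the attribute pool and threshold below are my own, not Clarke's formalism): membership in a polythetic class means sharing enough of a pool of typical attributes, while no single attribute is necessary or sufficient on its own.

    # Polythetic classification in miniature: enough shared attributes, none of them required.
    WINE_GLASS_POOL = {"made of glass", "has stem", "no handle",
                       "holds liquid", "sized for drinking"}

    def counts_as_wine_glass(attributes, pool=WINE_GLASS_POOL, threshold=3):
        return len(attributes & pool) >= threshold

    plastic_party_cup = {"has stem", "no handle", "holds liquid", "sized for drinking"}
    window_pane = {"made of glass"}
    print(counts_as_wine_glass(plastic_party_cup))  # True: a "glass" that is not glass
    print(counts_as_wine_glass(window_pane))        # False: glass, but not a glass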

We continually make fine-tuned decisions about which types of artefacts we need for particular events, mostly unconscious of how category definitions shift according to social and cultural contexts. A Styrofoam cup could hold wine, mulled wine, or hot soup, but a heat-proof, handled punch glass would not normally be used for soup, although it would function just as well. Cultural expectations allow a Styrofoam cup to be a wine glass at a student party but not on the lawn at Buckingham Palace; a wine glass in a Viennese café is often a stemless beaker, which would be unusual in London, where it would be read as a water- or juice-glass. And a British mulled wine glass in a metal holder, transposed to Russia, would not formally or materially differ from a tea glass to use with a samovar.

Our cultural insider, or emic, view of objects is both sophisticated and nuanced, but typically maps poorly onto the objectively measurable, multidimensional and clinal formal and material variance—the scientific analyst’s etic of polythetic entitation. Binary keys are no use here.

Asking at the outset whether an object is made of glass takes us down a different avenue from first asking if it has a stem, or if it is designed to hold liquid. The first lumps the majority of wine glasses with window panes; the second groups most of them with vases and table lamps; and the third puts them all into a super-category that includes breast implants and Lake Mead, the Hoover dam reservoir. None of the distinctions provides a useful classificatory starting point. So grouping artefacts according to a kind of biological taxonomy will not do.

As a prehistoric archaeologist David Clarke knew this, and he also knew that he was continually bundling classes of artefacts into groups and sub-groups without knowing whether his classification would have been recognized emically, that is, in terms understandable to the people who created and used the artefacts. Although the answer is that probably they did have different functions, how might one work back from the purely formal, etic, variance—the measurable features or attributes of an artefact—to securely assign it to its proper category? Are Bronze Age beakers, with all their subtypes, really beakers, and were they all used for the same purposes? Were they "memes" of one another, like the sort of coded repeatable information that enables the endless reproduction of individual computer-generated images? Or were some beakers non-beakers, with a distinct socially-acceptable deployment?

Clarke’s view clashes with the common-sense feeling we have about wine glasses having an essential wineglassness. Granted, there can be a Platonically-ideal wine glass if we so wish, but it is specific to times and places, as well as contextual expectation. Currently, the heartland territory of the wine glass is dominated by transparent stemmed drinking vessels made of real glass, but such memic simplicity blurs towards the multidimensional edges of the set as attribute configurations trend towards the sweet-points that mark the core, but never at-once-both-sufficient-and-necessary, attributes we expect in our classic ideas of cultural and technological objects. Clarke noted that the space–time systematics of polythetic attribute sets were extraordinarily complex, patterned at a level beyond our immediate grasp.

So the memic turns out simply to be the emic, a shorthand description only, and not part of a valid analytic once our cultural insider knowledge is removed. For the prehistoric archaeologist, the absence of such knowledge is axiomatic. Determining which attributes had original cultural salience, why and how, is endlessly challenging. Those who attempt it without polythetic entitation are flailing in the dark.

lawrence_m_krauss's picture
Theoretical Physicist; Foundation Professor, School of Earth and Space Exploration and Physics Department, ASU; Author, The Greatest Story Ever Told . . . So Far

Nothing feels better than being certain, but science has taught us over the years that certainty is largely an illusion. In science, we don’t "believe" in things, or claim to know the absolute truth. Something is either likely or unlikely, and we quantify how likely or unlikely. That is perhaps the greatest gift that science can give.

That uncertainty is a gift may seem surprising, but it is precisely for this reason that the scientific concept of uncertainty needs to be better and more broadly understood.

Quantitatively estimating uncertainties—the hallmark of good science—can have a dramatic effect on the conclusions one draws about the world, and it is the only way we can clearly counteract the natural human tendency to assume whatever happens to us is significant.

The physicist Richard Feynman was reportedly fond of saying to people, “You won’t believe what happened to me today!” and then he would add “Absolutely nothing!” We all have meaningless dreams night after night, but dream that a friend breaks their leg, and later hear that a cousin had an accident, and it is easy to assume some correlation. But in a big and old universe even rare accidents happen all the time. Healthy skepticism is required because the easiest person to fool in this regard is oneself.

To avoid the natural tendency to impute spurious significance, all scientific experiments include an explicit quantitative characterization of how likely it is that results are as claimed. Experimental uncertainty is inherent and irremovable. It is not a weakness of the scientific method to recognize this, but a strength.

There are two different sorts of uncertainties attached to any observation. One is purely statistical. Because no measurement apparatus is free from random errors, any sequence of measurements will vary over some range determined by the accuracy of the measurement apparatus, but also by the size of the sample being measured. Say a million people voting in an election go to the polling booth on two consecutive days and vote for exactly the same candidates on both days. Random counting errors mean that if the reported margin of difference was less than a few hundred votes, different candidates might be declared the winner on successive days.
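
A rough simulation of that point, with invented tallies and an invented per-ballot misreading rate: counting the very same million ballots twice gives answers that wobble by a couple of hundred votes, so a razor-thin reported margin can change sign from one count to the next.

    # Re-counting identical ballots with small random reading errors; all numbers invented.
    import random

    VOTES_A, VOTES_B = 500_150, 499_850   # true tallies: a margin of 300 out of a million
    ERROR_RATE = 0.01                     # assumed chance that any single ballot is misread

    def count_once():
        misread_a = sum(random.random() < ERROR_RATE for _ in range(VOTES_A))
        misread_b = sum(random.random() < ERROR_RATE for _ in range(VOTES_B))
        # each misread ballot is credited to the other candidate
        return (VOTES_A - misread_a + misread_b) - (VOTES_B - misread_b + misread_a)

    print([count_once() for _ in range(3)])  # reported margins scatter around 300 by roughly 200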

Take the recent "tentative" observation of a new particle at the Large Hadron Collider, which would have revolutionized our picture of fundamental physics. After several runs, calculations suggested the statistical likelihood that the result was spurious was less than 1%. But in particle physics, we can usually amass enough data to reduce the uncertainty to a much smaller level—less than one in a million—before claiming a discovery (this is not always possible in other areas of science). And this year, after more data was amassed, the signal disappeared.

There is a second kind of uncertainty, called systematic uncertainty, that is generally much harder to quantify. A scale, for example, might not be set to zero when no weight is on it. Experimenters can often test for systematic uncertainties by playing with their apparatus, readjusting the dials and knobs, and seeing what the effect is, but this is not always possible. In astronomy one cannot fiddle with the Universe. However, one can try to estimate systematic uncertainties in one's conclusions by exploring their sensitivity to uncertainties in the underlying physics that one uses to interpret the data.

Systematic uncertainties are particularly important when considering unexpected and potentially unlikely discoveries. Say, for example, in the election example I gave earlier, one discovered an error in the design of a ballot, so that selecting one candidate sometimes ended up being recorded as a vote for two candidates, in which case the ballot would be voided. Even a very small systematic error of this type could then overwhelm the result in any close election.

In 2014 the BICEP2 experiment claimed to observe gravitational waves from the earliest moments of the Big Bang. This could have been one of the most important scientific discoveries in recent times, if it were true. However, a later analysis discovered an unexpected source of background—dust in our own galaxy. When all the dust had settled, if you forgive the pun, it turned out that the observation had only a 92% likelihood of being correct. In many areas of human activity this would be sufficient to claim validity. But extraordinary claims require extraordinary evidence. So the cosmology community has decided that no such claim can yet be made.

Over the past several decades we have been able to refine the probabilistic arguments associated with the determination of likelihood and uncertainty, developing an area of mathematics called Bayesian analysis that has turned the science of determining uncertainty into one of the most sophisticated areas of experimental analysis. Here, we first fold in a priori estimates of likelihood, and then see how the evidence changes our estimates. This is science at its best: Evidence can change our minds, and it is better to be wrong than to be fooled.

In the public arena, scientists' inclusion of uncertainties has been used by some critics to discount otherwise important results. Consider the climate change debate. The evidence for human-induced climate change is neither controversial nor surprising. Fundamental physics arguments anticipate the observed changes. When the data show that the last sixteen years have been the warmest in recorded human history, when measured CO2 levels exceed those determined over the past 500,000 years, and when the West Antarctic ice sheet is observed to be melting at an unprecedented rate, the fact that responsible scientists report small uncertainties associated with each of these measurements should not discount the resulting threat we face.

Pasteur once said, “Fortune favors the prepared mind.” Incorporating uncertainties prepares us to make more informed decisions about the future. This does not obviate our ability to draw rational and quantitatively reliable conclusions on which to base our actions—especially when our health and security may depend on them.

gerd_gigerenzer's picture
Psychologist; Director, Harding Center for Risk Literacy, Max Planck Institute for Human Development; Author, How to Stay Smart in a Smart World

Ignorance is generally pictured as an unwanted state of mind, and the notion of deliberate ignorance may raise eyebrows. Yet people often choose to be ignorant, demonstrating a form of negative curiosity at odds with concepts such as ambiguity aversion, a general need for certainty, and the Bayesian principle of total evidence. Such behavior also contrasts with the standard belief that more knowledge and data are always preferred, expressed in various forms from Aristotle (“All men by nature desire to know”) to the view of humans as informavores to the mission of national surveillance programs.

Deliberate ignorance can be defined as the willful decision not to know the answer to a question of personal interest, even if the answer is free, that is, with no search costs. The concept differs from the study of agnotology or the sociology of ignorance, which investigates the systematic production of ignorance by deflecting, covering up, and obscuring knowledge, such as the tobacco industry’s efforts to keep people unaware of the evidence that smoking causes cancer. Deliberate ignorance, in contrast, is not inflicted by third parties but self-chosen. Yet why would people not want to know? The few existing studies and even fewer explanations suggest at least four motives.

The first is to avoid potentially bad news, particularly if no cure or other prevention is available. According to Greek mythology, Apollo granted Cassandra the power of foreseeing the future but added a curse that her prophecies would never be believed. Cassandra foresaw the fall of Troy, the death of her father, and her own murder; anticipating the approach of future horrors became a source of endless pain. Technological progress steadily shifts the line between what we cannot and what we can know in the direction of Cassandra’s powers. When having his full genome sequenced, James Watson, the co-discoverer of DNA, stipulated that his ApoE4 genotype, which indicates risk of Alzheimer’s disease, be both kept from him and deleted from his published genome sequence. Researchers claim to have discovered biomarkers that predict when a person will likely die and from what cause, while others claim to be able to predict whether a marriage will end in divorce. But do you want to know the date of your death? Or whether you should soon consult a divorce lawyer? The few available studies indicate that 85-90% of the general public do not want to know the details surrounding their death and marital stability. Yet unlike the curse condemning Cassandra to foresee the future, technological progress means that we will increasingly often have to decide how much foresight we want.

The second motive is to maintain surprise and suspense. Depending on the country, some 30-40% of parents do not want to know the sex of their unborn child, even after prenatal ultrasound or amniocentesis. For these parents, knowing the answer would destroy their pleasurable feeling of being surprised, a feeling that appears to outweigh the benefit of knowing and being able to better plan ahead.

A third motive is to profit strategically from remaining ignorant, as proposed by economist Thomas Schelling in the 1950s. The game of chicken is an example: people walking through the street staring at their smartphones and ignoring the possibility of a collision, thereby forcing others to do the work of paying attention. Similarly, it has been argued that since the crisis of 2008, bankers and policymakers strategically display blindness in order to ignore the risks in which they continue to engage and to stall effective reform.

Finally, deliberate ignorance is used as a tool for achieving fairness and impartiality. In keeping with Lady Justice, who is often depicted as wearing a blindfold, many U.S. courts do not admit evidence about a defendant’s criminal record. The idea is that the jury should remain ignorant about previous crimes in order to reach an impartial verdict. Rawls’ “veil of ignorance” is another form of ignorance in service of fairness.

Despite these insights, however, the phenomenon of deliberate ignorance has been largely treated as an oddity. Science and science fiction celebrate the value of prediction and total knowledge through Big Data analytics, precision medicine, and surveillance programs largely unquestioned. However, as for Cassandra, foreknowledge may not suit every person’s emotional fabric. How we decide between wanting and not wanting to know is a topic that calls for more scientific attention and greater curiosity.

douglas_rushkoff's picture
Media Analyst; Documentary Writer; Author, Throwing Rocks at the Google Bus

The time-is-money ethos of the Industrial Age and wage labor, combined with the generic quality of computerized time-keeping and digital calendars, has all but disconnected us from the temporal rhythms on which biological life has oriented itself for millennia. Like all organisms, the human body has evolved to depend on the cyclical ebbs and flows of light, weather, and even the gravitational pull of the moon in order to function effectively.

But our culture and its technologies are increasingly leading us to behave as if we can defy these cycles—or simply ignore them completely. We fly ten time zones in as many hours, drink coffee and take drugs to wake ourselves, pop sedatives to sleep, and then take SSRIs to counter the depression that results. We schedule our work and productivity oblivious to the way the lunar cycle influences our moods and alertness, as well as those of our students, customers, and workforces.

Chronobiology is the science of the biological clocks, circadian rhythms, and internal cycles that regulate our organs, hormones, and neurotransmitters. And while most of us know that it’s likely healthier to be active during the day and sleep at night, we still tend to act as if any moment were as good as any other, for anything. It’s not.

For instance, new research suggests that our dominant neurotransmitters change with each of the four weeks of a lunar cycle. The first week of a new moon brings a surge of acetylcholine; the next brings serotonin; then comes dopamine, and finally norepinephrine. During a dopamine week, people would tend to be more social and relaxed, while norepinephrine would make people more analytic. A serotonin week might be good for work, and an acetylcholine week should be full of pep.

Ancient cultures learned of these sorts of cycles through experience—trial and error. They used specific cyclical schedules for everything from planting and harvesting to rituals and conflict. But early science and its emphasis on repeatability treated all time as the same, and saw chronobiology as closer to astrology than physiology. The notion that wood taken from trees dries faster if it is cut at a particular time in the lunar cycle when its sap is at “low tide” seemed more like witchcraft than botany.

But like trees, we humans are subject to the cycles of our biological clocks, most of which use external environmental cues to set themselves. Divorced from these natural cues, we experience the dis-ease of organ systems that have no way to sync up, and an increased dependence on artificial signals for when to do what. We become more at the mercy of artificial cues—from news alerts to the cool light of our computer screens—for a sense of temporality.

If we were to become more aware of chronobiology, we wouldn’t necessarily have to obey all of our evolutionary biases. Unlike our ancestors, we do have light to read at night, heat and air-conditioning to insulate ourselves from the cycle of the seasons, and 24-7 businesses that cater to people on irregular schedules. But we would have the opportunity to reacquaint ourselves with the natural rhythms of our world and the grounding, orienting sensibilities that come with operating in sync or harmony with them.

A rediscovery and wider acknowledgment of chronobiology would also go a long way toward restoring the solidarity and interpersonal connection so many of us are lacking without it. As we all became more aware and respectful of our shared chronobiology, we would be more likely to sync up or even “phase lock” with one another, as well. This would allow us to recover some of the peer-to-peer solidarity and social cohesiveness that we’ve lost to a culture that treats time like a set of flashing numbers instead of the rhythm of life.

paul_saffo's picture
Technology Forecaster; Consulting Associate Professor, Stanford University

Toss a mouse from a building. It will land, shake itself off and scamper away. But if similarly dropped, “… a rat is killed, a man is broken, a horse splashes.” So wrote J.B.S. Haldane in his 1926 essay "On Being the Right Size." Size matters, but not in the way a city-stomping Godzilla or King Kong might hope.

Every organism has an optimum size and a change in size inevitably leads to a change in form. Tiny lizards dance weightlessly up walls, but grow one to Godzilla size, and the poor creature would promptly collapse into a mush of fractured bones and crushed organs. This principle is not just a matter of extremes: A hummingbird scaled to the size of a blue jay would be left hopelessly earthbound, fluttering its wings in the dust.

If gravity is the enemy of the large, surface tension is the terror of the small. Flies laugh at gravity, but dread water. As Haldane notes, “An insect going for a drink is in as great danger as a man leaning out over a precipice in search of food.” No wonder most insects are either unwettable, or do their drinking at a distance through a straw-like proboscis.

Thermoregulation is an issue for all organisms, which is why arctic beasts tend to be large while tropical critters are small. Consider a sphere: As it grows, the interior volume increases faster than the surface area of its outer skin. Small animals have lots of surface area relative to their volume, an advantage in the Torrid Zone where survival depends on efficient cooling. Back up in the arctic, the same ratio works in reverse: Large beasts rely on a lower surface-to-volume ratio to help stay warm.
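
The arithmetic behind that, in a quick sketch: a sphere's surface area grows with the square of its radius but its volume grows with the cube, so the surface-to-volume ratio (3/r for a sphere) falls as the body gets bigger.

    # Surface-to-volume ratio of a sphere shrinks as the sphere grows (ratio = 3 / r).
    from math import pi

    for r in (1, 2, 4, 8):
        surface = 4 * pi * r ** 2
        volume = (4 / 3) * pi * r ** 3
        print(r, round(surface / volume, 2))   # prints 3.0, 1.5, 0.75, 0.38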

The power of Haldane’s rule is that it applies to far more than just organisms. Hidden laws of scale stalk humankind everywhere we turn. As with birds, the minimum power an aircraft requires to stay in flight increases faster than its weight. This is why large birds soar instead of flapping their wings—and why Howard Hughes’ Spruce Goose never got more than a few feet off the water.

Size inevitably comes at a cost of ever-greater complexity. In Haldane’s words, “Comparative anatomy is largely the story of the struggle to increase surface in proportion to volume.” Which is why intestines are coiled and human lungs pack in a hundred square yards of surface area. Complexity is a Goldilocks tool for the large, widening the zone of “just right.”

But complexity can expand the envelope of “just right” only so far before bad things happen. Like the engine on an underpowered aircraft, the cost can be catastrophic. Everything from airplanes to institutions has an intrinsic right size, which we ignore at our peril. The 2008 banking crisis taught us that companies and markets are not exempt from Haldane’s rule. But we got the lesson backwards: It wasn’t a case of “too big to fail,” but rather “too big to succeed.” One cannot help but fret that in their globe-spanning success, mega-companies are flirting with the unforgiving limits of right size.

Our political institutions also cannot escape the logic of Haldane’s rule. The Greeks concluded that their type of democracy worked best in a unit no larger than a small city. Haldane adds that the English invention of representative government made possible a scale-up to large stable nation-states. Now it seems that the US and other nations are growing beyond their right size for their political systems. Meanwhile, globalization is stalling in a cloud of conflict and confusion precisely because no workable political structure right-sized for the entire planet exists. The turbulence of 2016 very likely is a mere prologue to more wrenching shifts ahead.

Haldane wrote decades before the advent of globalization and digital media, but his elegant rule of right size hovers behind our biggest challenges today. Can mega-cities be sustained? Have social networks scaled beyond the optimum size for sensible exchange? Has cyberspace become so large and complexly interdependent that it is at risk of catastrophic scale-driven failure? Is our human population outstripping its right size given the state of our planetary systems? As we face these and the myriad other surprises to come, we would do well to remember that mice bounce, horses splash—and size truly matters.

abigail_marsh's picture
Associate Professor of Psychology, Georgetown University

To alloparent is to provide care for offspring that are not your own. It is a behavior that is unimaginable for most species (few of which even care for their own offspring), rare even among relatively nurturant classes of animals like birds and mammals, and central to the existence of humankind. The vigor and promiscuity with which humans in every culture around the world alloparent stand in stark contrast to widespread misconceptions about who we are and how we should raise our children.

Humans’ survival as a species over the last 200,000 years has depended on our motivation and ability to care for one another’s children. Our babies are born as helpless and needy as it is possible for a living creature to be. The actress Angelina Jolie was once derided for describing her newborn as a “blob,” but she wasn’t far off. Human infants arrive into the world unable to provide the smallest semblance of care for themselves. Worse, over a decade will pass before a human child becomes self-sufficient—a decade during which that child requires intensive, around-the-clock feeding, cleaning, transport, protection, and training in hundreds or thousands of skills. No other species is on the hook for anywhere near the amount of care that we humans must provide our children.

Luckily, evolution never meant for us to do it alone. As the anthropologist Sarah Hrdy has described, among foraging cultures that best approximate our ancestral conditions, human babies never rely on only one person, or even two people, for care. Instead they are played with, protected, cleaned, transported, and fed (even nursed) by a wide array of relatives and other group members—as many as twenty different individuals every day, in some cases. And the more alloparenting children get, the more likely they are to survive and flourish.

You would never know any of this from reading most modern books on child development or childrearing. Attachment to and responsive care from a single primary caregiver (invariably the mother) is nearly always portrayed as the critical ingredient for a child’s optimal development. When fathers or other caregivers are mentioned at all, their impact is described as neutral at best. The implicit message is that for a baby to spend significant time apart from the mother in the care of other caregivers, like babysitters or daycare providers, is unnatural and potentially harmful.

But the opposite is more likely true. As the historian Stephanie Coontz has put it, human children “do best in societies where childrearing is considered too important to be left entirely to parents.” When children receive care from a network of loving caregivers, not only are mothers relieved of the nearly impossible burden of caring for and rearing a needy human infant alone, but their children gain the opportunity to learn from an array of supportive adults, to form bonds with them, and to learn to love and trust widely rather than narrowly.

Children are not the only beneficiaries of humans’ fulsome alloparenting capacities. Across primate species, the prevalence of alloparenting is also the single best predictor of a behavior that theories portraying human nature as motivated strictly by rational self-interest struggle to explain: altruism. Not reciprocal altruism or altruism toward close kin (which are self-interested) but costly acts of altruism for unrelated others, even strangers. This sort of altruism can seem inexplicable according to dominant accounts of altruism like reciprocity and kin selection. But it is perfectly consistent with the idea that, as alloparents sine qua non, humans are designed to be attuned to and motivated to care for a wide array of needy and vulnerable others. Altruism for one another is likely an exaptation of evolved neural mechanisms that equip us to alloparent.

Remember this if you are ever tempted to write off all humanity as a lost cause. We have our flaws, without a doubt, but we can also lay claim to being the species shaped by evolution to possess the most open hearts and the most abundant capacity for care on earth.

James Geary
Deputy Curator, Nieman Foundation for Journalism at Harvard; Author, Wit's End

Charles Lamb once remarked that, when the time came for him to leave this earth, his fondest wish would be to draw his last breath through a pipe and exhale it in a pun. And he was indeed a prodigious punster. Once, when a friend, about to introduce the notoriously shy English essayist to a group of strangers, asked him, “Promise, Lamb, not to be so sheepish,” he replied, “I wool.”

Lamb and his close friend Samuel Taylor Coleridge shared a passion for punning, not just as a fireside diversion but as a model for the way the imaginative mind works. “All men who possess at once active fancy, imagination, and a philosophical spirit, are prone to punning,” Coleridge declared.

Coleridge considered punning an essentially poetic act, exhibiting sensitivity to the subtlest, most distant relationships, as well as an acrobatic exercise of intelligence, connecting things formerly believed to be unconnected. “A ridiculous likeness leads to the detection of a true analogy” is the way he explained it. The novelist and cultural critic Arthur Koestler picked up Coleridge’s idea and used it as the basis for his theory of creativity—in the sciences, the humanities, and the arts.

Koestler regarded the pun, which he described as “two strings of thought tied together by an acoustic knot,” as among the most powerful proofs of “bisociation,” the process of discovering similarity in the dissimilar that he suspected was the foundation for all creativity. A pun “compels us to perceive the situation in two self-consistent but incompatible frames of reference at the same time,” Koestler argued. “While this unusual condition lasts, the event is not, as is normally the case, associated with a single frame of reference, but bisociated with two.”

For Koestler, the ability to simultaneously view a situation through multiple frames of reference is the source of all creative breakthroughs.

Newton was bisociating when, as he sat in contemplative mood in his garden, he watched an apple fall to the ground and understood it as both the unremarkable fate of a piece of ripe fruit and a startling demonstration of the law of gravity. Cézanne was bisociating when he rendered his astonishing apples as both actual produce arranged so meticulously before him and as impossibly off-kilter objects that existed only in his brushstrokes and pigments. Saint Jerome was bisociating when, translating the Old Latin Bible into the simpler Latin Vulgate in the 4th century, he noticed that the Latin adjective for "evil," malus, is all but identical to the word for "apple," malum, and chose that fruit as the previously unidentified one Adam and Eve ate.

There is no sharp boundary splitting the bisociation experienced by the scientist from that experienced by the artist, the sage or the jester. The creative act moves seamlessly from the "Aha!" of scientific discovery to the "Ah…" of aesthetic insight to the "Ha-ha" of the pun and the punchline. Koestler even found a place for comedy on the bisociative spectrum of ingenuity: “Comic discovery is paradox stated—scientific discovery is paradox resolved.” Bisociation is central to creative thought, Koestler believed, because “The conscious and unconscious processes underlying creativity are essentially combinatorial activities—the bringing together of previously separate areas of knowledge and experience.”

Bisociation is a form of improvised, recombinant intelligence that integrates knowledge and experience, fuses divided worlds, and links the like with the unlike—a model and a metaphor for the process of discovery itself. The pun is at once the most profound and the most pedestrian example of bisociation at work.

Steve Omohundro
Scientist, Self-Aware Systems; Co-founder, Center for Complex Systems Research

If something doesn't make sense, your go-to hypothesis should be "costly signalling." The core idea is more than a century old, but new wrinkles deserve wider exposure. Veblen's "conspicuous consumption" explained why people lit their cigars with $100 bills as a costly signal of their wealth. Later economists showed that a signal of a hidden trait becomes reliable if the cost of faking it is more than the expected gain. For example, Spence showed that college degrees (even in irrelevant subjects) can reliably signal good future employees because they are too costly for bad employees to obtain.

Darwin said, "The sight of a feather in a peacock's tail, whenever I gaze at it, makes me sick!" because he couldn't see its adaptive benefit. It makes perfect sense as a costly signal, however, because the peacock has to be quite fit to survive with a tail like that! Why do strong gazelles waste time and energy "stotting" (jumping vertically) when they see a cheetah? Stotting is a costly signal of their strength, so the cheetahs chase other gazelles instead. Biologists came to accept the idea only in 1990 and now apply it to signalling between parents and offspring, predator and prey, males and females, siblings, and many other relationships.

Technology is just getting on the bandwagon. The integrity of the cryptocurrency "bitcoin" is maintained by bitcoin "miners" who get paid in bitcoin. The primary deception risk is "sybil attacks," where a single participant pretends to be many miners in an attempt to subvert the network's integrity. Bitcoin counters this by requiring miners to solve costly cryptographic puzzles in order to add blocks to the blockchain. Bitcoin mining currently burns up a gigawatt of electricity, which is about a billion dollars a year at US rates. Venezuela is in economic turmoil and some starving citizens are resorting to breaking into zoos to eat the animals. At the same time, enterprising Venezuelan bitcoin miners are using the cheap electricity there to earn $1200 per day. Notice the strangeness of this: By proving they have uselessly burned up precious resources, they cause another country to send them food!
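Bitcoin's puzzle is, at heart, a brute-force search for a nonce whose hash falls below a difficulty target. The toy Python sketch below is not Bitcoin's actual scheme (real mining double-hashes a specific block header, and the real difficulty is vastly higher); it is only meant to show why the signal is costly: the one way to produce a valid nonce is to burn compute.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int):
    """Toy proof-of-work: find a nonce so that SHA-256(block_data + nonce)
    starts with `difficulty_bits` zero bits. Expected cost grows as 2**difficulty_bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

# Hypothetical block contents; 20 leading zero bits means roughly a million
# hash attempts on average, which a laptop can do in seconds.
nonce, digest = mine("example block", 20)
print(nonce, digest)
```

Verifying the answer takes one hash, while finding it takes millions, which is exactly what makes the work a hard-to-fake signal.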

As a grad student at Berkeley, I used to wonder why a preacher would often preach on the main plaza. Every time, he would be harassed by a large crowd, and I never saw him gain any converts. Costly signalling explains it: Preaching to that audience was a much better signal of his faith and commitment than preaching to a more receptive audience would have been. In fact, the very antagonism of his audience increased the cost, and therefore the reliability, of his signal.

A similar idea is playing out today in social media. Scott Alexander points out that the animal rights group PETA is much better known than the related group Vegan Outreach. PETA makes outrageous statements and performs outrageous acts which generate a lot of antagonism and are therefore costly signals. They have thrown red paint on women wearing furs and offered to pay Detroit water bills for families who agree to stop eating meat. They currently have a campaign to investigate the Australian who punched a kangaroo to rescue his dog. Members who promote ambiguous or controversial positions signal their commitment to their cause in a way that more generally accepted positions would not. For example, if they had a campaign to prevent the torture of kittens, everyone would agree and members wouldn't have a strong signal of their commitment to animal rights.

This connects to meme propagation in an interesting way. Memes that everyone agrees with typically don't spread very far because they don't signal anything about the sender. Nobody is tweeting that "2+2=4." But controversial memes make a statement. They cause people with an opposing view to respond with opposing memes. As CGP Grey beautifully explained, opposing memes synergistically help each other to spread. They also create a cost for the senders in the form of antagonistic pushback from believers in the opposing meme. But from the perspective of costly signalling, this is good! If you have enemies attacking you for your beliefs, you had better demonstrate your belief and commitment by spreading them even more! Both sides get this boost of reliable signalling and are motivated to intensify the meme wars.

One problem with all this costly signalling is that it's costly! Peacocks would do much better if they didn't have to waste resources on their large tails. Bitcoin would be much more efficient if it didn't burn up the electricity of a small country. People could be more productive if they didn't have endless meme wars to demonstrate their commitments.

Technology may be able to help us with this. If hidden traits could be reliably communicated, there would be no need for costly signals. If a peacock could show a peahen his genetic sequence, she wouldn't have to choose him based on his tail. If wealthy people could reliably reveal their bank accounts, they wouldn't need luxury yachts or fancy cars. If bitcoin miners could show they weren't being duplicitous, we could forget all those wasteful cryptographic puzzles. With proper design, AI systems can reliably reveal what's actually on their minds. And as our understanding of biology improves, humans may be able to do the same. If we can create institutions and infrastructure to support truthful communication without costly signalling, the world will become a much more efficient place. Until then, it's good to be aware of costly signalling and to notice it acting everywhere!

Bruno Giussani
European Director and Global Curator, TED

It is not clear whether it was rice or wheat. We're also not sure of the origin of the story, for there are many versions. But it goes something like this: A king was presented with a beautiful game of chess by its inventor. So pleased was the king that he asked the inventor to name his own reward. The inventor modestly asked for some rice (or wheat). The exact quantity would be calculated through the simplest formula: put a grain on the first square, two on the second, four on the third, and so on doubling the number of grains until the sixty-fourth and last square. The king readily agreed, before realizing that he had been deceived. By mid-chessboard, his castle was barely big enough to contain the grains, and just the first square of the other half would again double that.

The story has been used by everyone from 13th-century Islamic scholars to scientist/author Carl Sagan to social media videographers to explain the power of exponential sequences, where things begin small, very small, but then, once they start growing, they grow faster and faster—to paraphrase Ernest Hemingway: they grow slowly, then suddenly.

The idea of "exponential" and its ramifications ought to be better known and understood by everyone (and the chessboard fable is a useful metaphor) because we live in an exponential world. It has been the case for a while, actually. However, so far we have been in the first half of the board—and things are radically accelerating now that we're entering its second half.

The "second half of the chessboard" is a notion put forth by Ray Kurzweil in 1999 in his book The Age of Spiritual Machines. He suggests that while exponentiality is significant in the first half of the board, it is when we approach the second half that its impacts become massive, things get crazy, and the acceleration starts to elude most humans' imagination and grasp.

Fifteen years later, in The Second Machine Age, Andrew McAfee and Erik Brynjolfsson looked at Kurzweil's suggestion in relation to Moore's law. Gordon Moore is a cofounder of Fairchild and Intel, two of the pioneering companies of Silicon Valley. Reflecting on the first few years of development of silicon transistors, in 1965 Moore predicted that computing power would double roughly every eighteen to twenty-four months for a given cost (I'm simplifying here). In other words, that it would grow exponentially. His prediction held for decades, with huge technological and business impacts, although the pace has been slowing a bit in recent years—it's worth mentioning that Moore's was an insight, not a physical law, and that we're likely moving away from transistors toward quantum computing, which performs calculations with quantum states rather than transistors.

McAfee and Brynjolfsson reckon that if we put the starting point of Moore's law in 1958, when the first silicon transistors were commercialized, and we follow the exponential curve, in digital technology terms we entered the second half of the chessboard sometime around 2006 (for context, consider that the first mapping of the human genome was completed in 2003, the operating systems for our smartphones of today were launched in 2007, a decade after IBM's Deep Blue beat Garry Kasparov at chess, and scientists at Yale created the first solid-state quantum processor in 2009).

Hence we find ourselves right now somewhere in the first, maybe in the second square of the second half of the chessboard. This helps make sense of the dramatically fast advances that we see happen in science and technology, from smartphones and language translation and the blockchain to big analytics and self-driving vehicles and artificial intelligence, from robotics to sensors, from solar cells to biotech to genomics and neuroscience and more.

While each of these fields is by itself growing exponentially, their combinatorial effect, the accelerating influence that each has on the others, is prodigious. Add to that the capacity for self-improvement of artificial intelligence systems, and we are talking about almost incomprehensible rates of change.

To stay with the original metaphor, this is what entering the second half of the chessboard means: Until now we were accumulating rice grains at an increasingly fast pace, but we were still within the confines of the king's castle. The next squares will inundate the city, and then the land and the world. And there are still thirty-two squares to go, so this is not going to be a brief period of transformation. It is going to be a long, deep, unprecedented upheaval. These developments may lead us to an age of abundance and a tech-driven renaissance, as many claim and/or hope, or down an uncontrollable dark hole, as others fear.

Yet we still live for the most part in a world that doesn't understand, and is not made for, exponentiality. Almost every structure and method we have developed to run our societies—governments, democracy, education and healthcare systems, legal and regulatory frameworks, the press, companies, security and safety arrangements, even science management itself—is designed to function in a predictable, linear world, and it interprets sudden spikes or downturns as crises. It is therefore not surprising that the exponential pace of change is causing all sorts of disquiet and stresses, from political to social to psychological, as we are witnessing almost daily.

How do we learn to think exponentially without losing depth, careful consideration and nuance? How does a society function in a second-half-of-the-chessboard reality? What do governance and democracy mean in an exponential world? How do we rethink everything from education to legal frameworks to notions of ethics and morals?

It starts with a better understanding of exponents and of the metaphor of the "second half of the chessboard." And with applying "second-half thinking" to pretty much everything. 

Carlo Rovelli
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, Helgoland; There Are Places in the World Where Rules Are Less Important Than Kindness

Everybody knows what “information” is. It is the stuff that overabounds online, that you ask the airport kiosk for when you don’t know how to get downtown, or that is stored on your USB sticks. It carries meaning. Meaning is interpreted in our heads, of course. So, is there anything out there which is just physical, independent of our heads, which is information?

Yes. It is called “relative information.” In nature, variables are not independent; for instance, in any magnet, the two ends have opposite polarities. Knowing one amounts to knowing the other. So we can say that each end “has information” about the other.  There is nothing mental in this; it is just a way of saying that there is a necessary relation between the polarities of the two ends. We say that there is "relative information" between two systems anytime the state of one is constrained by the state of the other. In this precise sense, physical systems may be said to have information about one another, with no need for a mind to play any role. 
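One standard way to quantify this constraint, though the essay does not spell it out, is mutual information: two variables carry information about each other exactly when their joint distribution is not the product of their marginals. A minimal Python sketch for the magnet example, assuming the two polarities are equally likely and always opposite:

```python
from math import log2

# Joint distribution of the two ends' polarities (N or S): if one end is N,
# the other is S, and vice versa, each configuration with probability 0.5.
joint = {("N", "S"): 0.5, ("S", "N"): 0.5}
px = {"N": 0.5, "S": 0.5}  # marginal distribution of end A
py = {"N": 0.5, "S": 0.5}  # marginal distribution of end B

mutual_info = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
print(mutual_info)  # 1.0 bit: knowing one end fully determines the other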

Such "relative information" is ubiquitous in nature: The color of the light carries information about the object the light has bounced from; a virus has information about the cell it may attach; and neurons have information about one another. Since the world is a knit tangle of interacting events, it teams with relative information.  

When this information is exploited for survival, extensively elaborated by our brain, and maybe coded in a language understood by a community, it becomes mental, and it acquires the semantic weight that we commonly attribute to the notion of information. 

But the basic ingredient is down there in the physical world: physical correlation between distinct variables. The physical world is not a set of self-absorbed entities that do their selfish things. It is a tightly knitted net of relative information, where everybody’s state reflects somebody else’s state. We understand physical, chemical, biological, social, political, astrophysical, and cosmological systems in terms of these nets of relations, not in terms of individual behavior. Physical relative information is a powerful basic concept for describing the world. Before “energy,” “matter,” or even “entity.”

This is why saying that the physical world is just a collection of elementary particles does not capture the full story. The constraints between them create the rich web of reciprocal information. 

Twenty-four centuries ago Democritus suggested that everything could be just made of atoms. But he also suggested that the atoms are “like the letters of the alphabet”: There are only twenty or so letters but, as he puts it, “It is possible for them to combine in diverse modes, in order to produce comedies or tragedies, ridiculous stories or epic poems.” So is nature: Few atoms combine to generate the phantasmagoric variety of reality. But the analogy is deeper: The atoms are like an alphabet because the way in which they are arranged is always correlated with the way other atoms are arranged. Sets of atoms carry information.

The light that arrives at our eyes carries information about the objects which it has played across; the color of the sea has information on the color of the sky above it; a cell has information about the virus attacking it; a new living being has plenty of information because it is correlated with its parents, and with its species; and you, dear reader, reading these lines, receive information about what I am thinking while writing them, that is to say, about what is happening in my mind at the moment in which I write this text. What occurs in the atoms of your brain is not any more independent from what is happening in the atoms of mine: we communicate. 

The world isn’t just a mass of colliding atoms; it is also a web of correlations between sets of atoms, a network of reciprocal physical information between physical systems.

Gino Segrè
Professor of Physics & Astronomy, University of Pennsylvania; Author, The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age

Humans as well as most other mammals detect, from birth onward, electromagnetic radiation in the form of visible light. But it was not until the 1887 experiments by Heinrich Hertz that scientists accepted that what they were seeing was nothing but a frequency band of the electromagnetic radiation generated by accelerating electric charges. These experiments confirmed the prediction James Clerk Maxwell had made less than three decades earlier.

The realization opened the door to studying electromagnetic radiation of all frequencies, from radio waves to X-rays and beyond. It also stimulated scientists to ask if an entirely different form of radiation might ever possibly be observed. Accelerating masses rather than electric charges would be its source and it would be called gravitational radiation.

Maxwell’s equations had shown what form electromagnetic radiation would take. Albert Einstein’s 1916 general theory of relativity offered a prediction for gravitational radiation. But since the gravitational force is so much weaker than the electromagnetic one, there was doubt that a direct observation of this radiation would ever be possible.

But the seemingly impossible has taken place! In an experiment of almost unimaginable difficulty, LIGO (the Laser Interferometer Gravitational-Wave Observatory) reported detecting, on September 14th, 2015, at 9:50:45 GMT (Greenwich Mean Time), a signal that corresponded to two black holes 1.3 billion light-years away, one with a little more than thirty solar masses and the other a little less, spiraling into one another to form a larger black hole. Theory predicted that the equivalent of three solar masses had been emitted as gravitational waves in the last second of the two black holes’ death-spiral.

The general theory of relativity tells us that gravity is a curvature in space caused by the presence of masses and energy. The signal of gravitational radiation, or gravitational waves, is therefore a deformation of the space the waves pass through, including the space occupied by a detector. The LIGO detectors, one near Livingston, Louisiana, and the other near Richland, Washington, consist of L-shaped structures containing vacuum tubes in which laser light travels back and forth along the two arms.

In distorting space, the gravitational waves from the spiraling black holes altered the relative length of the detector’s two arms by approximately a billionth of a billionth of a meter, an incredibly small distance but sufficient to change the interference pattern of the laser light recombining at the detector’s nexus.
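To put that number in perspective, here is a rough back-of-the-envelope calculation; the arm length is LIGO's actual 4 kilometers, while the displacement is the essay's round figure, so the result is only an order of magnitude:

```python
arm_length = 4_000      # meters: length of each LIGO arm
displacement = 1e-18    # meters: "a billionth of a billionth of a meter"

strain = displacement / arm_length
print(f"strain h = dL / L ~ {strain:.1e}")
# ~2.5e-22: a fractional change in length of roughly one part in 10**21 to 10**22
```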

The need for two detectors was obvious. Despite every effort to eliminate backgrounds, such a minuscule effect would be hard to take seriously unless it had been observed nearly simultaneously in two widely separated locations. This was the case; the detections at Livingston and Richland were separated by only seven milliseconds.

Four hundred years of observational astronomy, from Galileo’s telescope to the Hubble Space Telescope, has enriched our view of the universe immeasurably. The study of gravitational radiation offers the potential for us to take the next step. The universe became transparent to electromagnetic radiation only once it had cooled sufficiently for atoms to form, some 380,000 years after the Big Bang. Gravitational radiation suffers no such limitation. Someday we may even use it as a tool to observe the inflationary expansion the universe is presumed to have undergone immediately after the Big Bang.

We are at the dawn of a new era in astronomy. If all goes well and the importance of studying gravitational radiation is appreciated, twenty years from now we will anxiously be awaiting reports from LISA (the Laser Interferometer Space Antenna), whose spacecraft will form interferometer arms five million kilometers long.

Charles Seife
Professor of Journalism, New York University; Former Journalist, Science Magazine; Author, Hawking Hawking

A city-slicker statistician was driving through the backwoods of rural Texas, so the story goes, when she slammed on the brakes. There, right by the side of the road, was a barn that bore witness to a nigh-impossible feat of marksmanship. The barn was covered with hundreds of neat little white bullseyes, each of which was perforated with a single bullet hole in the dead center.

The incredulous statistician got out of the car and examined the barn, muttering about Gaussian distributions and probability density functions. She didn't even notice that Old Joe had sidled up to her, ancient Winchester slung over his shoulder.

Joe cleared his throat. With a start, the statistician looked at Joe with amazement. "Four hundred and twelve targets, every single one of which is hit in the center with less than 2 percent deviation every time... the odds against that are astronomical! You must be the most accurate rifleman in history. How do you do it?"

Without a word, Joe walked ten paces away from the barn, spun round, raised his rifle, and fired. A slug thunked dully into the wood siding. Joe casually pulled a piece of chalk out of his overalls as he walked back toward the barn, and, after finding the hole where his bullet had just struck, drew a neat little bullseye around it.

There are far too many scientists who have adopted the Texas Sharpshooter's methods, and we're beginning to feel the effects. For example, there's a drug on the market—just approved—to treat Duchenne muscular dystrophy, an incurable disease. Too bad it doesn't seem to work.

The drug, eteplirsen, received a lot of fanfare when researchers announced that the drug had hit two clinical bullseyes: It increased the amount of a certain protein in patients' muscle fibers, and patients did better on a certain measure known as the six-minute walk test (6MWT). The drug was effective! Or so it seemed, if you didn't know that the 6MWT bullseye was painted on the wall well after the study was underway. Another bullseye that was drawn before the study started, the number of certain white blood cells in muscle tissue, was hastily erased. That's almost certainly the sign of a missed target.

Looking at all the data and all the scientists' prognostications makes it fairly clear that the drug didn't behave the way researchers hoped. Eteplirsen's effectiveness is highly questionable, to put it mildly. Yet it was trumpeted as a big breakthrough and approved by the FDA. Patients can now buy a year's supply for about $300,000.

Texas-style sharpshooting—moving the goalposts and cherry-picking data so that results seem significant and important when they're not—is extremely common; check out any clinical trials registry and you'll see just how frequently endpoints are tinkered with. It goes almost without saying that a good number of these changes effectively turn the sow's ear of a negative or ambiguous result into the silk purse of a scientific finding worthy of publication. No wonder then that many branches of science are mired in replicability crises; there's no replicating a finding that's the result of a bullseye changing positions instead of reflecting nature's laws.
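A quick simulation shows why drawing the bullseye afterward inflates false positives. The Python sketch below (all numbers are arbitrary) runs many "trials" in which a drug has no effect at all, measures several endpoints of pure noise, and then reports only the best one:

```python
import random
from statistics import NormalDist

random.seed(0)

def one_null_trial(n_endpoints=10, n_patients=50):
    """Simulate a trial of a useless drug: every endpoint is pure noise.
    Return the smallest one-sided p-value across all endpoints."""
    p_values = []
    for _ in range(n_endpoints):
        # Standardized mean of n_patients standard-normal "outcomes";
        # under the null hypothesis it is itself standard normal.
        z = sum(random.gauss(0, 1) for _ in range(n_patients)) / n_patients ** 0.5
        p_values.append(1 - NormalDist().cdf(z))
    return min(p_values)

trials = 2000
false_positives = sum(one_null_trial() < 0.05 for _ in range(trials))
print(f"'significant' best endpoint in {false_positives / trials:.0%} of null trials")
# With 10 endpoints, roughly 1 - 0.95**10, about 40%, of no-effect trials yield a "win".
```

Cherry-picking the best of ten endpoints turns a nominal one-in-twenty false-positive rate into something closer to two-in-five, which is the sharpshooter's chalk at work.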

The Texas Sharpshooter problem should be more widely known not just by scientists, so that we can move toward a world with more transparency about changing protocols and unfixed endpoints, but also by the public. Maybe, just maybe, that will make us a little less impressed by the never-ending procession of supposed scientific marksmen—researchers whose results are little more permanent than chalk marks on the side of a barn.

Laura Betzig
Anthropologist; Historian

The gist of the “ideal free distribution” is that individuals in the best of all possible worlds should distribute themselves in the best of all possible ways. They should sort themselves out across space and time so as to avoid predators, find prey, get mates, and leave as many descendants as they can behind. Where information is imperfect, the best spots will be missed; and where mobility is blocked, distributions will be “despotic.” But where information is unlimited and mobility is unrestrained, distributions will be “ideal” and “free.”

The idea is intuitively obvious, and it has predictive power. It works for aphids. It works for sticklebacks. And it works for us.
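The model's core prediction is easy to simulate. In the common "input matching" version, each forager joins whichever patch currently offers the best per-capita payoff, and the population ends up distributed in proportion to patch quality, so that no one can do better by moving. A minimal Python sketch with made-up patch values:

```python
# Hypothetical patch qualities (resource input per unit time).
patches = {"riverbank": 60.0, "grove": 30.0, "scrubland": 10.0}
occupants = {name: 0 for name in patches}

for _ in range(100):  # 100 foragers arrive one at a time
    # Each newcomer picks the patch with the highest per-capita share if it joined.
    best = max(patches, key=lambda p: patches[p] / (occupants[p] + 1))
    occupants[best] += 1

print(occupants)  # roughly {'riverbank': 60, 'grove': 30, 'scrubland': 10}
# Occupancy matches resource ratios, so per-capita intake equalizes across patches:
for name in patches:
    print(name, round(patches[name] / occupants[name], 2))
```

Because numbers track resources, per-capita rewards even out across patches, which is the textbook ideal-free prediction.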

Over most of the long stretch of the human past, our distributions were more or less ideal-free. We usually moved around with our prey, following the plants we collected and the animals we tracked. Some foragers, even now, are more footloose than others: On the Kalahari, hunters with access to waterholes, or n!oresi, make more claims to being a big man, or n!a, and have bigger families. But across continents—from Africa to the Americas to Australia—the most reproductively successful forager fathers raise children in the low double digits; and the most reproductively successful forager mothers do too. Genetic evidence backs that up. In contemporary populations including the Khoisan, elevated levels of diversity on the human X chromosome, and very low diversity on the human Y, suggest consistent sex differences in reproductive variance. A larger fraction of the female population has reproduced; a larger fraction of the male population has not. But overall, those differences are small.

After plants and animals were domesticated across the Fertile Crescent, our distributions became more despotic. Agriculture spread up and down the Nile Valley; and within a millennium Menes, who founded the first dynasty, had created an empire that endured for around 3000 years. When the 18th-dynasty pharaoh Amenhotep III married a daughter of the king of Mitanni, she brought along a harem of 317 women, with their hand-bracelets, foot-bracelets, earrings and toggle-pins, says a letter dug up at Amarna; and from ostraca and scarabs in and beyond the 19th-dynasty pharaoh Rameses II’s tomb, archaeologists have uncovered the names of ninety-six sons. From one end of the map to the other—from Egypt, to Mesopotamia, to the Ganges, to the Yellow River, to the Valley of Mexico, to the Andes—overlords rose up wherever the subjected were trapped, and they left behind hundreds of daughters and sons. Again the genetic evidence matches. Geographically diverse samples of Y chromosome sequences suggest a couple of reproductive bottlenecks over the course of human evolution. One from around 40,000 to 60,000 years ago coincides with our moves out of Africa into Eurasia, when the effective breeding population among women became more than twice the effective breeding population among men. A second bottleneck, from around 8,000 to 4,000 years ago, overlaps with the Neolithic, when effective breeding population among women became seventeen times the effective breeding population among men.

Then in 1492, Columbus found a New World. And over the last half-millennium, the flow of bodies, and the flow of information, grew in unprecedented ways. Despotisms collapsed. And distributions from sea to sea became relatively ideal, and free.

Our pursuit of the ideal free distribution is as old as we are. We’ve always pushed back against ignorance, and forward against borders. From our migrations around Africa; to our migrations out of Africa and into Eurasia; to our migrations out of Europe, Africa and Asia into the Americas; to our first intrepid earth orbits aboard Friendship 7, and beyond; we’ve risked our lots for a better life. I hope we never stop.

Phil Rosenzweig
Professor of Strategy and International Business at IMD, Lausanne, Switzerland; Author, Left Brain, Right Stuff

A few years ago, a reporter at a leading financial daily called with an intriguing question: “We’re doing a story about decision making, and asking researchers whether they follow their own advice.” I must have chuckled because she continued: “I just talked with one man who said, ‘For heaven’s sake, you don’t think I actually use this stuff, do you? My decisions are as bad as everyone else’s!’”

I suspect my colleague was being ironic, or perhaps tweaking the reporter. I gave a few examples I have found useful—sunk cost fallacy, regression toward the mean, and more—but then focused on one concept that everyone should understand: Type I and Type II errors, or false positives and false negatives.

The basic idea is straightforward: We should want to go ahead with a course of action when it will succeed and refrain when it will fail; accept a hypothesis when it is true and reject when it is false; convict a defendant who is guilty and free one who is innocent. Naturally, we spend time and effort to improve the likelihood of making the right decision. But since we cannot know for sure we will be correct, we also have to consider the possibility of error. Type I and Type II thinking forces us to identify the ways we can err, and to ask which error we prefer.

In scientific research, we want to accept new results only when they are demonstrably correct. We wish to avoid accepting a false result, or minimize the chance of a Type I error, even if that means we commit a Type II error and fail to accept results that turn out to be true. That’s why we insist that claims be supported by statistically significant evidence, with the threshold often set (by convention) so that the probability of the observation arising from random effects alone is less than one in twenty (p < .05) or less than one in a hundred (p < .01). (How we know the probability distribution in the first place leads us into the debate between frequentists and Bayesians, an exceedingly interesting question but beyond the scope of this note.)
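For readers who want to see the trade-off in numbers, here is a minimal Python sketch (the effect size and sample size are made up) computing the two error rates for a one-sided z-test: tightening the Type I rate, alpha, raises the Type II rate, beta.

```python
from statistics import NormalDist

def error_rates(effect_size, n, alpha):
    """One-sided z-test of 'no effect' against a true standardized effect
    of `effect_size` with n observations. Returns (alpha, beta)."""
    z_crit = NormalDist().inv_cdf(1 - alpha)                   # threshold implied by alpha
    beta = NormalDist().cdf(z_crit - effect_size * n ** 0.5)   # chance of missing the real effect
    return alpha, beta

for alpha in (0.05, 0.01, 0.001):
    a, b = error_rates(effect_size=0.3, n=50, alpha=alpha)
    print(f"alpha = {a:<6} beta = {b:.2f}")
# Demanding stronger evidence (smaller alpha) makes false positives rarer,
# but lets more genuine effects slip through undetected (larger beta).
```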

Similarly, in criminal trials we want to convict a defendant only when we are very certain of guilt. Here again a Type II error is far preferable to a Type I error, a view expressed in Blackstone’s ratio, which says it is better to let ten guilty men go free than to convict an innocent man, since the severity of a false positive is not only great but perhaps irreversible. The preference for Type II error is reflected in cornerstones of Anglo-Saxon law such as the presumption of innocence and the burden of proof resting with the prosecution.

In other settings the greater danger is to commit a Type II error. Consider competition in business, where rival firms seek payoffs (revenues and profits) that accrue disproportionately to a few, or perhaps are winner-take-all. Although bold action may not lead to success, inaction almost inevitably leads to failure—because there is a high likelihood that some rival will act and succeed. Hence the dictum made famous by Intel’s Andy Grove: Only the paranoid survive. Grove did not say that taking risky action predictably leads to success; rather, he observed that in situations of intense competition, those who survive will have taken actions that involved high risk. They will have understood that when it comes to technological breakthroughs or the launch of new products, it is better to act and fail (a Type I error) than to fail to act (a Type II error).

To summarize, a first point is that we should not only consider the outcome we desire, but also the errors we wish to avoid. A corollary is that different kinds of decisions favor one or the other. A next point is that for some kinds of decision, our preference may shift over time. A young adult who is considering marriage may prefer a Type II error, finding it more prudent to wait rather than to marry too soon. As the years pass, he or she may be more willing to marry even if the circumstances do not seem perfect—a Type I error—rather than never marry—a Type II error.

There is a final point, as well: Discussion of Type I and Type II errors can reveal different preferences among parties involved in a decision. To illustrate, expeditions to the summit of Mount Everest are very risky in the best of times. Of course, climbers want to push on if they can reach the summit safely, and to turn back if pressing on would mean death. That’s the easy part. More important is to discuss preferences for error in advance: Would they rather turn back when they could have made it (Type II) or keep going and die in the effort (Type I)? If the choice seems obvious to you, it may be because you have not invested the time and money needed to be within reach of a dream, nor are you in a state of exhaustion and oxygen deprivation when the moment of decision arrives. As past Everest expeditions have shown, sometimes tragically, members of a team—leaders, guides, and climbers—should do more than focus on the outcomes they most desire. They should also discuss the consequences of error.

Steve Fuller
Philosopher; Auguste Comte Chair in Social Epistemology, University of Warwick; Author, The Proactionary Imperative: A Foundation for Transhumanism

Adaptive preference is a concept that illuminates what happens both in the lab and in life. An adaptive preference results when we bend aspiration towards expectation in light of experience. We come to want what we think is within our grasp. More than a simple "reality check," adaptive preference formation involves disciplining one’s motivational structure with the benefit of hindsight. Much of what passes for "wisdom" in life is about the formation of adaptive preferences.

When the social psychologist Leon Festinger suggested the idea in the 1950s, it provided a neat account of how people maintain a sense of autonomy while under attack by events beyond their control. He might have been talking about how the US and the USSR held their nerve in Cold War vicissitudes, but in fact he was talking about how a millennial religious cult continued to flourish even after its key doomsday prediction had failed to materialize.

In the 1980s, the social and political philosopher Jon Elster brilliantly generalized the idea of adaptive preference in terms of the complementary phenomena of "sour grapes" and "sweet lemons":  We tend to downgrade the value of previously desired outcomes as their realization becomes less likely and upgrade the value of previously undesired outcomes as their realization becomes more likely.

The interesting question is whether adaptive preference formation is rational. Festinger’s original case study seemed to imply an answer of no. After a few hours of doubt and despair, the cult regrouped by interpreting the deity’s failure to end the world as a sign that the cult had done sufficient good to reverse their fate. This emboldened them to proselytise still more vigorously.

One might think that had the cult responded rationally to the failed prophecy, they would have simply abandoned any belief that they were in a special relationship with a higher deity. Instead the cult did something rather subtle. They did not make the obvious "irrational" move of denying that the prophecy had failed or postponing doomsday to a later date. Rather, they altered their relationship to the deity, who previously appeared to claim that there was nothing humans could do to reverse their fate. The terms of this renegotiated relationship then gave the cult members a sense of control over their lives which served to renew their missionary zeal.

This is an instance of what Elster called "sweet lemons," and it is not as obviously irrational as its counterpart, "sour grapes." In fact, a "sweet lemon detector," so to speak, may be a key element of the motivational structure of people who are capable of deep learning from negative experience. Such people come to acquire a clearer sense of what they have truly valued all along so that they are reinvigorated by adversity.

The phenomenon of sweet lemons is disorienting to the observer because it highlights just how much we presume that others share our overarching values. We do not simply respect the autonomy of others. We also expect, somewhat paradoxically, that by virtue of their autonomy they will become more like us.  Thus, the post-prophecy behavior of Festinger’s cult is confusing because they carried on in a version of what they had previously done. In so doing, they learned from experience. But what they learned was to become more like themselves.

Adaptive preferences are arguably scalable, perhaps even to the level of entire cultures and species. A striking feature of human history is that widespread disruption and destruction do not necessarily result in people avoiding the precipitating behaviors in the future. For example, within a half-century of mass-produced automobiles, the original objections to their introduction had been realized: The cars themselves were a major source of air and noise pollution. The roads required for cars ravaged the environment and alienated their drivers from nature.

Yet none of that seemed to matter—or at least not enough to lead people to abandon automobiles. Rather, car production worldwide has continued apace while becoming a bit more environment-friendly to avoid the worst envisaged outcomes. For better or worse, we still appear to buy the value package that Henry Ford and others were selling in the early 20th century: We value the car’s freedom and speed not only over the connectedness to nature offered by the horse as presented in Ford’s day but also the relatively low ecological impact offered by mechanized public transport today.

Had Ford not introduced the mass-produced car in the early 20th century, humanity might not have discovered just how much it valued personal mobility. At least, that’s how it looks from the "sweet lemons" version of adaptive preference. Whether a general policy of sweet lemons lets us survive in the future is an open question. But if we do become extinct, it is likely to have been a byproduct of our trying to be better versions of what experience had taught us to believe we are. 

David Christian
Director, Big History Institute and Distinguished Professor in History, Macquarie University, Sydney; Author, Origin Story

The idea of the “Noösphere,” or “the sphere of mind,” emerged early in the 20th century. It flourished for a while, then vanished. It deserves a second chance.

The Noösphere belongs to a family of concepts describing planetary envelopes or domains that have shaped the earth’s history: biosphere, hydrosphere, atmosphere, lithosphere, and so on. The idea of a distinct realm of mind evolved over several centuries. The 18th-century French naturalist, Buffon, wrote of a “realm of man” that was beginning to transform the earth’s surface. Nineteenth-century environmental thinkers such as George Perkins Marsh tried to measure the extent of those transformations, and Alexander von Humboldt declared that our impact on the planet was already “incalculable.”

The word “Noösphere” emerged in Paris, in 1924, from conversations between the Russian geologist, Vladimir Vernadsky, and two French scholars, the paleontologist and priest, Teilhard de Chardin, and the mathematician, Édouard Le Roy. In a lecture he gave in Paris in 1925, Vernadsky had already described humanity and the collective human mind as a new “geological force,” by which he seems to have meant a force comparable in scale to mountain building or the movement of continents.

In a 1945 essay, Vernadsky described the Noösphere as: “a new geological phenomenon on our planet. In it for the first time, man becomes a large-scale geological force.” As one of many signs of this profound change, he noted the sudden appearance on earth of new minerals and purified metals: “that mineralogical rarity, native iron, is now being produced by the billions of tons. Native aluminum, which never before existed on our planet, is now produced in any quantity.” In the same essay, published in the year of his death, and fifteen years before Yuri Gagarin became the first human to enter space, Vernadsky wrote that the Noösphere might even launch humans “into cosmic space.”

Unlike Gagarin, the idea of a Noösphere did not take off, perhaps because of the taint of vitalism. Both de Chardin and Le Roy were attracted to Henri Bergson’s idea that evolution was driven by an “Élan vital,” a “vital impulse” or “vital force.” Vernadsky, however, was not tempted by vitalism in any form. As a geologist working in the Soviet Union he seems to have been a committed materialist.                                

Today, it is worth returning to the idea of a Noösphere in its non-vitalist, Vernadskyian, version. Vernadsky is best known for developing the idea of a “biosphere,” a sphere of life that has shaped planet earth on geological time scales. His best-known work on the subject is The Biosphere. Today, the idea seems inescapable, as we learn of the colossal role of living organisms in creating an oxygen-rich atmosphere, in shaping the chemistry of the oceans, and in the evolution of minerals and rock strata such as limestones.

The sphere of mind evolved within the biosphere. All living organisms use information to tap flows of energy and resources, so in some form we can say that “mind” had always played a role within the biosphere. But even organisms with developed neurological systems and brains foraged for energy and resources individually, in pointillesque fashion. It was their cumulative impact that accounted for the growing importance of the biosphere. Humans were different. They didn’t just forage for information; they domesticated it, just as early farmers domesticated the land, rivers, plants and animals that surrounded them. Like farming, domesticating information was a collective project. The unique precision and bandwidth of human language allowed our ancestors to share, accumulate and mobilize information at the level of the community and, eventually, of the species, and to do so at warp speed. And increasing flows of information unlocked unprecedented flows of energy and resources, until we became the first species in four billion years that could mobilize energy and resources on geological scales. “Collective learning” made us a planet-changing species.

Today, students of the Anthropocene can date when the Noösphere became the primary driver of change on the surface of planet earth. It was in the middle of the 20th century. So Vernadsky got it more or less right. The sphere of mind joined the pantheon of planet-shaping spheres just over fifty years ago. In just a century or two, the Noösphere has taken its place alongside the other great shapers of our planet’s history: cosmos, earth and life.

Freed of the taint of vitalism, the idea of a Noösphere can help us get a better grip on the Anthropocene world of today.

Jerry A. Coyne
Professor Emeritus, Department of Ecology and Evolution, University of Chicago; Author, Why Evolution is True; Faith Versus Fact: Why Science and Religion are Incompatible.

A concept that everyone should understand and appreciate is the idea of physical determinism: that all matter and energy in the universe, including what’s in our brain, obey the laws of physics. The most important implication is that we have no “free will”: At a given moment, all living creatures, including ourselves, are constrained by their genes and environment to behave in only one way—and could not have behaved differently. We feel like we make choices, but we don’t. In that sense, “dualistic” free will is an illusion.

This must be true from the first principles of physics. Our brain, after all, is simply a collection of molecules that follow the laws of physics; it’s simply a computer made of meat. That in turn means that given the brain’s constitution and inputs, its output—our thoughts, behaviors and “choices”—must obey those laws. There’s no way we can step outside our mind to tinker with those outputs. And quantum effects at the molecular level, which probably don’t affect our acts anyway, can’t possibly give us conscious control over our behavior.

Physical determinism of behavior is also supported by experiments that trick people into thinking they’re exercising choice when they’re really being manipulated. Brain stimulation, for instance, can produce involuntary movements, like arm-waving, that patients claim are really willed gestures. Or we can feel we’re not being agents when we are, as with Ouija boards. Further, one can use fMRI brain scans to predict, with substantial accuracy, people’s binary decisions up to ten seconds before they’re even conscious of having made them.

Yet our feeling of volition—that we can choose freely, for instance, among several dishes at a restaurant—is strong: so strong that I find it harder to convince atheists that they don’t have free will than to convince religious believers that God doesn’t exist. Not everyone is religious, but all of us feel that we could have made different choices.

Why is it important that people grasp determinism? Because realizing that we can’t “choose otherwise” has profound implications for how we punish and reward people, especially criminals. It can also have salubrious effects on our thoughts and actions.   

First, if we can’t choose freely, but are puppets manipulated by the laws of physics, then all criminals or transgressors should be treated as products of genes and environments that made them behave badly. The armed robber had no choice about whether to get a gun and pull the trigger. In that sense, every criminal is impaired. All of them, whether or not they know the difference between right and wrong, have the same excuse as those deemed “not guilty by reason of insanity.”

Now this doesn’t mean that we shouldn’t punish criminals. We should—in order to remove them from society when they’re dangerous, reform them so they can rejoin us, and deter others from aping bad behavior. But we shouldn’t imprison people as retribution—for making a “bad choice.” And of course we should still reward people, because that rewires their own brains, and those of onlookers, in a way that promotes good behavior. We can therefore retain the concept of personal responsibility for actions, but reject the idea of moral responsibility, which presumes that people can choose to do good or bad.

Beyond crime and punishment, how should the idea of determinism transform us? Well, understanding that we have no choices should create more empathy and less hostility towards others when we grasp that everyone is the victim of circumstances over which they had no control. Welfare recipients couldn’t have gotten jobs, and jerks had no choice about becoming jerks. In politics, this should give us more empathy for the underprivileged. And realizing that we had no real choices should stave off festering regrets about things we wished we had done differently. We couldn’t have.  

Many religions also depend critically on this illusory notion of free will. It’s the basis, for instance, for Christian belief that God sends people to heaven or hell based on whether they “choose” to accept Jesus as their savior. Also out the window is the idea that evil exists because it’s an unfortunate but necessary byproduct of the free will that God gave us. We have no such will, and without it the Abrahamic religions dissolve into insignificance.

We should accept the determinism of our behavior because, though it may make us uncomfortable, it happens to be true—just as we must accept our own inevitable but disturbing mortality. At the same time, we should dispel the misconceptions about determinism that keep many from embracing it: that it gives us license to behave how we want, that it promotes lassitude and nihilism, that it means we can’t affect the behavior of others, and that embracing determinism will destroy the fabric of society by making people immoral. The fact is that our feeling that we have free will, and our tendency to behave well, are so strong—probably partly ingrained by evolution—that we’ll never feel like the meat robots we are. Determinism is neither dangerous nor dolorous.

There are some philosophers who argue that while we do behave deterministically, we can still have a form of free will, simply redefining the concept to mean things like “our brains are very complex computers” or “we feel we are free.” But those are intellectual carny tricks. The important thing is to realize that we don’t have any choice about what we do, and we never did. We can come to terms with this, just as we come to terms with our mortality. Though we may not like such truths, accepting them is the beginning of wisdom.  

Nicholas A. Christakis
Sterling Professor of Social and Natural Science, Yale University; Co-author, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

There is an old word in our language, equipoise, which has been around since at least the 16th century—when it meant something like “an equal distribution of weight.” With respect to science, it means, roughly, standing at the foot of a valley and not knowing which way is best to proceed to get up high—poised between alternative theories and ideas about which, given current information, one is neutral.

Use of the word peaked around 1840 and has declined roughly five-fold since then, according to Google Ngram, though it appears to be enjoying an incipient resurgence in the last decade. But attention to equipoise ought to be greater. 

The concept found a new application in the 1980s, when ethicists were searching for deep justifications for the conduct of randomized clinical trials in medicine. A trial was only justified, they rightly argued, when the doctors and researchers doing the trial (and the medical knowledge they were relying on) saw the new drug or its alternative (a placebo, perhaps) as potentially equally good. If those doing the research felt otherwise, how could they justify the trial? Was it ethical to place patients at risk of harm to do research if doctors had reason to suppose that one course of action might be materially better than another?

So equipoise is a state of equilibrium, where a scientist cannot be sure which of the alternatives he or she is contemplating might be true.

In my view, it is related to that famous Popperian sine qua non of science itself: falsifiability. Something is not science if it is not capable of disproof. We cannot even imagine an experiment that would disprove the existence of God—so that is what makes a belief in God religion. When Einstein famously conjectured that matter and energy warp the fabric of space and time itself, experiments to test the claim were not possible, but they were at least imaginable, and the theory was capable of disproof. And, eventually, he was proven right, first based on astronomical observations regarding the orbit of Mercury, and most recently by the magnificent discovery at LIGO of gravitational waves from the collision of two black holes over a billion years ago. Yet, even if he had been wrong, his conjecture would still have been scientific.

If falsifiability solves the “problem of demarcation” that Popper identified between science and non-science, equipoise addresses the problem of origin: Where ought scientists start from? Thinking about where scientists do—and should—start from is often lacking. Too often, we simply begin from where we are.

In some ways, therefore, equipoise is an antecedent condition to falsifiability. It is a state we can be in before we hazard a guess that we might test. It is not quite a state of ignorance, but rather a state of quasi-neutrality, when glimmers of ideas enter our minds.

Scientific equipoise tends to characterize fields both early and late in their course, for different reasons. Early in a field or in a new area of research, it is often true that little is known about anything, so any direction can seem promising, and might actually be productive. An exciting neutrality prevails. Late in the exploration of a field, much is known, and so it might be hard to head towards new things, or the new things, even if true, might be rather small or unimportant. An oppressive neutrality can rule.

My reason for thinking that this concept ought to be more widely known is that equipoise carries with it aspects of science that remain sorely needed these days. It connotes judgment—for it asks what problems are worthy of consideration. It connotes humility—for we do not know what lies ahead. It connotes open vistas—because it looks out at the unknown. It connotes discovery—because, whatever way forward we choose, we will learn something. And it connotes risk—because it is sometimes dangerous to embark on such a journey.

Equipoise is a state of hopeful ignorance, the quiet before the storm of discovery.

luca_de_biase's picture
Journalist; Editor, Nova 24, of Il Sole 24 Ore

The way predictions are made is changing. Data scientists are competing with traditional statisticians, and “big data” analysis is competing with the study of "statistical samples." This change mirrors a wider paradigm shift in the conception of society and in what governs its structural dynamics. In order to understand this change, one needs to know the "power law."

If facts happen independently and at random, then, plotted on two axes, they will tend to distribute along a Gaussian curve, in the shape of a bell, with most happenings concentrating around the average. But if facts are interlinked and co-evolve, so that a change in one quantity results in a proportional change in another, they are more likely to distribute along a power law curve, shaped like a ski jump, in which the average is not important and polarization is unavoidable.

The distributions of a large set of phenomena observed in physics, biology, and astronomy follow a power law, but this kind of curve became widely discussed when it was applied to the understanding of the Internet. When researchers studied the number of links to specific web pages, it soon became clear that some pages attracted more links and that, as the network grew, new pages were more and more likely to link to those very pages. In such a network, some nodes become hubs and other pages are only destinations: The number of links per page follows a power law, and it was possible to predict that the dynamics of the network would bring about a polarization of resources, as in the Barabási-Albert model, an algorithm invented by Albert-László Barabási and Réka Albert. This kind of understanding has consequences.
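
As a rough illustration of how preferential attachment produces such hubs, here is a minimal Python sketch of the Barabási-Albert growth rule; the node counts and parameters are arbitrary, chosen only for demonstration.

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=None):
    """Grow a network of n nodes in which each new node attaches to m
    existing nodes with probability proportional to their degree
    (preferential attachment)."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists every node once per edge endpoint, so sampling
    # uniformly from it picks nodes in proportion to their degree.
    targets = [node for edge in edges for node in edge]
    for new_node in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for old_node in chosen:
            edges.append((new_node, old_node))
            targets.extend([new_node, old_node])
    return edges

edges = barabasi_albert(n=10000, m=2, seed=1)
degree = Counter(node for edge in edges for node in edge)
# A handful of hubs collect a disproportionate share of the links,
# while most nodes keep only a few: the heavy tail of a power law.
print(sorted(degree.values(), reverse=True)[:10])
```

Sampling uniformly from the list of edge endpoints is a standard shortcut for choosing nodes with probability proportional to their degree, which is what lets early, well-connected pages keep attracting new links.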

As the Internet became more and more important to society, network theory became part of the very notion of social dynamics. In a network society, the power law is becoming the fundamental pattern.

In the social sciences, prediction has often been more a way of shaping the future than a description of what will actually happen. That shaping by predicting rests on the assumptions built into the predicting process: Predicting what will happen in a society relies on an idea of society. When scholars shared the assumptions bundled into the notion of the "mass society"—mass production, mass consumption, mass media, with almost everybody behaving the same, both at work and when consuming—fundamental characteristics were, in their vision, the same, and diversity was randomly distributed: so Gauss ruled. In a mass society most people were average, different people were rare and extreme, and society was therefore described by a "bell curve," a Gaussian curve, the "normal" curve. Polls based on statistical samples were able to predict behaviors.

But in a network society the fundamental assumptions are quite different. In a network society, all characteristics are linked and co-evolve, because a change in one will probably affect the others. In such a society, the average doesn't predict much, and scholars need a different fundamental pattern.

The power law is such a fundamental pattern. In this kind of society, resources are not distributed at random: they co-evolve and they polarize. In finance, as in knowledge, resources are attracted by abundance. The rich unavoidably get richer.

Understanding this pattern is the only way for a network society to oppose inequality without reaching for solutions that worked only in a mass society. Bernardo Huberman, a network theorist, observed that the winner takes all within a category, that is to say within a meaningful context. For example, the best search engine wins in the search engine category, but not necessarily in the whole of the web, and thus not necessarily in the social network category. In such a network, innovation is the most important dynamic for opposing inequality, and real competition is possible only if new categories can emerge. If finance is only one big market, then the winner takes all. If rules make sure that different banks can only play in different categories of financial services, then there is less concentration of resources and less global risk.

In a mass society, everything tends to go toward the average: The middle class wins in a normal distribution of resources. In a network society, resources are attracted by the hub and differences inevitably grow. The mass society is an idea of the past, but the network society is a challenge for the future.

The power law can help us understand, and maybe correct, the dynamics of networks by raising awareness of their fundamental patterns. Predictions are narratives. And good narratives need some empirical observations. Moore's law is useful to those who share the techno-centered narrative of the exponential growth of computing power. The power law is useful to those who want to look critically at the evolution of a network.

kevin_kelly's picture
Senior Maverick, Wired; Author, What Technology Wants and The Inevitable

Why do the successful often fail to repeat their success? Because success is often the source of failure.

Success is a form of optimization—a state of optimal profits, or optimal fitness, or optimal mastery. In this state you can’t do any better than you are. In biology, a highly evolved organism might achieve supreme adaptation to its environment and competitors, maximizing its reproductive success. A camel has undergone millions of revisions in its design to perfect its compatibility with an arid climate. Or in business, a company may have spent many decades perfecting a device until it was the number one bestselling brand. Say it designed and manufactured a manual typewriter that was difficult to improve. Successful individuals, too, discover a skill they are uniquely fit to master—a punk rock star who sings in an inimitable way.

Scientists use a diagram of a mountainous landscape to illustrate this principle. The contours of the undulating landscape indicate the adaptive success of a creature. The higher the elevation of an entity, the more successful it is. The lower, the less fit. The lowest elevation is zero adaptation, or in other words, extinction. The evolutionary history of an organism can thus be mapped over time as its population begins in the foothills of low adaptation and gradually ascends the higher mountains of increased environmental adaptation. This is known in biology and in computer science as “hill climbing.” If the species is lucky, it will climb until it reaches a peak of optimal adaptation. Tyrannosaurus rex achieved peak fitness. The industrial-age Olivetti Corporation reached the peak of the optimal typewriter. The Sex Pistols reached the summit of punk rock.

Their stories might have ended there with ongoing success for ages, except for the fact that environments rarely remain stable. In periods of particularly rapid co-evolution, the metaphorical landscape shifts and steep mountains of new opportunities rise overnight. What for a long time seemed a monumental Mt. Everest can quickly be dwarfed by a new neighboring mountain which shoots up many times higher. During one era dinosaurs, typewriters, or punk rock are at the top; in the next turn, mammals, word processors, and hip-hop tower over them. The challenge for the formerly successful entity is to migrate over to the newer, higher peak. Without going extinct.

Picture a world crammed with nearly vertical peaks, separated by deep valleys, rising and falling in response to each other. This oscillating geography is what biologists describe as a “rugged landscape.” It’s a perfect image of today’s churning world. In this description, in order for any entity to move from one peak to a higher one, it must first descend to the valley between them. The higher the two peaks, the deeper the gulf between them. But descent, in our definition, means the entity must reduce its success. Descent to a valley means an organism or organization must first become less fit, less optimal, less excellent before it can rise again. It must lower its mastery and its chance of survival and approach the valley of death.

This is difficult for any species, organization, or individual. But the more successful an entity is, the harder it is to descend. The more fit a butterfly is to its niche, the harder it is for it to devolve away from that fit. The more an organization has trained itself to pursue excellence, the harder it is to pursue non-excellence, to go downhill into chaos. The greater the mastery a musician gains over her distinctive style, the harder it is to let it all go and perform less well. Each of their successes binds them to their peaks. But as we have seen, sometimes that peak is only locally optimal. The higher global optimum is only a short distance away, but it might as well be forever away, because an entity has to overcome its success by being less successful. It must go down against the grain of its core ability, which is going uphill towards betterment. When your world rewards hill climbing, going downhill is almost impossible.

Computer science has borrowed the concept of hill climbing as a way of discovering optimal solutions to complex problems. This technique uses populations of algorithms to explore a wide space of possible solutions. The possibilities are mapped as a rugged landscape of mountains (better solutions) and valleys (worse). As long as the next answer lands a little “higher uphill” toward a better answer than the one before, the system will eventually climb to the peak, and thus find the best solution. But as in biology, it is likely to converge onto a local “false” summit rather than the higher, globally optimal solution. Scientists have invented many tricks to shake a system out of premature optimization and get it to migrate toward the global optimum. Getting off a local peak and arriving at the very best, repeatedly, demands patience and surrender to imperfection, inefficiency, and disorder.
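
A minimal sketch of the idea in Python; the landscape, step size, and restart strategy are invented for illustration, and real systems use more elaborate escape tricks such as simulated annealing or evolutionary algorithms.

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=1000, rng=random):
    """Greedy hill climbing: try a random nearby point and move there
    only if it is higher. This converges to the nearest peak, which
    may be merely a local one."""
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def with_restarts(f, lo, hi, tries=20, rng=random):
    """One of the simplest escape tricks: restart from many random
    points and keep the best peak found."""
    starts = (hill_climb(f, rng.uniform(lo, hi)) for _ in range(tries))
    return max(starts, key=f)

# A rugged one-dimensional landscape with many peaks of different heights.
f = lambda x: math.sin(3 * x) + 0.5 * math.sin(7 * x) - 0.01 * x * x

print(f(hill_climb(f, x=0.0)))        # height of the nearest peak, often only local
print(f(with_restarts(f, -10, 10)))   # usually a higher peak
```

Random restarts are the crudest such trick: they deliberately throw away a hard-won position, which is exactly the descent into the valley described above.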

Over the long haul, the greatest source of failure is prior success. So, whenever you are pursuing optimization of any type, you want to put into place methods that prevent you from premature optimization on a local peak: Let go at the top.


irene_pepperberg's picture
Research Associate & Lecturer, Harvard; Author, Alex & Me

The term “cognitive ethology” was coined and used by Donald Griffin in the late 1970s and early 1980s to describe a field that he was among the first to champion—the study of versatile thinking by nonhumans and, importantly, how the data obtained could be used to examine the evolution and origins of human cognition. His further emphasis on the study of animal consciousness, however, caused many of his colleagues to shun all his ideas, proverbially “throwing out the baby with the bath water.” Griffin’s term, “cognitive ethology,” nevertheless deserves a closer look and a renaissance in influence. The case is strengthened by an historical examination of the subject.

In the 1970s and 1980s, researchers studying nonhuman abilities were slowly moving away from the two ideologies that had dominated psychology and much of behavioral biology for decades—respectively, behaviorism and fixed-action patterns—but progress was slow. Proponents of behaviorism may have argued that little difference existed between the responses of humans and nonhumans to external stimuli, but attempted to explain all responses to such stimuli in terms of their shaping by reward and punishment, avoiding any discussion of mental representations, manipulation of information, intentionality or the like. Students of biology were taught that animals were basically creatures of instinct that, when exposed to particular stimuli or contexts, engaged in species-specific invariant sequences of actions that would, once initiated, run to completion even if environmental changes occurred; these patterns were thought to be controlled by hard-wired neural mechanisms and, interestingly, also avoided inclusion of reference to any sort of information processing, mental representation, etc.

Then came two major paradigm shifts, one in psychology and one in biology, but not much understanding by scientists of their common underlying themes. One shift, the so-called “cognitive revolution” in psychology, with its emphasis on all the issues ignored by the behaviorists, was initially conceived as relevant only for the study of human behavior. Nevertheless, far-sighted researchers such as Hulse, Fowler, and Honig saw how the human experiments could be adapted to study similar processes in nonhumans. The other shift, the advent of long-term observational studies of groups of nonhumans in nature by researchers (the most familiar being Jane Goodall) who were collecting extensive examples of versatile behavior, showed that nonhumans reacted to unpredictable circumstances in their environment in ways often suggesting human-like intelligence. Cognitive ethology was meant to be a synthesis of what were at the time seen as innovative psychological and biological approaches.

As noted above, disaffection with Griffin’s arguments about animal consciousness unfortunately prevented the term—and the field—of cognitive ethology from taking hold; as a consequence, interdisciplinary research has not lived up to its promise. Psychologists often remain in the lab and prefer to describe their research as “comparative.” The term suggests an openness toward looking at a variety of species and testing for similarities and differences in behavior on numerous tasks, which require, at the least, advanced levels of learning. Unfortunately, such studies usually occur under conditions far removed from natural circumstances. Furthermore, relatively few of their studies actually examine the cognitive processes underlying the exhibited behavior patterns. Similarly, cognitive biologists (a term more common in Europe than in the United States) tend to be reductionist, more likely comparing the neuroanatomy and neurophysiology of various species and, even when comparing behavior patterns in the field, often simply argue for either homology or analogy when similarities are found or merely highlight any observed differences. The connection is not always made between neural similarities and differences and how these are expressed in specific types of behavior. Studies from these areas come close to the goals of cognitive ethology, but generally (although not exclusively) still involve less emphasis on examining cognitive processes with respect to the whole animal than does “cognitive ethology.”

For example: Psychologists may test whether a songbird in a laboratory can distinguish between the vocalizations of a bird in a neighboring territory versus that of a stranger and determine what bits of song are relevant for that discrimination. Lab biologists may determine which bits of brain are responsible for these discriminations. Field biologists may collect information testing whether the size of the repertoire of a bird of a given species correlates with the quality of its territory or its reproductive success.

A cognitive ethologist, however, will not only be interested in these data (generally obtained via some kind of collaboration), but will also examine how and why a bird chooses to learn a particular song or set of songs, why it chooses to sing a particular song from its entire repertoire to defend its territory against a neighbor versus a stranger, how that choice varies with the environmental context (e.g., the distance from the intruder, the type of foliage separating them, the song being sung by the intruder), how other males respond to the interaction, and how the females in the area may make their choice of mate based on the outcome of such male-male interactions. It is this type of inclusive research that provides real knowledge of the use of the song system. 

I thus argue that the time has come to focus on the advantages of looking at nonhumans through the lens of cognitive ethology. Cognitive ethology should again be considered a means of bringing new views and methodologies to bear on the study of animal behavior and of encouraging collaborative projects. Whether the topic is communication, numerical competence, inferential or probabilistic reasoning, or any of a number of other possibilities, studies using an approach based on cognitive ethology will provide both a deeper and broader understanding of the data. Furthermore, a renewed interest in nonhuman cognition and intelligence, and how such intelligence is used in the daily life of nonhumans, will provide exciting evolutionary insights, as Griffin had proposed: By examining and comparing mental capacities of large numbers of species, we can surmise much about the origins of human abilities.

juan_enriquez's picture
Managing Director, Excel Venture Management; Co-author (with Steve Gullans), Evolving Ourselves

On a brisk May day, in 1967, Tilly Edinger crossed a peaceful, leafy Cambridge, Massachusetts street for the last time. Ironic, given that she survived challenge after challenge. In the 1920s, after becoming a paleontologist against her father’s, and the profession’s, wishes, Edinger and a few others began systematically measuring the fossilized heads of various animals and human ancestors. The idea was to understand the evolution of the cranial cavity and attempt to infer changes in brain anatomy, thus birthing paleoneurology. Then she lost everything, fleeing Frankfurt just after Kristallnacht. As Hitler wiped out most of her relatives, she painstakingly rebuilt her life in the US. Although a lifelong friend and correspondent of Einstein, she led a somewhat reclusive existence, and, as occurred with Rosalind Franklin, she was underestimated and unappreciated. On May 6th she left Harvard’s Museum of Comparative Zoology and, having lost most of her hearing in her teenage years, she never heard the car coming. Thus ends the first chapter of the field of paleoneurology.

Once the initial, crude skull-measuring methods, such as sand and water displacement, were established, and after a few fossils were measured, there was not a whole lot of rapid progress. Few scholars bet their careers on the new field, and even fewer graduate students signed up. Progress was fitful. Gradually, measurements improved. Liquid latex, Dentsply, and plasticine gave way to exquisite 3D computer scans. But the field remained largely a data desert. One leading scholar, Ralph Holloway, estimated the entire global collection of measurable hominid skulls numbered around 160, about “one brain endocast for every 235,000 years of evolutionary time.” Lack of data eventually led to a civil war over measurement methods and conclusions between two of the core leaders of paleoneurology, Professors Falk and Holloway.

Nevertheless, the fundamental question paleoneurology seeks to address, “How do brains change over time?” goes straight to the core of why we are human. Now, as various technologies develop, we may be able to get a whole lot savvier about how brains changed. Ancient DNA and full genomes are beginning to fill in some gaps. Some even claim you can use genomes to predict faces. Perhaps soon we could get better at partially predicting brain development just from sequence data. And, alongside new instruments, big data emerging from comparative and developmental neurology experiments provides many opportunities to hypothesize answers to some of the most basic gaps in paleoneurology. Someday we may even revive a Neanderthal and be able to answer why, given their bigger brains and likely at least comparable intelligence, they did not survive.

There is a second, more fundamental reason why paleoneurology might become a common term: Brains used to change slowly. That is no longer the case. The comparative study of brains over time becomes ever more relevant as we place incredible evolutionary pressures on the most malleable of our organs.

How we live, eat, absorb information, and die are all radically different: a daytime hunter-gatherer species became a mostly dispersed, settled, agricultural species. And then, in a single century, we became a majority urban species. Studying rapid changes in brains, animal and human, gives us a benchmark for understanding how changes occurred over thousands of years, then hundreds of years, and even over the past few decades.

Paleoneurology should retool itself to focus on changes occurring in far shorter timespans, on the rapid rewiring that can result in explosions of autism, on impacts of drastic changes in diet, size, and weight. We need a historic context for the evolution that occurs as our core brain inputs shift from observing nature to reading pages and then digital screens. We have to understand what happens when brains that evolved around contemplation, observation, boredom, interrupted by sudden violence, are now bombarded from every direction as our phones, computers, tablets, TVs, tickers, ads, and masses of humans demand an immediate assessment and response. We are de facto outsourcing and melding parts of our memories with external devices, like our PDAs.

What remains a somewhat sleepy, slow-moving field should take up the challenge of understanding enormous change in short periods of time. It is possible that a 1950s brain, for better and worse, might look Jurassic when compared, on a wiring and chemical level, to a current brain. The same might be true for animals, like the once-shy pigeons that migrated from farms into cities and became aggressive pests.

Given that one in five Americans is now taking at least one mind-altering drug, the experiment continues and accelerates. Never mind fields like optogenetics, which alter and reconnect our brains, thoughts, memories, and fears using light stimuli projected inside the brain. Eventual implants will radically change brain design, inputs, and outputs. Having a baseline, a good paleoneurological history of brain design, for ourselves and many other basic species, may teach us a lot about what our brains were, where they came from, but even more important, what they are becoming. I bet this is a challenge Tilly Edinger would have relished.

sabine_hossenfelder's picture
Theoretical Physicist; Research Fellow, Frankfurt Institute for Advanced Studies; Author, Existential Physics

Lawns in public places all suffer from the same problem: People don’t like detours. Throughout the world we search for the fastest route, the closest parking spot, the shortest way to the restroom—we optimize. Incremental modification, followed by evaluation and readjustment, guides us to solutions that maximize a desired criterion. These little series of trial and error are so ingrained we rarely think about them. But optimization, once expressed in scientific terms, is one of the most versatile scientific concepts we know.

Optimization under variation isn’t only a human strategy but a ubiquitous selection principle that you can observe everywhere in nature. A soap bubble minimizes its surface area. Lightning takes the path of least resistance. Light travels the path that takes the least time. And with only two slight twists, optimization can be applied even more widely.

The first twist is that many natural systems don’t actually perform the modifications—they work by what physicists call “virtual variations.” This means that from all the possible ways a system could behave, the one we observe is quantifiably optimal: It minimizes a function called the “action.” Using the mathematical procedure known as the “principle of least action,” we can then obtain equations that allow us to calculate how the system will behave.
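
In standard textbook notation (a general statement, not anything specific to this essay), the recipe reads

\[
S[q] = \int_{t_1}^{t_2} L\big(q(t), \dot q(t), t\big)\,dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0,
\]

so requiring the action to be stationary under virtual variations of the path yields the equations of motion.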

Accommodating quantum mechanics requires a second twist. A quantum system doesn’t only do one thing at a time; it does everything that’s possible, all at the same time. But properly weighted and summed up in the “path integral,” this collection of all possible behaviors again describes observations. And usually the optimal behavior is still the most likely one, which is why we don’t notice quantum effects in everyday life.
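
Schematically, and again only in standard notation, the quantum amplitude adds up a phase for every conceivable path,

\[
A \;\propto\; \int \mathcal{D}[q]\; e^{\,i S[q]/\hbar},
\]

and for macroscopic objects the phases of the non-optimal paths largely cancel, which is why the least-action path dominates what we actually observe.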

Optimization is not a new concept. It’s the scientific variant of Leibniz’s precocious hypothesis that we live in the “best of all possible worlds.” But while the idea dates back to the 18th century, it is still the most universal law of nature we know. Modern cosmology and particle physics both work by just specifying exactly in which way our world is “the best.” (Though I have to warn you that “just specifying exactly” requires a whole lot of mathematics.)

And if physics isn’t your thing, optimization also underlies natural selection and free market economies. Our social, political, and economic systems are examples of complex adaptive systems; they are collections of agents who make incremental modifications and react to feedback. By this, the agents optimize adaptation to certain criteria. Unlike in physics, we can’t calculate what these systems will do, and that’s not what we want—we just use them as tools to work to our ends. Exactly what each system optimizes is encoded in the feedback cycle. And there’s the rub.

It’s easy to take the optimization done by adaptive systems for granted. We’re so used to this happening, it seems almost unavoidable. But how well such systems work depends crucially on the setup of the feedback cycle. Modifications should neither be too random nor too directed, and the feedback must—implicitly or explicitly—evaluate the optimization criteria.

When we use adaptive systems to facilitate our living together, we therefore have to make sure they actually optimize what they’re supposed to. An economic system pervaded by monopolies, for example, doesn’t optimize supply to customers' demands. And a political system which gives agents insufficient information about their current situation and which does not allow them to extrapolate likely consequences of their actions doesn’t optimize the realization of its agents’ values.

Science, too, is an adaptive system. It should optimize knowledge discovery. But science, too, doesn’t miraculously self-optimize what we hope it does—the implementation of the feedback cycle requires careful design and monitoring. It’s a lesson that even scientists haven’t taken sufficiently to heart: If you get something from nothing, it’s most likely not what you wanted.

When we use optimization to organize our societies, we have to decide what we mean by “optimal.” There’s no invisible hand to take this responsibility off us. And that ought to be more widely known.

barbara_tversky's picture
Professor Emerita of Psychology, Stanford University; Professor of Psychology and Education, Columbia Teachers College; Author, Mind in Motion: How Action Shapes Thought

Many have chosen the cosmic, appropriately in these heady times of gravity waves and Einstein anniversaries. The secrets of the universe. But how did Einstein arrive at his cosmic revelations? Through his body, imagining being hurled into space at cosmic speed. Not through the equations that proved his theories nor through the words that explain them.

Imagining bodies moving in space. This is the very foundation of science, from the cosmic, bright stars and black holes and cold planets, to the tiny and tinier reverberating particles inside particles inside particles. The foundation of the arts, figures swirling or erect on a canvas, dancers leaping or motionless on a stage, musical notes ascending and descending, staccato or adagio. The foundation of sports and wars and games.

And the foundation of us. We are bodies moving in space. You approach a circle of friends, the circle widens to embrace you. I smile or wince and you feel my joy or my pain, perhaps smiling or wincing with me. Our most noble aspirations and emotions, and our most base, crave embodiment, actions of bodies in space, close or distant. Love, from which spring poetry and sacrifice, yearns to be close and to intertwine, lovers, mothers suckling infants, roughhousing, handshakes, and hugs.

That foundation, bodies moving in space, in the mind or on the earth, seeks symbolic expression in the world: rings and trophies, maps and sketches and words on pages, architectural models and musical scores, chess boards and game plans, objects that can be touched and treasured, scrutinized and transformed, stirring new thinking and new thoughts.

When I was seven we moved from the city to the country. There were stars then, half a hollow indigo sphere of sparkling stars encompassing me, everyone; the entire universe right before my eyes. Exhilarating. The speck that was me could be firmly located in that cosmos. In the return address on letters to my grandfather, I wrote my name, my house number, my street, my town, my state, my country, my continent, Planet Earth, the Milky Way, the Universe. A visible palpable route linking the body, my own body, to the cosmic. 

azra_raza_md's picture
Chan Soon-Shiong Professor of Medicine, Columbia University Medical Center; Author, The First Cell

One in two men and one in three women in the US will get cancer. Five decades after declaring war on the disease, we are still muddling our way rather blindly from the slash-poison-burn (surgery-chemo-radiation) strategies to newer approaches like targeted therapies, nanotechnology, and immunotherapies, which benefit only a handful of patients. Among other reasons for the slow progress, a major flaw is the study of cancer cells in isolation, which de-links the seed from its soil.

Stephen Paget was the first to propose in 1889 that “seeds” (cancer cells) preferentially grew in the hospitable “soil” (microenvironment) of select organs. The cross-talk between seed and soil hypothesized by Paget indeed proved to be the case whenever the question was examined (such as in the elegant studies of Hart and Fidler in the 1980s). Yet consistent research combining studies of the seed and the soil was not pursued, largely because, in the excitement generated by the molecular revolution and the discovery of oncogenes, the idea of creating animal models of human cancers appeared far more appealing. This led to the entire field of cancer research being hijacked by investigators studying animal models, xenografts, and tissue culture cell lines in patently artificial conditions. The result of this de-contextualized approach, which is akin to looking for car keys under the lamppost because of the light instead of where they were lost a mile away, is nothing short of a tragedy for our cancer patients, whose pain and suffering some of us witness and try to alleviate on a daily basis.

Many of my fellow researchers are probably rushing to attack me for making misleading statements and ignoring the great advances they have already accomplished in oncology using the very systems I am criticizing. I should know; I am still receiving hate mail for answering the Edge 2014 Question about what idea is ready for retirement by saying that mouse models as surrogates for developing cancer therapeutics need to go. I am sorry to remind them that we have failed to improve the outcome for the vast majority of our cancer patients. The point is that if strategies we have been using are not working, it is time to let them go. Or at least stop pretending that these mutated, contrived systems have anything to do with malignant diseases in humans. Both the funding agencies and leaders of the oncology field need to admit that the paradigms of the last several decades are not working. 

The concept I want to promote is Paget’s seed-and-soil approach to cancer, and I urge a serious examination of cancer cells as they exist in their natural habitats. Basic researchers want to know what they should replace their synthetic models with. My answer is that, first and foremost, they should work directly with clinical oncologists. If methods to recapitulate human cancers in vitro don’t exist, then we must prepare to study them directly in vivo. We have a number of effective drugs, but these usually help only subsets of patients. It would be a tremendous step forward if we could match the right drug to the right patient.

For example, in the study of leukemia, we could start by treating patients with a study drug while simultaneously studying freshly obtained pre- and post-therapy blood/bone marrow samples with pan-omics (genomics, proteomics, metabolomics, transcriptomics) technologies. A proportion of patients will respond and a proportion will fail. Compare the pan-omics results of the two groups and then design subsequent studies to enrich for subjects predicted to respond. It is likely that a few more patients will respond in round two. Repeat all the studies in successive clinical trials until identifiable reasons for response and non-response are determined.

If this exercise is undertaken for each drug that has shown efficacy, within the foreseeable future, we will not only have accurate ways of identifying which patients should be treated, we will be able to protect the patients unlikely to respond from receiving non-effective but toxic therapies. Besides, the pan-omics results are likely to identify novel targets for more precise drug development. In this strategy, each successive trial design would be informed by the previous one on the basis of data obtained from cancer cells as they existed in their natural soil.

Readers are probably wondering why such obvious studies based on patient samples are not being done already. Sadly, there is little incentive for basic researchers to change, not only because of the precious nature of human tissue and the difficulties of working with harassed, over-worked clinical oncologists (mice are easier to control) but also because of resources. I am aghast at funding agencies like the NIH, which continue to prefer funding grants that use an animal model or a cell line. After all, who makes the decisions at these agencies? As Gugu Mona, the South African writer and poet, has noted: “The right vision to a wrong person is like the right seed to wrong soil.”

jon_kleinberg's picture
Tisch University Professor of Computer Science, Cornell University

Three people stand in front of a portrait in a museum, each making a copy of it: an art student producing a replica in paint; a professional photographer taking a picture of it with an old film camera; and a tourist snapping a photo with a phone. Which one is not like the others? 

The art student is devoting much more time to the task, but there's a sense in which the tourist with the phone is the odd one out. Paint on canvas, like an exposed piece of film, is a purely physical representation: a chemical bloom on a receptive medium. There is no representation distinct from this physical embodiment. In contrast, the cell phone camera's representation of the picture is fundamentally numerical. To a first approximation, the phone's camera divides its field of view into a grid of tiny cells, and stores a set of numbers to record the intensity of the colors in each of the cells that it sees. These numbers are the representation; they are what get transmitted (in a compressed form) when the picture is sent to friends or posted online. 

The phone has produced a digital representation—a recording of an object using a finite set of symbols, endowed with meaning by a process for encoding and decoding the symbols. The technological world has embraced digital representations for almost every imaginable purpose—to record images, sounds, the measurements of sensors, the internal states of mechanical devices—and it has done so because digital representations offer two enormous advantages over physical ones. First, digital representations are transferrable: After the initial loss of fidelity in converting a physical scene to a list of numbers, this numerical version can be stored and transmitted with no further loss, forever. A physical image on canvas or film, in contrast, degrades at least a little essentially every time it's reproduced or even handled, creating an inexorable erosion of the information. Second, digital representations are manipulable: With an image represented by numbers, you can brighten it, sharpen it, or add visual effects to it simply using arithmetic on the numbers. 
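
A toy sketch in Python of what “arithmetic on the numbers” means; the image and the operations are made up purely for illustration.

```python
# A toy 4x4 grayscale "photo": each number records the brightness of one cell.
image = [
    [ 12,  40,  80, 120],
    [ 30,  60, 100, 140],
    [ 50,  90, 130, 170],
    [ 70, 110, 150, 200],
]

def brighten(img, amount):
    """Brightening is just arithmetic on the numbers, capped at the
    maximum intensity an 8-bit image can store (255)."""
    return [[min(255, pixel + amount) for pixel in row] for row in img]

def mirror(img):
    """Flipping the picture left to right merely rearranges the same
    numbers; nothing physical is touched and nothing is lost."""
    return [list(reversed(row)) for row in img]

for row in brighten(image, 40):
    print(row)
```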

Digital representations have been catalyzed by computers, but they are fundamentally about the symbols, not the technology that records them, and they were with us long before any of our current electronic devices. Musical notation, for example—the decision made centuries ago to encode compositions using a discrete set of notes—is a brilliant choice of digital representation, encoded manually with pen and paper. And it conferred the benefits we still expect today from going digital. Musical notation is transferrable: A piece by Mozart can be conveyed from one generation to the next with limited subjective disagreement over which pitches were intended. And musical notation is manipulable: We can transpose a piece of music, or analyze its harmonies using the principles of music theory, by working symbolically on the notes, without ever picking up an instrument to perform it. To be sure, the full experience of a piece of music isn't rendered digitally on the page; we don't know exactly what a Mozart sonata sounded like when originally performed by its composer. But the core is preserved in a way that would have been essentially impossible without the representation by an alphabet of discrete symbols. 
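
The same point in code, under the simplifying assumption that a melody is just a list of pitch names with octaves ignored: transposition is pure symbol manipulation.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(melody, semitones):
    """Shift every pitch by a fixed number of semitones, working
    purely on the symbols; no instrument is ever picked up."""
    return [NOTES[(NOTES.index(note) + semitones) % 12] for note in melody]

print(transpose(["E", "D", "C", "D", "E", "E", "E"], 5))
# ['A', 'G', 'F', 'G', 'A', 'A', 'A']: the same tune, a fourth higher
```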

Other activities, like sports, can also be divided on a digital-or-not axis. Baseball is particularly easy to follow on the radio because the action has a digital representation: a coded set of symbols that conveys the situation on the field. If you follow baseball, and you hear that the score is tied 3-to-3 in the bottom of the ninth inning, with one out, a 3-and-2 count, and a runner on second, you can feel the tension in the representation itself. It's a representation that's transferrable—it can communicate a finely resolved picture of what happened in a game to people far away from it in space or time—and it's manipulable—we can evaluate the advisability of various coaching decisions from the pure description alone. For comparison, sports like hockey and soccer lack a similarly expressive digital representation; you can happily listen to them on the radio, but you won't be able to reconstruct the action on the field with anything approaching the same fidelity. 

And it goes beyond any human construction; complex digital representations predate us by at least a billion years. With the discovery that a cell's protein content is encoded using three-letter words written in an alphabet of four genetic bases, the field of biology stumbled upon an ancient digital representation of remarkable sophistication and power. And we can check the design criteria: it's transferrable, since you need only have an accurate symbol-by-symbol copying mechanism in order to pass your protein content to your offspring; and it's manipulable, since evolution can operate directly on the symbols in the genome, rather than on the molecules they encode. 

We've reached a point now in the world where the thoughtful design of digital representations is becoming increasingly critical; they are the substrates on which large software systems and Internet platforms operate, and the outcomes we get will depend on the care we take in the construction. The algorithms powering these systems do not just encode pictures, videos, and text; they encode each of us as well. When one of these algorithms recommends a product, delivers a message, or makes a judgment, it's interacting not with you but with a digital representation of you. And so it becomes a central challenge for all of us to think deeply about what such a representation reflects, and what it leaves out. Because it's what the algorithm sees, or thinks it sees: a transferrable, manipulable copy of you, roaming across an ever-widening landscape of digital representations. 

george_dyson's picture
Science Historian; Author, Analogia

“The internal motion of water assumes one or other of two broadly distinguishable forms,” Osborne Reynolds reported to the Royal Society in 1883. “Either the elements of the fluid follow one another along lines of motion which lead in the most direct manner to their destination, or they eddy about in sinuous paths the most indirect possible.” 

Reynolds puzzled over how to define the point at which a moving fluid (or a fluid with an object moving through it) makes the transition from stable to unstable flow. He noted that the transition depends on the ratio between inertial forces (characterized by mass, distance, and velocity) and viscous forces (characterized by the “stickiness” of the fluid). When this ratio, now termed the Reynolds number, reaches a certain critical value, the fluid shifts from orderly, deterministic behavior to disorderly, probabilistic behavior resistant to description in full detail. The two regimes are known as laminar and turbulent flow. 
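
In the standard modern notation (which postdates Reynolds's own paper), the ratio is written

\[
\mathrm{Re} \;=\; \frac{\rho\, v\, L}{\mu} \;=\; \frac{v\, L}{\nu},
\]

where \(\rho\) is the fluid's density, \(v\) a characteristic velocity, \(L\) a characteristic length, \(\mu\) the dynamic viscosity, and \(\nu = \mu/\rho\) the kinematic viscosity. For flow in a smooth pipe, the laminar-to-turbulent transition is typically quoted at Reynolds numbers of roughly 2000 to 4000.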

The Reynolds number is both non-dimensional and universal, appearing consistently across a range of phenomena as diverse as blood pumping through a heart, a fish swimming through the sea, a missile flying through the air, burning gas flowing through a jet turbine, or weather systems flowing around the Earth. It is both descriptive in the sense of capturing the characteristics of an existing flow and predictive in the sense that the Reynolds number gives a reliable indication of which regime will dominate a projected flow. Thanks to the Reynolds number, we can tackle otherwise intractable problems in fluid dynamics with scale models in a scaled flow and make predictions that hold up.

The notion of a Reynolds number and critical velocity can also be applied to non-traditional domains: for instance, the flow of information in a computer network, where velocity is represented by bandwidth and viscosity is represented by processing speed of the individual nodes, or the flow of money in an economy, where velocity is represented by how fast funds are moving and viscosity is represented by transaction costs. 

Wherever things (including ideas) are either moving through a surrounding medium or flowing past a boundary of any kind, the notion of a Reynolds number can help characterize the relation between inertial forces and viscosity, giving a sense for what may happen next.

Why do things go smoothly for a while, and then suddenly they don’t?

 

seth_shostak_7's picture
Senior Astronomer, SETI Institute; Author, Confessions of an Alien Hunter

It’s a familiar peeve: The public doesn’t understand science or its workings. Society would be stronger and safer if the citizenry could only judge the reliability of climate change studies, the benefits of vaccines, or even the significance of the Higgs boson. 

This plaint sounds both worthy and familiar, but to lament the impoverished state of science literacy is to flog an expired equine. It’s easy to do, but neither novel nor helpful.

Of course, that’s not to say that we shouldn’t try. The teaching and popularization of science are The Lord’s work. But one can easily allow the perfect to become the enemy of the good. Rather than hope for a future in which everyone has a basic understanding of atomic theory or can evaluate the statistical significance of polls, I’m willing to aspire to a more conditional victory.

I would appreciate a populace able to make order-of-magnitude estimates.

This is a superbly useful skill, and one that can be acquired by young people with no more than a bit of practice. Learning the valences of the elements or the functions of cellular organelles, both topics in high school science, requires memorization. Estimating the approximate magnitude of things does not.

And mirabile dictu, no personal electronics are required. Indeed, gadgets are a hindrance. Ask a young person “how much does the Earth weigh?” and he’ll pull out his phone and look it up. The number will be just that—a number—arrived at without effort or the slightest insight.

But this is a number that can be approximated in one’s head with no more than middle school geometry, a sense of the approximate weight of a rock or a car, and about a minute’s thought.
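
One version of that minute's thought, written out in Python so the arithmetic is explicit; every input is a deliberately rough guess.

```python
# Back-of-the-envelope weight of the Earth, from round numbers only.
radius_m = 6.4e6                        # the Earth is a ball about 6,400 km in radius
volume_m3 = (4 / 3) * 3.14 * radius_m ** 3
density_kg_per_m3 = 5000                # rock is ~3x water; the iron core pulls the average up
mass_kg = volume_m3 * density_kg_per_m3
print(f"{mass_kg:.1e} kg")              # ~5.5e24 kg; the accepted value is about 6e24 kg
```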

The rough-and-ready answer might be wrong by a factor of two or three, but in most cases that will be adequate for the purpose at hand.

To scientists, such questions are known as Fermi problems, after the famous physicist who encouraged colleagues to make back-of-the-envelope calculations. Apparently, one such problem posed by Enrico Fermi himself—and reputedly used by Google when interviewing potential employees—is “how many piano tuners are there in Chicago?” 

Answering this requires making reasonable guesses about such things as the fraction of households having pianos, how long it takes to tune them, etc. But anyone can do that. No background in advanced mathematics is required, just the self-confidence to take on the problem.
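
Here is one way the chain of guesses might run, written as a short Python calculation; all the inputs are illustrative guesses rather than looked-up facts.

```python
# The classic Fermi chain for piano tuners in Chicago.
people               = 3_000_000   # rough population of Chicago
people_per_household = 2
pianos_per_household = 1 / 20      # one household in twenty owns a piano
tunings_per_year     = 1           # each piano tuned about once a year
tunings_per_day      = 4           # what one tuner manages, travel included
working_days         = 250

pianos = people / people_per_household * pianos_per_household
tuners = pianos * tunings_per_year / (tunings_per_day * working_days)
print(round(tuners))               # lands within a factor of a few of reality
```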

If we wish the public to be able to make smart choices about such issues as the relative dangers of football versus driving, how long it will take to burn away the entire Amazon basin, or whether it’s safer to inoculate your child or not, we are fabulizing if we think that hearing the answers in a news report will suffice. Just as skills are developed by practice, not by reading, so too will an ability (and a readiness) to make an order-of-magnitude estimate produce understanding that is deep and long-lasting.

Aside from its utility, this skill rewards its practitioners with personal gratification. It’s an everyday demonstration that quantitative knowledge about the world is not simply handed down from on-high. Observation, simplifying assumptions (“let’s approximate a chicken by a sphere…”), and the simplest calculation can get us close to the truth. It is not solely the province of experts with tweed jackets and a wall covered with sheepskins.

School teachers have long tried to promote an interest in science by maintaining that curiosity and logical thinking are characteristics of us all. “Everyone’s a scientist.” But this felicitous idea is generally followed up with course curricula that are warehouses of facts. Being able to make order-of-magnitude computations with nothing more than one’s wits would be far more satisfying, because then you would know, not because someone else told you, but because you worked it out yourself.

We have inadvertently let a device in everyone’s back pocket become the back of the book for any question requiring a quantitative answer. It needn’t be so, and it shouldn’t.

john_c_mather's picture
Nobel Prize in Physics; Senior Astrophysicist, Observational Cosmology Laboratory, NASA's Goddard Space Flight Center; Coauthor (with John Boslough), The Very First Light

The name “Big Bang” has been misleading scientists, philosophers, and the general public since Sir Fred Hoyle said it on radio in 1949. It conjures up the image of a giant firecracker, an ordinary explosion happening at a place and a time, a collection of material suddenly beginning to expand into the surrounding empty space. But this is so exactly opposite to what astronomers have observed that it is shocking we still use the name, and it is not the least bit surprising that some people object to it. Einstein didn’t like it at first but became convinced. Hoyle never liked it at all. People might like it better if they knew what it meant.

What astronomers actually have observed is that distant galaxies all appear to be receding from us, with a speed roughly proportional to their distance. We’ve known this since 1929, when Edwin Hubble drew his famous plot. From this we conclude a few simple things. First, we can get the approximate age of the universe by dividing the distance by the speed; the current value is around 14 billion years. The second and more striking conclusion is that there is no center of this expansion, even though we seem to be at the center. We can imagine what an astronomer would see living in another distant galaxy, and she would also conclude that the universe appears to be receding from her own location. The upshot is that there is no sign of a center of the universe. So much for the “giant firecracker.” A third conclusion is that there is no sign of an edge of the universe, no place where we run out of either matter or space. This is what the ancient Greeks recognized as infinite, unbounded, without limits. This is also the exact opposite of a giant firecracker, for which there is a moving boundary between the space filled with debris, and the space outside that. The actual universe appears to be infinite now, and if so it has probably always been infinite. It’s often said that the whole universe we can now observe was once compressed into a volume the size of a golf ball, but we should imagine that the golf ball is only a tiny piece of a universe that was infinite even then. The unending infinite universe is expanding into itself.
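
In round numbers, the division goes like this (using the commonly quoted Hubble constant of roughly 70 km/s per megaparsec, a figure not given in the essay itself): a galaxy one megaparsec away, about \(3.1\times 10^{19}\) km, recedes at about 70 km/s, so

\[
t \;\approx\; \frac{d}{v}
\;=\; \frac{3.1\times 10^{19}\ \mathrm{km}}{70\ \mathrm{km/s}}
\;\approx\; 4.4\times 10^{17}\ \mathrm{s}
\;\approx\; 1.4\times 10^{10}\ \text{years},
\]

and since speed is proportional to distance, every galaxy gives the same answer: about 14 billion years.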

There’s another way in which the giant firecracker idea misleads us, because even scientists often talk about the “universe springing into existence.” Well, it didn’t, as far as we can tell. The opposite is true. There is no first moment of time, just as there is no smallest positive number. In physics we have equations and laws of nature that describe how one situation changes into another, but we have no equations that show how true nothingness turns into somethingness. So, since the universe did not spring into existence, it has always existed, though perhaps not in its current form. That’s true, even though the apparent age of the universe is not infinite, but only very large. And, even though there’s no first moment of time, we can still measure the age.

There’s still plenty of mystery to go around. What are those equations of the early universe which might describe what came before the atoms we see today? We’re pretty confident that we can imagine times in the early universe when temperatures and pressures were so high that atoms would have been ripped apart into the particles we have manufactured at our Large Hadron Collider. We have a Standard Model of cosmology and we have a Standard Model of particles. But the mysteries include: Why is there an asymmetry between matter and antimatter, such that the whole observable universe is made of matter? What is dark matter? What is dark energy? What came before the expansion and made it happen, if anything did? We’ve got the idea of cosmic inflation, which might be right. What are space and time themselves? Einstein’s general relativity tells us how they are curved but scientists suspect that this is not the whole story, because of quantum mechanics and especially quantum entanglement.

Stay tuned! There are some more Nobel prizes to be earned.

jim_holt's picture
Author and Essayist, New York Times, New Yorker, Slate; Author, Why Does the World Exist?

Science is supposed to be about an objective world. Yet our observations are inherently subjective, made from a particular frame of reference, point of view. How do we get beyond this subjectivity to see the world as it truly is?    

Through the idea of invariance. To have a chance of being objective, our theory of the world must at least be intersubjectively valid: It must take the same form for all observers, regardless of where they happen to be located, or when they happen to be doing their experiment, or how they happen to be moving, or how their lab happens to be oriented in space (or whether they are male or female, or Martian or Earthling, or…). Only by comparing observations from all possible perspectives can we distinguish what is real from what is mere appearance or projection.    

Invariance is an idea of enormous power. In mathematics, it gives rise to the beauties of group theory and Galois theory, since the shifts in perspective that leave something invariant form an algebraic structure known as a "group."  

In physics, as Emmy Noether showed us with her beautiful theorem, invariance turns out to entail the conservation of energy and other bedrock conservation principles—"a fact," noted Richard Feynman, "that most physicists still find somewhat staggering."    

And in the mind of Albert Einstein, the idea of invariance led first to E = mc², and then to the geometrization of gravity.

So why aren't we hearing constantly about Einstein's theory of invariance? Well, "invariant theory" is what he later said he wished he had called it. And that's what it should have been called, since invariance is its very essence. The speed of light, the laws of physics are the same for all observers. They're objective, absolute—invariant. Simultaneity is relative, unreal.  
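
A standard textbook example of such an invariant (not Holt's wording): observers in relative motion disagree about the time \(\Delta t\) and the distance \(\Delta x\) between two events, but they all compute the same spacetime interval,

\[
s^2 \;=\; c^2\,\Delta t^2 \;-\; \Delta x^2 \;-\; \Delta y^2 \;-\; \Delta z^2 ,
\]

which is the objective quantity behind the frame-dependent appearances.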

But no. Einstein had to go and talk about the "principle of relativity." So "relativity"—and not its opposite, "invariance"—is what his revolutionary theory ended up getting labeled. Einstein's "greatest blunder" was not (as he believed) the cosmological constant after all. Rather, it was a blunder of branding—one that has confused the public for over a century now and empowered a rum lot of moral relativists and lit-crit Nietzscheans.   

Thanks, Einstein.

chiara_marletto's picture
Junior Research Fellow at Wolfson College and Postdoctoral Research Associate at the Materials Department, University of Oxford; Author, The Science of Can and Can't

The concept of "impossible" underlies all fundamental theories of physics; yet its exact meaning is little known. The impossibility of cloning, or copying, certain sets of states is at the heart of quantum theory and of quantum information. The impossibility of exceeding the speed of light is a fundamental feature of the geometry of spacetime in relativity. The impossibility of constructing perpetual motion machines is the core idea of thermodynamics: No machine can exist that produces energy without consuming any; and the second law demands the impossibility of machines converting "heat" (such as energy stored in the sea at room temperature) completely into "work" (energy that is useful, in that it can be used, for instance, to power a mill).

But what do we mean, exactly, by impossible?

The concept of impossible in physics is deep and has beautiful implications; it sharpens the everyday meaning of the word "impossible," giving to that airy nothing a solid, firm, deep connotation, rooted in the laws of physics. That something is impossible means that the laws of physics set a fundamental, draconian limit to how accurately it can be brought about.

For instance, one can construct real machines approximating a perpetual motion machine to some degree, but there is a fundamental limit to how well that can be done. This is because, since energy is conserved overall, the energy supplied by any such machine must come from somewhere else in the universe; and since there are no infinite sources of energy, perpetual motion is impossible: All finite sources eventually run out.

That something is impossible is deeply different from its not happening at all under the particular laws of motion and initial conditions of our universe. For example, it may be that, under the actual laws and initial conditions, an ice statue of the pirate Barbarossa will never arise in the whole history of our universe, and yet that statue need not be impossible. Unlike for perpetual motion machines, it might still be possible to create arbitrarily accurate approximate copies of such a statue under different initial conditions. The impossibilities I mentioned above, instead, are categorical: A perpetual motion machine cannot be brought about to arbitrarily high accuracy, under any laws of motion and initial conditions.

The exact physical meaning of the word "impossible" is illuminating also because it provides a deeper understanding of what is possible.

That something is possible means that the laws of physics set no limit to how well it can be approximated. For example, thermodynamics says that a heat engine is possible: We can come up with better and better ways of approaching the ideal behavior of the ideal engine, with higher and higher efficiencies, with no limitation to how well that can be achieved. Each realization will have a different design; they will employ different, ever improving technologies; and there is no limit to how much any given real heat engine can be improved upon.
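
For concreteness, the ideal behavior being approached is set by the Carnot bound, efficiency = 1 - T_cold / T_hot; a one-line calculation with illustrative temperatures of my own choosing, not Marletto's:

    # Carnot efficiency: the ideal that real heat engines can approach ever more
    # closely but never exceed.
    def carnot_efficiency(t_hot, t_cold):
        # Temperatures in kelvin.
        return 1.0 - t_cold / t_hot

    # A turbine running between roughly 800 K steam and a 300 K environment.
    print(carnot_efficiency(800.0, 300.0))   # 0.625: at most 62.5% of the heat can become work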

So, once we know what is impossible under the laws of physics, we are left with plenty of room for our ideas to try and create approximations to things that are possible. This opening up of possibilities is the wonderful, unexpected implication of contemplating the fundamental physical meaning of "impossible." May it be as widely known as it is physically possible. 

daniel_rockmore's picture
Professor of Mathematics, William H. Neukom 1964 Distinguished Professor of Computational Science, Director of the Neukom Institute for Computational Science, Dartmouth College

The history of science is littered with “thought experiments,” a term popularized by Albert Einstein (“gedankenexperiment”) for an imagined scenario able to sharply articulate the crux of some intellectual puzzle, and in so doing excite some deep thinking on the way to a solution or related discovery. Among the most famous are Einstein’s tale of chasing a light beam that led him to a theory of special relativity and Erwin Schrödinger’s story of the poor cat, stuck in a fiendishly designed quantum mechanical box, forever half-alive and half-dead, that highlighted the complex interactions between wave mechanics and measurement.

“The Trolley Problem” is another thought experiment, one that arose in moral philosophy. There are many versions, but here is one: A trolley is rolling down the tracks and reaches a branchpoint. To the left, one person is trapped on the tracks, and to the right, five people. You can throw a switch that diverts the trolley from the track with the five to the track with the one. Do you? The trolley can’t brake. What if we know more about the people on the tracks? Maybe the one is a child and the five are elderly? Maybe the one is a parent and the others are single? How do all these different scenarios change things? What matters? What are you valuing and why?

It’s an interesting thought experiment, but these days it’s more than that. As we increasingly offload our decisions to machines and the software that manages them, developers and engineers increasingly will be confronted with having to encode—and thus directly code—important and, potentially, life and death decision making into machines. Decision making always comes with a value system, a “utility function,” whereby we do one thing or another because one pathway reflects a greater value for the outcome than the other. Sometimes the value might seem obvious or trivial—this blender is recommended to you over that one for the probability that you will purchase it, based on various historical data; this pair of shoes is a more likely purchase (or perhaps not the most likely, but worth a shot because it is kind of expensive—this gets us to probabilistic calculations and expected returns) than another. This song versus that song, etc.
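
A toy version of that expected-return calculation (the items, probabilities, and prices are invented purely for illustration, not drawn from any real recommender):

    # Hypothetical catalog: estimated purchase probability and price for each item.
    items = {
        "blender A": {"p_purchase": 0.30, "price": 40.0},
        "blender B": {"p_purchase": 0.25, "price": 60.0},
        "shoes":     {"p_purchase": 0.10, "price": 220.0},
    }

    def expected_return(item):
        # The utility assigned to recommending an item: chance of a sale times its price.
        return item["p_purchase"] * item["price"]

    # Recommend whatever maximizes expected return; a pricey long shot can beat
    # a cheap near-certainty, which is the value judgment baked into the code.
    best = max(items, key=lambda name: expected_return(items[name]))
    print(best, expected_return(items[best]))   # shoes 22.0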

But sometimes there is more at stake: this news or that news? More generally, this piece of information or that piece of information on a given subject? The values embedded in the program may start shaping your values and with that, society’s. Those are some pretty high stakes. The trolley problem shows us that the value systems that pervade programming can literally be a matter of life and death: Soon we will have driverless trolleys, driverless cars, and driverless trucks. Shit happens and choices need to be made: the teenager on the bike in the breakdown lane or the Fortune 500 CEO and his assistant in the stopped car ahead? What does your algorithm do and why?

We will build driverless cars and they will come with a moral compass—literally. The same will be true of our robot companions. They’ll have values and will necessarily be moral machines and ethical automata, whose morals and ethics are engineered by us. “The Trolley Problem” is a gedankenexperiment for our age, shining a bright light on the complexities of engineering our new world of humans and machines. 

eric_topol's picture
Professor of Genomics, The Scripps Translational Science Institute; Author, The Patient Will See You Now

The leading killer diseases—heart and cancer—don’t follow old dogma.

The two leading causes of death are heart attacks and cancer. For much too long we’ve had the wrong concept about the natural history of these conditions.

There are common threads for these two diseases: People generally don’t die of cholesterol buildup in their arteries (atherosclerosis) unless they have a heart attack or stroke; similarly, cancer rarely causes death unless it metastasizes.

For decades, it was believed that cholesterol buildup inside an artery supplying the heart muscle followed a slow, progressive development. As the plaque grew bigger and bigger, so the theory went, it eventually would clog up the artery and cause interruption of blood supply—a heart attack. That turned out not to be true at all, since it’s the minor cholesterol narrowings that are, by far, the most common precursors to a heart attack. The heart attack results from a blood clot as the body tries to seal a sudden crack in the wall of the artery. The crack is an outgrowth of inflammation. It doesn’t cause any symptoms until a blood clot forms and heart muscle is starved for oxygen. That’s why people who have heart attacks often have no warning symptoms or can have a perfect exercise stress test but keel over days later.

There’s a second concept regarding heart attacks that should be more widely known. The media often mislabel a sudden death or a heart rhythm problem as a “heart attack.” That’s wrong. A heart attack is defined by loss of blood supply to the heart. If that leads to a chaotic heart rhythm it can result in death. But most people with heart attacks have chest pain and other symptoms. Separately, a person can have an electrical heart rhythm event without anything to do with atherosclerosis and that can cause death. That is not a heart attack.

With a better understanding now of what a heart attack is and what underlies it, cancer is following suit. The longstanding dogma was that cancer slowly progresses over years until it starts to spread throughout the body. But now it has been shown that metastasis can indeed occur with early lesions, defying the linear model of cancer’s natural history. We knew that mammography picks up early breast cancer, for example, but that has not had a meaningful impact on saving lives. So much for the simplistic notion of how cancer develops, perhaps one of the reasons it has been so difficult to treat.

With the global health burden so largely explained by these two killer diseases, it is vital we raise awareness and reboot our preventive strategies in the future.  

michael_gazzaniga's picture
Neuroscientist, UC Santa Barbara; Author, The Consciousness Instinct

For all our scientific efforts, there remains a gap, the Schnitt. The gap between the quantum and the classical, between the living and the non-living, between the mind and the brain. How on earth is science going to close this gap, what the physicist Werner Heisenberg called the Schnitt, and what the theoretical biologist Howard Pattee calls the “epistemic cut”? Some maintain the gaps only reflect a current failure of knowledge. Others think the gaps will never be closed, that they are in fact un-closeable.

Pattee, who has been working at the problem for fifty years, believes he has a handle on it. I think he does too. The clue to grasping his idea goes all the way back to understanding the difference between non-living and living systems. To understand this difference, biologists need to fully embrace the gift from modern physics, the idea of complementarity.

Accepting this unexpected gift is not easy. Einstein himself wouldn’t accept it until Niels Bohr forced him to. The discovery of the quantum world meant the classical world of physics had a new partner that had to be considered when explaining stuff. Suddenly, the world of reversible time, the notion of dogmatic determinism, and the aspiration to a grand theory of the universe were on the rocks. Bohr’s idea, the principle of complementarity, maintains that quantum objects have complementary (paired) properties, only one of which can be exhibited and measured and, thus, known, at a given point in time. That’s a big blow to a scientist, and to physicists it is perhaps the most devastating. As Robert Rosen pointed out, “Physics strives, at least, to restrict itself to "objectivities." It thus presumes a rigid separation between what is objective, and falls directly within its precincts, and what is not.… Some believe that whatever is outside is so because of removable and impermanent technical issues of formulation…. Others believe the separation is absolute and irrevocable.”

This is where Howard Pattee picks up the story. Pattee argues that complementarity demands that life is to be seen as a layered system in which each layer has, and indeed demands, its own vocabulary. On one side of the Schnitt, there is the firing of neurons. On the other, there are symbols, the representations of the physical that also have a physical reality. Only one side of the Schnitt can be evaluated at a time, though both are real and physical and tangible. Here we have Bohr’s complementarity on a larger scale, two modes of mutually exclusive description making up a single system from the get-go.

There is no spook in the system introduced here, and Pattee calls upon the venerable mechanisms of DNA to make his point. DNA is a primeval example of symbolic information (the DNA code) controlling material function (the action of the enzymes), just as John von Neumann had predicted must exist for evolving, self-reproducing automatons. However, it is also the old chicken and egg problem, with fancier terms: Without enzymes to break apart the DNA strands, DNA is simply an inert message that cannot be replicated, transcribed or translated. Yet, without DNA, there would be no enzymes!

Staring us right in the face is a phenomenal idea. A hunk of molecules, which have been shaped by natural selection, makes matter reproducible. These molecules, which can be stored and recalled, are a symbol, a code for information that describes how to build a new unit of life. Yet those same molecules also physically constrain the building process. DNA is both a talker and a doer, an erudite outdoorsman. There are two realities to this thing. Just like light is a wave and a particle at the same time. Information and construction, structure and function, are irreducible properties of the same physical object that exist in different layers with different protocols.

While I find this a tricky idea on the one hand, it is utterly simple and elegant on the other. Pattee has given us a schema and a way to think about how, using nothing but physics (including the principle of complementarity), life comes out of non-living stuff. The schema, way up the evolutionary scale, also accounts for how the subjective mind can emerge from objective neurons. Pattee suggests that instead of approaching conscious cognition as either information processing or neural dynamics, there is a third approach available. Consciousness is not reducible to one or the other. Both should be kept on the table. They are complementary properties of the same system. I say hats off to Pattee. We brain scientists have our work cut out for us.

roger_highfield's picture
Director, External Affairs, Science Museum Group; Co-author (with Martin Nowak), SuperCooperators

You might be forgiven for thinking that this is so blindingly obvious that it is hardly worth stating, let alone arguing, that it should become a popular meme. After all, "pre-" means “before,” so surely you should be able to take action in the wake of a prediction to change your future—like buying an umbrella when a deluge is forecast, for example.

Weather forecasting is indeed a good example of an actionable prediction, a beautiful marriage of real-time data from satellites and other sensors with modeling. But when you shift your gaze away from the physical sciences towards medicine, these predictions are harder to discern.

We are a long way from doctors being able to make routine and reliable actionable predictions about individual patients—which treatments will help them the most, which drugs will cause them the fewest side effects, and so on.

The answers to many simple questions remain frustratingly elusive. Should I take an antibiotic for that sore throat? Will immunotherapy work for me? Which of that vast list of possible drug side effects should I take seriously?  What is the best diet for me? If only we could predict the answers for a particular patient as reliably as meteorologists predict tomorrow’s weather.

Today there is much talk of Big Data providing all the answers. In biology, for example, data from the human genome project once kindled widespread hope that if we sequenced a patient’s DNA we would get a vivid glimpse of their destiny.

Despite the proliferation of genomes, epigenomes, proteomes, and transcriptomes, that crystal ball looks cloudier than at first thought, and the original dream of personalized medicine in genomics has been downgraded to precision medicine, where we assume that a given person will respond in a similar way to a previously-studied group of genetically similar people.

Blind gathering of Big Data in biology continues apace, however, emphasizing transformational technologies such as machine learning—artificial neural networks, for instance—to find meaningful patterns in all the data. But no matter their "depth" and sophistication, neural nets merely fit curves to the available data. They may be capable of interpolation, but extrapolation beyond their training domain can be fraught.
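
A tiny numerical sketch of that limitation, with an ordinary polynomial fit standing in for the neural network (the data and the degree are my own choices, purely for illustration):

    import numpy as np

    # Training data: a clean sine wave sampled only on the interval [0, 3].
    x_train = np.linspace(0, 3, 30)
    y_train = np.sin(x_train)

    # Fit a flexible curve (a degree-5 polynomial) to the training domain.
    model = np.poly1d(np.polyfit(x_train, y_train, deg=5))

    # Inside the training domain the fit interpolates almost perfectly...
    print(abs(model(1.5) - np.sin(1.5)))   # tiny error
    # ...but well outside it, the same curve can be wildly wrong.
    print(abs(model(6.0) - np.sin(6.0)))   # typically a large error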

The quantity of data is not the whole story either. We are gathering a lot, but are we gathering the right data and of sufficient quality? Can we discern a significant signal in a thicket of false correlations? Given that bodies are dynamic and ever-changing, can data snapshots really capture the full complexities of life?

To make true actionable predictions in medicine, we also need a step change in mathematical modeling in biology, which is relatively primitive compared to physics and for entirely understandable reasons: Cells are hugely complicated, let alone organs and bodies.    

We need to promote interest in complex systems so we can truly predict the future of an individual patient, rather than infer what might be in store for them from earlier population studies of who responded to a new treatment, who did not, and who suffered serious side effects. We need deeper insights, not least to stop making diagnoses post mortem, and to prevent tens of thousands of people from perishing at the hands of doctors every year through iatrogenic effects.

Ultimately we need better modeling based on mechanistic understanding in medicine so that, one day, your doctor can carry out timely experiments on a digital Doppelgänger before she experiments on you. Modern medicine needs more actionable predictions.

gerald_smallberg's picture
Practicing Neurologist, New York City; Playwright, Off-Off Broadway Productions, Charter Members; The Gold Ring

This concept comes from epidemiology, the field of medicine comprising methods used to find the causes of health outcomes and diseases in populations. The meaning and relevance of this term require some historical background. 

When I was a medical student almost fifty years ago, I learned this concept from Dr. Alvan Feinstein, a professor of both medicine and epidemiology at Yale. He taught a course in clinical diagnosis that would prepare us for seeing patients in the hospital. A strict and exacting teacher, he demanded that we learn to take a detailed and carefully crafted patient history combined with a meticulous physical examination, both of which he believed crucial to the art and science of medicine. Dr. Feinstein, as a cardiologist, had helped delineate the criteria that defined rheumatic heart disease. The critical role that rheumatic fever—caused by streptococcal infection—played in its pathogenesis could then be firmly established, so that early treatment of this infection became the standard of care to prevent this debilitating heart condition.

Feinstein's clinical research inspired his life-long study of the natural history of disease with the hope this would lead to better diagnosis and therapy. At the time he was my teacher, his research was focused on the epidemiology of lung cancer.  He was trying to analyze the optimal use of cancer screening studies, which was then and still is an issue that bedevils medicine.

Feinstein believed in making sure that the medical record contained the best data possible so that when it was reviewed retrospectively, unknown variables that were not appreciated when they were first obtained could be used to better classify patients into appropriate categories for ongoing and future clinical studies. He stressed to us that medical students had the best chance of recording this vital information in their case history as we were the most inclusive and least biased insofar as which data should be included. Every other patient note composed at each rung of the medical ladder, from the intern to the attending physician, was progressively streamlined and abbreviated to reflect the impression and conclusions that had already been formed. He read our reports fastidiously and underlined in red what he liked or didn't like, in the rigorous manner befitting his role as a teacher, journal editor, and clinical researcher. Along with the standard chief complaint, the patient's answer to the question, "What is bothering you, or why are you here?" he also wanted to know why the patient had decided to see a doctor at that particular time. It was the latter response that led Feinstein to coin the term iatrotropic stimulus, a phrase that combined the Greek iatros, or physician, with trope, “to turn.” In other words: What led the patient to seek help that day as opposed to another time when he may have been experiencing the same complaints? Perhaps the chronic cough that had been ignored was now associated with a fleck of blood or had become of greater concern because a friend or an acquaintance had just been diagnosed with cancer. In Feinstein’s view, this question would unleash information that could provide not only further epidemiological insights, but also would be invaluable in better understanding the fears, concerns, and motivation that drove the patient to seek medical care. Although well grounded in science and having studied mathematics before becoming a physician, he believed "clinical judgment depended not on a knowledge of causes, mechanisms, or names for diseases, but on a knowledge of patients." 

The iatrotropic stimulus never found its rightful place in the medical literature. After his course, I never used this term in any of my subsequent reports. Yet its clinical importance forever left its mark on me. It formed the back story, or mise-en-scène, of my interactions with my patients whenever possible. As we get more and more data indelibly inscribed in an electronic record derived from encoded questionnaires and algorithm-generated inputs and outputs that yield problem lists and diagnoses in our striving for more evidence-based decision making, the human being can get lost in the fog of information. In the end, however, it is the relationship between the patient and the treating physician that is still the most important. Together they must deal with complexity and uncertainty, the perfect Petri dish for incubating the fear and anxiety that, despite all technological progress, will remain the lot of the human condition.

There are always problems to be addressed that are not limited to medical conditions but that happen to us as members of the larger society we inhabit. As we each try to confront these problems, the iatrotropic stimulus is an important concept to know—the "why now?" is a question for us to continually keep in mind.

lisa_randall's picture
Physicist, Harvard University; Author, Dark Matter and the Dinosaurs

People can disagree about many deep and fundamental questions, but we are all pretty confident that when we sit on a hard wooden chair it will support us, and that when we take a breath on the surface of the Earth we will take in the oxygen we need to survive.

Yet, ultimately that chair is made of molecules, which are made of atoms composed of nuclei—protons and neutrons—with electrons orbiting them with probability functions in agreement with quantum mechanical calculations. Those electrons are on average very far from the nuclei, which means that, viewed as matter, atoms are mostly empty space. And those protons and neutrons are made of quarks bound together by the dynamics of the strong force.

No one knew about quarks until the second half of the 20th century. And despite the wisdom of the ancient Greeks, no one really knew about atoms either until at best a couple of hundred years ago. And of course air contains oxygen molecules and many others, too. Yet people had no trouble breathing air before this was known.

How is that possible? We all work in terms of effective theories. We find descriptions that match what we actually see, interact with, and measure. The fact that a more fundamental description can underlie what we observe is pretty much irrelevant until we have access to effects that differentiate that description. A solid entity made from wood is a pretty good description of a chair when we go about our daily lives. It’s only when we want to know more about its fundamental nature that we bother to change our description. Really it’s only when we have the technological tools to study those questions that we can test whether our ideas about its underlying nature are correct.

Effective theory is a valuable concept when we ask how scientific theories advance, and what we mean when we say something is right or wrong. Newton’s laws work extremely well. They are sufficient to devise the path by which we can send a satellite to the far reaches of the Solar System and to construct a bridge that won’t collapse. Yet we know quantum mechanics and relativity are the deeper underlying theories. Newton’s laws are approximations that work at relatively low speeds and for large macroscopic objects. What’s more, an effective theory tells us precisely its limitations—the conditions and values of parameters for which the theory breaks down. The laws of the effective theory succeed until we reach its limitations, when these assumptions are no longer true or our measurements and requirements become sufficiently precise.
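
A quick numerical check of that claim (constants rounded, speeds chosen only for illustration): the Newtonian kinetic energy agrees with the relativistic one to extraordinary precision at everyday speeds, and the effective theory breaks down exactly where its low-speed assumption fails.

    # Compare Newtonian kinetic energy (1/2 m v^2) with the relativistic value
    # m c^2 (gamma - 1) for a 1 kg mass at increasing speeds.
    c = 299_792_458.0   # speed of light in m/s

    def newtonian(m, v):
        return 0.5 * m * v ** 2

    def relativistic(m, v):
        gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
        return m * c ** 2 * (gamma - 1.0)

    for v in [30.0, 0.1 * c, 0.97 * c]:   # highway speed, 10% of c, 97% of c
        n, r = newtonian(1.0, v), relativistic(1.0, v)
        print(v, abs(r - n) / r)          # the relative error of the effective theory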

This notion of effective theory extends beyond the realm of science. It’s really how we approach the world in all its aspects. We can’t possibly keep track of all information simultaneously. We focus on what is accessible and relate those quantities. We use a map that has the scale we need. It’s pointless to know all the small streets around you when you’re barreling down a highway.

This notion is practical and valuable. But we should be wary since it also makes us miss things in the world—and in science. What’s obvious is what’s in our effective theory. What lies beyond might be the more fundamental truth. Sometimes it’s only a little prodding that takes us to a richer, more inclusive understanding. Getting outside our comfort zone is how science and ideas advance and what ultimately yields a richer understanding of the world.

thomas_a_bass's picture
Professor of English and Journalism, State University of New York, Albany; Author of The Eudaemonic Pie and The Spy Who Loved US

Our modern world of digitized bits moving with ever-increasing density and speed through a skein of channels resembling an electronic nervous system is built on information. The theory of information was born full-blown from the head of Claude Shannon in a seminal paper published in 1948. Shannon provided the means—but not the meaning—for this remarkable feat of engineering. Now, as we are coming to realize with increasing urgency, we have to put the meaning back in the message.

Information theory has given us big electronic pipes, data compression, and wonderful applications for distinguishing signal from noise. Internet traffic is ballooning into the realm of zettabytes—250 billion DVDs-worth of data—but the theory underlying these advances provides no way to get from information to knowledge. Awash in propaganda, conspiracy theories, and other signs of information sickness, we are giving way to the urge to exit from modernity itself. What is it about information that is making us sick? Its saturation and virulence? Its vertiginous speed? Its inability to distinguish fact from fiction? Its embrace of novelty, celebrity, distraction?

Information theory, as defined by Shannon in his paper on “A Mathematical Theory of Communication” (which was republished in book form the following year as “The Mathematical Theory of Communication”), deals with getting signals transmitted from information sources to receivers. “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point,” wrote Shannon in the second paragraph of his paper. “Frequently the messages have meaning, that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.”

Shannon’s theory has proved remarkably fruitful for signal processing and other aspects of a modern world bathed in bits (Shannon was the first to use this word in print), but he had nothing to say about messages that “frequently ... have meaning.” As Marshall McLuhan said of Shannon, “Without an understanding of causality, there can be no theory of communication. What passes as information theory today is not communication at all, but merely transportation.”

Take, for example, the comedian George Carlin’s Hippy Dippy Weatherman who announces, “Tonight’s forecast, dark. Continued dark tonight. Turning to partly light in the morning.” Since this message conveys information already known to us in advance, the amount of information it carries, according to Shannon, is zero. But according to McLuhan—and anyone who has watched George Carlin pace the stage as he delivered Al Sleet’s weather report—the message contains a raft of information. The audience guffaws at mediated pomposity, the unreliability of prediction, and the dark future, which, if we survive it, might possibly turn partly light by morning.
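
In Shannon's terms, the information carried by a message is the entropy of the distribution it is drawn from, and a forecast whose outcome is certain in advance carries none of it. A minimal calculation (the probabilities are mine, purely for illustration):

    from math import log2

    def entropy(probs):
        # Shannon entropy in bits: H = sum(p * log2(1/p)) over the possible outcomes.
        return sum(p * log2(1 / p) for p in probs if p > 0)

    print(entropy([1.0]))         # "Tonight: dark," already certain -> 0.0 bits
    print(entropy([0.5, 0.5]))    # a genuinely uncertain yes/no forecast -> 1.0 bit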

Information theory has not budged since Shannon conceived it in 1948, but the pathologies surrounding information have begun to metastasize. We are overwhelmed by increasing flows of information, while our capacity for understanding this information remains as primitive as ever. Instead of interpreting this information, teasing knowledge from data, we are shrugging our shoulders and saying, “I dunno. It’s a wash. You have a lot of information on your side. I have a lot on my side. (Whether it’s verified information or disinformation or lies—who cares? There’s a lot of it.) So let’s raise our arms into a big cosmic shrug.”

Addressing the “common angst” of our age, Jared Bilby, co-chair of the International Center for Information Ethics, describes “the fallout of information pathologies following information saturation, dissolution, and overload.” The theory for understanding the causes of information sickness is being put together by people like Luciano Floridi, a professor in the philosophy and ethics of information at Oxford. According to Floridi, we are in the process of transforming ourselves into “informational organisms (inforgs), who share with other kinds of agents a global environment, ultimately made of information, the infosphere….” In this global environment of information—which is related, of course, to the other environment made of stuff—the task is to find the meaning in the message. Tonight’s forecast is dark, to be sure, but we are hoping for signs of light by morning. 

cesar_hidalgo's picture
Associate Professor, MIT Media Lab; Author, Why Information Grows

In physics we say a system is in a critical state when it is ripe for a phase transition. Consider water turning into ice, or a cloud that is pregnant with rain. Both of these are examples of physical systems in a critical state. 

The dynamics of criticality, however, are not very intuitive. Consider the abruptness of freezing water. For an outside observer, there is no difference between cold water and water that is just about to freeze. This is because water that is just about to freeze is still liquid. Yet, microscopically, cold water and water that is about to freeze are not the same.

When close to freezing, water is populated by gazillions of tiny ice crystals, crystals that are so small that water remains liquid. But this is water in a critical state, a state in which any additional freezing will result in these crystals touching each other, generating the solid mesh we know as ice. Yet, the ice crystals that formed during the transition are infinitesimal. They are just the last straw. So, freezing cannot be considered the result of these last crystals. They only represent the instability needed to trigger the transition; the real cause of the transition is the criticality of the state.
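
A standard toy model of this kind of abruptness is percolation (my example, not Hidalgo's): freeze a random fraction of sites on a grid and ask whether the frozen sites connect one side to the other. Nothing special happens to any individual site near the threshold, yet the global answer flips suddenly.

    import numpy as np
    from scipy import ndimage

    def spans(p, n=200, seed=1):
        # Freeze each site of an n x n grid independently with probability p,
        # then check whether one frozen cluster touches both the top and bottom rows.
        rng = np.random.default_rng(seed)
        frozen = rng.random((n, n)) < p
        labels, _ = ndimage.label(frozen)
        return bool((set(labels[0]) & set(labels[-1])) - {0})

    for p in [0.50, 0.55, 0.65, 0.70]:
        print(p, spans(p))   # flips from False to True near the critical density of about 0.59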

But why should anyone outside statistical physics care about criticality? 

The reason is that history is full of individual narratives that maybe should be interpreted in terms of critical phenomena. 

Did Rosa Parks start the civil rights movement? Or was the movement already running in the minds of those who had been promised equality and were instead handed discrimination? Was the collapse of Lehman Brothers an essential trigger for the Great Recession? Or was the financial system so critical that any disturbance could have done the trick?

As humans, we love individual narratives. We evolved to learn from stories and communicate almost exclusively in terms of them. But as Richard Feynman said repeatedly: The imagination of nature is often larger than that of man. So, maybe our obsession with individual narratives is nothing but a reflection of our limited imagination. Going forward we need to remember that systems often make individuals irrelevant. Just like none of your cells can claim to control your body, society also works in systemic ways.

So, the next time the house of cards collapses, remember to focus on why we were building a house of cards in the first place, instead of focusing on whether the last card was the queen of diamonds or a two of clubs. 

robert_provine's picture
Professor Emeritus, University of Maryland, Baltimore County; Author, Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond

A morphogenetic field is a region of an embryo that forms a discrete structure, such as a limb or heart. Morphogenetic fields became known through the experimental work of Ross G. Harrison, one of the most deserving scientists never to have won a Nobel Prize. The regions are described as fields instead of discrete cells because they can recover from the effects of partial destruction. For example, if half of a salamander's forelimb field is destroyed, it will still develop into a reasonable approximation of a complete limb, not a half-limb. If the limb field is transplanted to a novel region, such as the mid-flank of a host embryo, it will develop into an extra limb. These remarkable and still valid discoveries were widely reported in the scientific and popular media during the Golden Age of experimental embryology in the first half of the 20th century, but have been partially eclipsed by the emergence of more modern, reductionistic approaches to developmental problems.

The morphogenetic field offers important lessons about the nature of development and genetic determination. A morphogenetic field has the property of self-organization, forming the best possible whole from available cells. The field is a cellular ecosystem that will not work if the fates of its component cells are predetermined, leaving the cells without the requisite plasticity. The cellular community of a field is coordinated by a chemical gradient and, therefore, is not scalable, which is the reason why all embryos are small and about the same size, whether mouse or great blue whale. Self-organization of a morphogenetic field brings the benefit of error correction, a tremendous advantage to a complex, developing system, where a lot can and does go wrong. For example, if a cell in the field is missing, another will be reprogrammed to take its place, or if an errant cell wanders into the field, its developmental program will be overridden by its neighbors, in both cases forming the best possible whole.

The probabilistic, epigenetic processes of morphogenetic fields force a reconsideration of what it means to be "genetically determined" and illustrate why genes are better understood as recipes than blueprints; genes provide instructions for assembly, not a detailed plan for the final product. For psychologists and other social scientists whose developmental studies are based more on philosophical than biological foundations, morphogenetic fields provide a good starting point for learning how development works. Embryos provide excellent instruction about development for those knowing where to look and how to see. 

sheizaf_rafaeli's picture
Professor, Director, The Center for Internet Research, University of Haifa, Israel

Biological, human, and organizational realities are networked. Complex environments are networks. Computers are networked. Epidemics are networks. Business relations are networked. Thought and reasoning are in neural networks. Emotions are networked. Families are networks. Politics are networked. Culture and social relations are, too. Your network is your net worth.

Yet the general public does not yet "speak networks."

Network concepts are still new to many, and not widely enough spread. I have in mind structural traits like peer-to-peer and packet switching, process qualities such as assortativity, directionality and reciprocity, indicators such as in- and out-degree, density, centrality, betweenness and multiplexity, and ideas such as bridging vs. bonding, Simmelian cliques, network effects and strength of ties. These amount to a language and analytical approach whose time has come.
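
A few of these indicators in miniature, using the networkx library on a made-up five-person network (the names and ties are invented for illustration):

    import networkx as nx

    # A toy friendship network: nodes are people, edges are ties between them.
    G = nx.Graph()
    G.add_edges_from([
        ("Ana", "Ben"), ("Ana", "Cal"), ("Ben", "Cal"),   # a tightly bonded triad
        ("Cal", "Dee"), ("Dee", "Eli"),                   # a chain bridging outward
    ])

    print(dict(G.degree()))                  # degree: how many ties each person has
    print(nx.density(G))                     # density: share of possible ties that exist
    print(nx.betweenness_centrality(G))      # betweenness: who sits on paths between others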

Much of scientific thought in the last century, especially in the social and life sciences, has been organized around notions of central tendency and variance. These statistical lenses magnified and clarified much of the world beyond earlier, pre-positivist and less evidence-based approaches. However, these same terms miss and mask the network. It is now time to open minds to an understanding of the somewhat more complex truths of networked existence. We need to see more networks in public coverage of science, more in media reporting, more in writing and rhetoric, even more in the teaching of expression and composition. "Network speaking" beckons more post-linear language.

Some of the best minds of the early 21st century are working on developing a language that is not yet known, integrated or spoken outside their own small circle, or network. It is this language of metaphors and analytical lenses, which focuses on networks, that I propose be shared more widely now that we are beginning to see its universal value in describing, predicting, and even prescribing reality.

In an era of fascination with big data it is too easy to be dazzled by the entities ("vertices") and their counts and measures, at the expense of the links ("edges"). Network ideas bring the connections back to the fore. Whether these connections are hidden from view or hiding in plain sight, they are the essence. Statistics, and especially variance-based measures such as standard deviation and correlation analyses, are reductionist. Network lenses allow and even encourage a much-needed pulling back to see the broader picture. In all fields, we need more topology and metrics that recognize the mesh of connections beyond the traits of the components.

As with literacy, recent generations witnessed an enormous leap in numeracy. More people know numbers, are comfortable with calculations, and can see the relevance of arithmetic and even higher math to their daily lives. Easy access to calculators, followed by widespread access to computing devices, has accelerated the public's familiarity and comfort with numbers as a way of capturing reality, predicting it, and dealing with it.

We do not yet have the network equivalent of the pocket calculator. Let's make that the next improvement in public awareness of science. While a few decades ago it was clear the public needed to be taught about mean, median and mode, standard deviations and variance, percentages and significance, it is now the turn of network concepts to come to the fore. For us to understand the spread of truth and lies, political stances and viruses, wealth and social compassion, we need to internalize the mechanisms and measures of the networks along which such dynamics take place. The opposite of networking is not working.

read_montague's picture
Neuroscientist; Director, Human Neuroimaging Lab and Computational Psychiatry Unit, Virginia Tech Carilion Research Institute; Author, Why Choose This Book?: How We Make Decisions

Recursion resides at the core of all intelligence. Recursion requires the ability to reference an algorithm or procedure, and keep the reference distinct from the contents. And this capacity to refer is a centerpiece of the way that organisms form models of the world around them and even of themselves. Recursion is a profoundly computational idea and lies at the heart of Gödel’s incompleteness theorem and the philosophical consequences that flow from that work. Turing’s own work on computable numbers requires recursion at its center. It’s an open question how recursion is implemented biologically, but one could speculate that it has been discovered by evolution many times.
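
A minimal illustration (mine, not Montague's) of what it means for a procedure to reference itself while keeping the reference distinct from its contents:

    def count_down(n):
        # The name count_down is a reference to this very procedure; invoking that
        # reference inside its own body, on ever smaller contents, is recursion.
        if n == 0:
            return ["done"]
        return [n] + count_down(n - 1)

    print(count_down(3))   # [3, 2, 1, 'done']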

frank_tipler's picture
Professor of Mathematical Physics, Tulane University; Coauthor (with John Barrow), The Anthropic Cosmological Principle

In 1957, a Princeton physics graduate student named Hugh Everett showed that the consistency of quantum mechanics required the existence of an infinity of universes parallel to our universe. That is, there has to be a person identical to you reading this identical article right now in a universe identical to ours. Further, there have to be an infinite number of universes, and thus an infinite number of people identical to you in them.

Most physicists, at least most physicists who apply quantum mechanics to cosmology, accept Everett’s argument. So obvious is Everett’s proof for the existence of these parallel universes, that Steve Hawking once told me that he considered the existence of these parallel universes “trivially true.” Everett’s insight is the greatest expansion of reality since Copernicus showed us that our star was just one of many. Yet few people have even heard of the parallel universes, or thought about the philosophical and ethical implications of their existence. Kepler and Galileo emphasized that the Copernican Revolution implied that humans and their planet Earth are important in the cosmos, rather than being merely the “dump heap of the filth and dregs of the universe,” to use Galileo’s description of our standing in the Ptolemaic universe.

I’ll mention only two implications of the existence of parallel universes which should be of general interest: the implications for the free will debate, and the implications for answering the question of why there is evil in the world.

The free will question arises because the equations of physics are deterministic. Everything that you do today was determined by the initial state of all the universes at the beginning of time. But the equations of quantum mechanics say that although the future behavior of all the universes is determined exactly, it is also determined that in the various universes, the identical yous will make different choices at each instant, and thus the universes will differentiate over time. Say you are in an ice cream shop, trying to choose between vanilla and strawberry. What is determined is that in one world you will choose vanilla and in another you will choose strawberry. But before the two yous make the choice, you two are exactly identical. The laws of physics assert it makes no sense to say which one of you will choose vanilla and which strawberry. So before the choice is made, which universe you will be in after the choice is unknowable in the sense that it is meaningless to ask.

To me, this analysis shows that we indeed have free will, even though the evolution of the universe is totally deterministic. Even if you think my analysis has been too facile—entire books can and have been written on the free will problem—nevertheless, my simple analysis shows that these books are themselves too facile, because they never consider the implications of the existence of the parallel universes for the free will question.

Another philosophical problem with ethical implications is the Problem of Evil: Why is there evil in the universe we see? We can imagine a universe in which we experienced nothing bad, so why is this evil-free universe not the universe we actually see? The German philosopher Gottfried Leibniz argued that we are actually in the best of all possible worlds, but this seems unlikely. If Hitler had never taken power in Germany, there would have been no Holocaust. Is it plausible that a universe with Hitler is better than a universe without him? The medieval philosopher Abelard claimed that existence was a good in itself, so that in order to maximize the good in reality, all universes, both those with evil in them and those without evil, have to be actualized. Remarkably, quantum mechanics says that the maximization of good as Abelard suggested is in fact realized.

Is this the solution of the Problem of Evil? I do know that many wonder “why Hitler?” but no analysis considers the fact that—if quantum mechanics is correct—there is a universe out there in which he remained a house painter. No analysis of why evil exists can be considered reasonable unless it takes into account the existence of the parallel universes of quantum mechanics.

Everyone should know about the parallel universes of quantum mechanics!

emanuel_derman's picture
Professor, Financial Engineering, Columbia University; Author, Models.Behaving.Badly

Imagine you are very nearsighted, so that you can see only the region locally near you, but can observe nothing globally, far away. 

Now imagine you are situated at the top of a mountain, and you want to make your way down on foot to the lowest point in the valley. But you cannot see beyond your feet, so your algorithm is simply to keep heading downhill, wherever gravity takes you fastest. You do that, and eventually, halfway down the mountain, you end up at the bottom of a small oval ditch or basin, from which all paths lead up. As far as you can nearsightedly tell, you’ve reached the lowest point. But it’s a local minimum, not truly the bottom of the valley, and you’re not where you need to be.

What would be a better algorithm for someone with purely local vision? Simulated annealing, inspired by sword makers.

To make a metal sword, you have to first heat the metal until it’s hot and soft enough to shape or mold. The trouble is that when the metal subsequently cools and crystallizes, it doesn’t do so uniformly. As a result, different parts of it tend to crystallize in different orientations (each of them a small local low-energy basin), so that the entire body of the sword consists of a multitude of small crystal cells with defects between them. This makes the sword brittle. It would be much better if the metal were one giant crystal with no defects. That would be the true low-energy configuration. How to get there?

Sword makers learned how to anneal the metal, a process in which they first heat it to a high temperature and then cool it down very slowly. Throughout the slow cooling, the sword maker continually taps the metal, imparting enough energy to the individual cells so that they can jump up from their temporary basin into a higher energy state and then drop down to realign with their neighbors into a communally more stable, lower energy state. The tap paradoxically increases the energy of the cell, moving it up and out of the basin and farther away from the true valley, which is ostensibly bad; but in so doing the tap allows the cell to emerge from the basin and seek out the lower energy state. 

As the metal cools, as the tapping continues, more and more of the cells align, and because the metal is cooler, the tapping is less able to disturb cells from their newer and more stable positions.

In physics and mathematics one often has to find the lowest energy or minimum state of some complicated function of many variables. For very complicated functions, that can’t be done analytically via a formula; it requires an algorithmic search. An algorithm that blindly tried to head downwards could get stuck in a local minimum and never find the global minimum. 

Simulated annealing is a metaphorical kind of annealing, carried out in the algorithmic search for the minima of such complicated functions. When the descent in the simulated annealing algorithm takes you to some minimum, you sometimes take a chance and shake things up at random, in the hope that by shaking yourself out of the local minimum and temporarily moving higher (which is not where you ultimately want to be), you may then find your way to a lower and more stable global minimum.

As time passes, the algorithm decreases the probability of the shake-up, which corresponds to the cooling of the metal.
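
A compact sketch of the procedure just described, applied to a one-dimensional double-well landscape (the function, the proposal step, and the cooling schedule are all illustrative choices of mine, not a canonical recipe):

    import math, random

    def energy(x):
        # A double well: a shallow local minimum near x = +1, a deeper one near x = -1.
        return (x * x - 1) ** 2 + 0.3 * x

    def simulated_annealing(x, temperature=2.0, cooling=0.999, steps=20_000):
        for _ in range(steps):
            candidate = x + random.uniform(-0.5, 0.5)   # a random "tap"
            delta = energy(candidate) - energy(x)
            # Always accept downhill moves; accept uphill moves with a probability
            # that shrinks as the temperature falls, mimicking the slow cooling.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                x = candidate
            temperature *= cooling
        return x

    random.seed(0)
    print(simulated_annealing(x=1.0))   # typically ends near the deeper minimum around x = -1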

Simulated annealing employs judicious volatility in the hope that it will be beneficial. In an impossibly complex world, we should perhaps shun temporary stability and instead be willing to tolerate a bit of volatility in order to find a greater stability thereafter.

daniel_l_everett's picture
Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, How Language Began

The course followed by humans on the path to language was a progression through natural signs to human symbols. Signs and symbols are explained in reference to a theory of "semiotics," the study of signs, in the writings of Charles Sanders Peirce (1839-1914, usually known simply as C.S. Peirce). Peirce was perhaps the most brilliant American philosopher who ever lived. Bertrand Russell said of him, "Beyond doubt ... he was one of the most original minds of the later nineteenth century, and certainly the greatest American thinker ever."

He contributed to mathematics, to science, to the study of language, and to philosophy. He is the founder of multiple fields of study, including semiotics, the study of signs, and pragmatism, the only uniquely American school of philosophy, further developed by William James and others.

Peirce's theory of semiotics outlines a conceptual progression of signs from indexes, to icons, to human-created symbols. This progression tracks an increasing complexity in types of signs, and it parallels the evolutionary progression of Homo species' language abilities. A sign is any pairing of a form (such as a word, a smell, a sound, a street sign, or Morse code) with a meaning (what the sign refers to). An index, for Peirce the most primitive part of the progression, is a form that has a physical link to what it refers to. The footprint of a cat refers us to, and makes us think of, a cat. The smell of a grilling steak brings to mind the steak and the grill. Smoke indicates fire. An icon is something that is physically somehow like what it refers to. A sculpture represents the real-life object it is about. A portrait likewise is an icon of whatever it depicts. An onomatopoeic word like "bam" or "clang" bears an iconic sound resemblance to another sound.

It turns out that Peirce's theory also predicts the order of language evolution we discover in the fossil record. First, we discover indexes being used by all creatures, far predating the emergence of the genus Homo. Second, we discover the use of icons by Australopithecines in South Africa some 3 million years ago. And finally, through recent archaeology on their sea voyages, settlement, and burial patterns, we discover that 1.9 million years ago the first Homo, erectus, had and used symbols, almost certainly indicating that human language—the ability to communicate most anything that we can communicate today in principle—began far before our species appeared. What is most fascinating is that Peirce's semiotics is a theory of philosophy that inadvertently makes startlingly accurate predictions about the fossil record.

The influence of Peirce's semiotics throughout world philosophy, influencing figures such as Ferdinand de Saussure, among others, extends to industry, science, linguistics, anthropology, philosophy, and beyond. Peirce introduced the concept of "infinite semiosis" long before Chomsky raised the issue of recursion as central to language.

Perhaps only Peirce, in the history of inquiry into human language, has come up with a theory that at once predicts the order of language evolution from the earliest hominins to Homo sapiens, while enlightening researchers from across the intellectual spectrum about the nature of truth, of meaning, and the conduct of scientific inquiry.

Peirce himself was a cantankerous curmudgeon. For that reason he never enjoyed stable employment, living in part off the donations of friends, such as William James. But his semiotics has brought intellectual delight and employment to hundreds of academics since the late 19th century, when he first proposed his theory. His work on semiotics is worthy of being much more widely known as relevant to current debates. It is far more than a quaint relic of 19th century reflection.

jeremy_bernstein's picture
Professor Emeritus, Stevens Institute of Technology; Former Staff Writer, The New Yorker

Many people have heard of Hawking radiation, which is a form of radiation emitted by a black hole. Less familiar is Unruh radiation, named after Bill Unruh, who first described it. It also is emitted by black holes. Close to a black hole, the radiation is predominantly Unruh; further away, it is predominantly Hawking. Unruh radiation is observed by a detector when it is placed in a state of uniform acceleration, whereas if the same detector is at rest or in a state of uniform motion, no radiation is observed. The Unruh radiation in the case of uniform acceleration is like a black body with a temperature proportional to the acceleration. The relevance to black holes is that, close to the horizon, the geometry of a spherical black hole can be transformed so that it looks like that of a uniformly accelerated object. John Bell suggested that Unruh radiation might be observed in an electron storage ring.
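
The proportionality has a definite coefficient, T = ħa / (2πc k_B); a quick calculation (standard constants, accelerations chosen only for illustration) shows why the effect is so hard to detect:

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s
    k_B = 1.380649e-23       # Boltzmann constant, J/K

    def unruh_temperature(a):
        # Effective black-body temperature seen by a detector with proper acceleration a.
        return hbar * a / (2 * math.pi * c * k_B)

    print(unruh_temperature(9.8))    # everyday gravity: about 4e-20 K, utterly negligible
    print(unruh_temperature(1e20))   # an enormous acceleration is needed to reach ~0.4 K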

hans_halvorson's picture
Professor of Philosophy, Princeton University

The concept of matter ought to be more widely known.

You might wonder, did I misunderstand the question? Did I think it was asking, "What scientific term or concept is already widely known?" For if there is any scientific concept that is widely known, then it's the concept of matter, i.e. that stuff from which all things are made.

But no, I didn't misread the question. While every intelligent person has heard of the concept of "matter," few people know the scientific meaning of the word. What we have here is an example of a concept that was first used in ordinary life, but that has come to be explicated in the development of science.

So what does science tell us about matter?

As you probably know, there's an age-old debate about whether things are ultimately made out of particles, or whether things are excitations or waves in some continuous medium. (In fact, the philosopher Immanuel Kant found this debate so tedious that he declared it irresolvable in principle.) But many of us were told that the wave-particle debate was solved by quantum physics, which says that matter has both particle-like and wave-like aspects.

Then things got a bit weird when Niels Bohr said, "there is no quantum reality," and when Eugene Wigner said, "there is no reality without an observer." What the hell is going on here? Has matter disappeared from physics? Has physics really told us that mind-independent matter doesn't exist?

Thank goodness for the renegades of physics. In the 1960s, people like John Bell and David Bohm and Hugh Everett said: "We don't buy the story being told by Bohr, Wigner, and their ilk. In fact, we find this talk of 'observer-created reality' to be confusing and misleading." These physicists then went on to argue that there is a quantum reality, and that it exists whether or not anyone is there to see it.

But what is this quantum reality like? It's here that we have to stretch our imagination to the breaking point. It's here where we have to let science expand our horizon far beyond what our eyes and ears can teach us.

To a first approximation, what really exists, at the very bottom, is quantum wavefunctions. But we must be careful not to confuse an assertion of mathematical existence with an assertion of physical existence. In the first instance, a quantum wavefunction is a mathematical object—a function that takes numbers as inputs, and spits out numbers as outputs.
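
A deliberately simple-minded instance of such a mathematical object, a one-dimensional Gaussian wave packet, just to make the numbers-in, numbers-out point concrete (the particular function is my example, not Halvorson's):

    import cmath, math

    def psi(x, x0=0.0, sigma=1.0, k=2.0):
        # A Gaussian wave packet centered at x0 with width sigma and momentum ~ k:
        # feed in a position, get back a complex amplitude.
        norm = (1.0 / (math.pi * sigma ** 2)) ** 0.25
        return norm * cmath.exp(-((x - x0) ** 2) / (2 * sigma ** 2) + 1j * k * x)

    amplitude = psi(0.5)
    print(amplitude)              # a complex number
    print(abs(amplitude) ** 2)    # |psi|^2, read as a probability density at x = 0.5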

Thus, to speak accurately, we ought to say that a quantum wavefunction represents matter, not that it is matter. But how does it accomplish that? In other words, what are the things that exist, and what properties are being attributed to them? It's at this point that things become a bit unclear, or a bit scholastic. There are so many questions we could ask about what it means to say that wavefunctions exist. But what's the use—because quantum mechanics isn't really true.

We now know that quantum mechanics is not literally true—at least not if Einstein's relativity theory is true. In the middle of the 20th century, physicists saw that if you combine relativity with quantum mechanics, then wavefunctions cannot be localized—the result is that, strictly speaking, there aren't any localized material objects. But what there are, they said, are quantum fields—these nebulous quantum entities that spread themselves throughout all of space.

But don't get too excited about quantum fields, because they have their own problems. It was already suspected in the 1960s that quantum fields aren't quantum reality in itself—rather, they're a sort of observer-dependent description of that reality in the same way that saying, "that car is moving at 45 miles per hour" is an observer-dependent description of reality. In fact, it was proven by the German physicist Hans-Jürgen Borchers that many distinct and incompatible quantum field descriptions correspond to any one situation. A similar result has recently been demonstrated by the Michigan philosopher David Baker. The upshot is that you've got to take quantum fields with a grain of salt—they're a human contrivance that gives just one perspective on reality itself.

In summary, particles, in the traditional sense of the word, do not exist. Nor do quantum wavefunctions really exist. Nor do fields exist, neither in the traditional sense of the word, nor in the quantum-theoretic sense of the word.

These facts can seem depressing. It seems that matter in itself is always hiding behind the veil of our descriptions of it.

But do not despair. Note what has been happening here. The description of matter as particles was helpful, but not exactly correct. The description of matter as a wavefunction is even more accurate, but it has limitations. Our best current description of matter is in terms of quantum fields, but the quantum fields are not yet the thing in itself.

At each stage, our description of matter has become more nuanced, more widely applicable, and more useful. Will this process come to an end? Will we ever arrive at the one true description of the basic constituents of the universe?

Who's to say? But as long as each generation outdoes the previous one, what more could we want?

dan_sperber's picture
Social and Cognitive Scientist; CEU Budapest and CNRS Paris; Co-author (with Deirdre Wilson), Meaning and Relevance; and (with Hugo Mercier), The Enigma of Reason

What were Darwin’s most significant contributions? Ernst Mayr answered: (1) producing massive evidence of evolution, (2) explaining it in terms of natural selection, and (3) thinking about species as populations.

“Population thinking”? Philosophers are still debating what this means. For scientists, however, to think of living things in terms of populations rather than types, each with its own essence, is a clear and radical departure both from earlier scholarly traditions and from folk biology.

Species evolve, early features may disappear, novel features may appear. From a populationist point of view, a species is a population of organisms that share features not because of a common “nature” but because they are related by descent. A species so understood is a temporally continuous, spatially scattered entity that changes over time.

Population thinking readily extends beyond biology to the study of cultural evolution (as argued by Peter Richerson, Robert Boyd, and Peter Godfrey-Smith). Cultural phenomena can be thought of as populations, the members of which share features because they influence one another even though they do not beget one another the way organisms do and are not exactly copies of one another. Here are three examples.

What is a word, the word “love” for instance? It is standardly described as an elementary unit of language that combines sound and meaning. Yes, but a word so understood is an abstraction without causal powers. Only concrete uses of the word “love” have causes and effects: An utterance of the word has, among its causes, mental processes in a speaker and, among its effects, mental processes in a listener (not to mention hormonal and other bio-chemical processes). This speech event is causally linked, on another time scale, to earlier similar events from which the speaker and listener acquired their ability to produce and interpret “love” the way they do. The word endures and changes in a linguistic community through all these episodes of acquisition and use.

So, the word “love” can be studied as a population of causally related events taking place inside people and in their shared environment, a population of billions and billions of such events, each occurring in a different context, each conveying a meaning appropriate at that instant, and all nevertheless causally related. Scholarly or lay discussions about the word “love” and its meaning are themselves a population of mental and public meta-linguistic events evolving on the margins of the “love” population. All words can similarly be thought of not, or not just, as abstract units of language, but as populations of mental and public events.

What is a dance? Take tango. There are passionate arguments about the true character of tango. From a populationist perspective, tango should be thought of as a population of events of producing tango music, listening and dancing to it, watching others dance, commenting, and discussing the music and the dance, a population that originated in the 1880s in Argentina and spread around the world. The real question is not what is a true tango, but how attributions or denials of authenticity evolve on the margins of this population of acoustic, bodily, mental, and social “tango” events. All culturally identified dances can be thought of as populations in the same way.

What is a law? Take the United States Constitution. It is commonly thought of as a text or, more accurately, since it has been repeatedly amended, as a text with several successive versions. Each version has millions of paper and now electronic copies. Each article and amendment has been interpreted on countless occasions in a variety of ways. Many of these interpretations have been quoted again and again, and reinterpreted, and their reinterpretations reinterpreted in turn. Articles, amendments, and interpretations have been invoked in a variety of situations. In other words, there is a population the members of which are all these objects and events in the environment plus all the relevant mental representations and processes in the brains of the people who have produced, interpreted, invoked or otherwise considered versions and bits of the Constitution. All the historical effects of the Constitution have been produced by members of this population of material things and not by the Constitution considered in the abstract. The Constitution, then, can usefully be thought of as a population, and so can all laws.

Population thinking is itself a population of mental and public things. Philosophers’ discussions of what population thinking really is are members of this population. So is the text you just read, and so is your reading of it.

roger_schank's picture
CEO, Socratic Arts Inc.; John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University; Author, Make School Meaningful-And Fun!

We do case-based reasoning all the time, without thinking that that is what we are doing. Case-based reasoning is essential to personal growth and learning. While we hear people proclaim that mathematics teaches one to think, or that knowing logic will help one reason more carefully, humans do a different kind of reasoning quite naturally.

When we go to a restaurant, we think about what we ordered the last time we were there and whether we want to order the same thing again. When we go out on a date, we think about how that person reminds us of someone we went out with before, and we think about how that turned out. When we read a book we are reminded of other books with similar themes or similar situations and we tend to predict outcomes on that basis. When we hear someone tell us a story about their own lives, we are immediately reminded of something similar that has happened to us.

Reminding, based on the examination of an internal library of cases, is what enables learning and is the basis of intelligence. In order to get reminded of relevant prior cases, we create those cases subconsciously by thinking about them and telling someone about them. Then, again subconsciously, we label the previously experienced cases in some way. The classic example of this is the steak and the haircut, a story about a colleague of mine who responded to my complaint that my wife couldn’t cook steak as rare as I wanted it by saying that twenty years earlier, in London, he couldn’t get his hair cut as short as he wanted it. While this may sound like a brain-damaged response, these two stories are identical at the right level of abstraction. They are both about asking someone to do something who, while being capable of doing it, has refused to do it because they thought the request was too extreme. My friend had been wondering about his haircut experience for twenty years. My story reminded him of his own experience and helped him explain to himself what had happened.

We are all case-based reasoners, but no one ever teaches us how to do this (except possibly in medical school, business school, and law school). No one teaches you how to label cases or how to retrieve cases from your own memory. Yet our entire ability to reason depends upon this capability. We need to see something as an instance of something we have seen before in order to make a judgment about it and in order to learn from it.
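A toy sketch of what explicit labeling and retrieval might look like (the cases, labels, and scoring below are invented purely for illustration and are not drawn from any particular system):

```python
# Each remembered case is stored with abstract labels describing what it was
# "about" at the level that matters for reminding.
case_library = [
    {"story": "Wife wouldn't cook the steak as rare as I wanted",
     "labels": {"request refused", "provider thought request too extreme"}},
    {"story": "London barber wouldn't cut my hair as short as I wanted",
     "labels": {"request refused", "provider thought request too extreme"}},
    {"story": "Restaurant was out of the dish I ordered last time",
     "labels": {"expectation violated"}},
]

def remind(new_labels):
    """Return stored cases ranked by how many abstract labels they share
    with the new experience -- a crude stand-in for being 'reminded'."""
    scored = [(len(case["labels"] & new_labels), case["story"])
              for case in case_library]
    return [story for score, story in sorted(scored, reverse=True) if score > 0]

# A new experience labeled at the right level of abstraction retrieves
# the haircut story alongside the steak story.
print(remind({"request refused", "provider thought request too extreme"}))
```

The point of the sketch is only that reminding depends on indexing experiences at the right level of abstraction, not on surface similarity.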

We do case-based reasoning naturally, and without conscious thought, so we tend to ignore its importance in thinking and learning. Whenever you participate in a conversation with someone about a subject of mutual interest you are having a kind of case-based reasoning party: exchanging stories and constantly being reminded of new stories to tell. Both participants come out slightly changed from the experience. That experience itself is, of course, a new case to be remembered and to be reasoned from in the future. 

diana_deutsch's picture
Professor of Psychology, University of California, San Diego; Author, Musical Illusions and Phantom Words

The concept of an illusory conjunction is not sufficiently explored in studies of perception and memory, and it is rarely discussed in philosophy. Yet this concept is of considerable importance to our understanding of perceptual and cognitive function. For example, when we hear a musical tone, we attribute to it a pitch, a loudness, and a timbre, and we hear the tone as coming from a particular spatial location; so each perceived tone can be described as a bundle of attribute values. It is generally assumed that this bundle reflects the characteristics and location of the sound that is emitted. However, when multiple sequences of tones arise simultaneously from different regions of space, these bundles of attribute values sometimes fragment and recombine incorrectly, so that illusory conjunctions result.

This gives rise to several illusions of sound perception, such as the octave illusion and the scale illusion, in which the melodies we "hear" are quite different from those that are presented. The effect can even be found in live musical performances—for example in the final movement of Tchaikovsky’s 6th Symphony. Illusory conjunctions can also occur in vision. Under certain circumstances when people are shown several colored letters and asked to report what they saw, they sometimes combine the colors and shapes of the letters incorrectly—for example, when presented with a blue cross and a red circle, viewers sometimes report seeing a red cross and a blue circle instead.

Hallucinations—both auditory and visual—frequently involve illusory conjunctions. For example, in musical hallucinations many aspects of a piece of music may be heard accurately in detail, while some aspect is altered or appears corrupted. A familiar piece of music may be "heard," for instance, as played by a different or even unknown musical instrument, as transposed to a different pitch range, or as played much faster or slower than it should be. In vision, hallucinated faces may be "seen" to have inappropriate components—in one report a woman’s face appeared with a long white Santa Claus beard attached.

Presumably, when we see and hear in the normal way, we process the information in modules or circuits that are each specific to some attribute, and we combine the outputs of these circuits so as to obtain the final integrated percept. Usually this process leads to veridical perception, but under certain circumstances, such as in some orchestral music or during hallucinations, the process breaks down and our percepts are influenced by illusory conjunctions. An understanding of how this happens could shed valuable light on perceptual and cognitive processing in general.

donald_d_hoffman's picture
Cognitive Scientist, UC, Irvine; Author, The Case Against Reality

The most famous case study in science, prior to Freud, was published in 1728 in the Philosophical Transactions of the Royal Society by the English surgeon William Cheselden, who attended Newton in his final illness. It bore a snappy title: “An Account of Some Observations Made by a Young Gentleman, Who Was Born Blind, or Lost His Sight so Early, That He Had no Remembrance of Ever Having Seen, and Was Couch’d between 13 and 14 Years of Age.” 

The poor boy “was couch’d”—his cataracts removed—without anesthesia. Cheselden reported what the boy then saw:

When he first saw, he was so far from making any Judgment about Distances, that he thought all Objects whatever touch’d his Eyes, (as he express’d it) as what he felt, did his Skin . . .  We thought he soon knew what Pictures represented, which were shew’d to him, but we found afterwards we were mistaken; for about two Months after he was couch’d, he discovered at once, they represented solid Bodies.

The boy saw, at first, patterns and colors pressed flat upon his eyes. Only weeks later did he learn to perform the magic that we daily take for granted: to inflate a flat pattern at the eye into a three-dimensional world. 

The image at the eye has but two dimensions. Our visual world, vividly extending in three dimensions, is our holographic construction. We can catch ourselves in the act of holography each time we view a drawing of a Necker cube—a few lines on paper which we see as a cube, enclosing a volume, in three dimensions. That cubic volume in visual space is, of course, virtual. No one tries to use it for storage. But most of us—both lay and vision-science expert—believe that volumes in visual space usually depict, with high fidelity, the real volumes of physical space, volumes which can properly be used for storage.

But physics has a surprise. How much information can you store in a volume of physical space? We learn from the pioneering work of physicists such as Gerard 't Hooft, Leonard Susskind, Jacob Bekenstein, and Stephen Hawking, that the answer depends not on volume but on area. For instance, the amount of information you can store in a sphere of physical space depends only on the area of the sphere. Physical space, like visual space, is holographic.

Consider one implication. Take a sphere that is, say, one meter across. Pack it with six identical spheres that just fit inside. Those six spheres, taken together, have about half the volume of the big sphere, but about 3 percent more area. This means that you can cram more information into six smaller spheres than into one larger sphere that has twice their volume. Now, repeat this procedure with each of the smaller spheres, packing it with six smaller spheres that just fit. And then do this, recursively, a few hundred times. The many tiny spheres that result have an infinitesimal volume, but can hold far more information than the original sphere. 
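A quick numerical check of those figures, assuming the standard arrangement in which six equal spheres that just fit inside sit along the three axes, so each has radius (√2 − 1) times the radius of the big sphere:

```python
import math

R = 0.5                      # radius of a sphere one meter across
r = (math.sqrt(2) - 1) * R   # radius of each of six equal spheres packed inside

volume_ratio = 6 * (r / R) ** 3   # total volume of the six, relative to the big sphere
area_ratio = 6 * (r / R) ** 2     # total surface area of the six, relative to the big sphere

print(f"volume of six small spheres / big sphere : {volume_ratio:.3f}")  # ~0.43
print(f"area   of six small spheres / big sphere : {area_ratio:.3f}")    # ~1.03

# Iterating the construction a few hundred times: total volume shrinks toward
# zero while total area (and hence holographic information capacity) explodes.
steps = 300
print(f"after {steps} steps, volume ratio ~ {volume_ratio ** steps:.1e}, "
      f"area ratio ~ {area_ratio ** steps:.1e}")
```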

This shatters our intuitions about space, and its contents. It is natural to assume that spacetime is a fundamental reality. But the holographic principle, and other recent discoveries in physics, tell us that spacetime is doomed—along with the objects it contains and their appearance of physical causality—and must be replaced by something more fundamental if we are to succeed, for instance, in the quest for a theory of quantum gravity. 

If spacetime is not fundamental, then our perception of visual space, and of objects in that space, is not a high-fidelity reconstruction of fundamental reality. What, then, is it? From the theory of evolution we can conclude that our sensory systems have been shaped by natural selection to inform us about the fitness contingencies relevant to us in our niche. We have assumed that this meant that our senses inform us of fitness-relevant aspects of fundamental reality. Apparently, they do not. They simply inform us about fitness, not fundamental reality.

In this case, the holographic principle points to a different conception of our perception of visual space. It is not a reconstruction of an objective, and fundamental, physical space. It is simply a communication channel for messages about fitness, and should be understood in terms of concepts that are standard for any communication channel, concepts such as data compression and error correction. If our visual space is simply the format of an error-correcting code for fitness, this would explain its holographic nature. Error correcting codes introduce redundancy to permit correction of errors. If I wish to send you a bit that is either 0 or 1, but there is a chance that noise might flip a 0 to a 1 or vice versa, then we might agree that I will send you that bit three times, rather than just once. This is a simple Hamming code. If you receive a 000 or a 111, you will interpret this, respectively, as 0 and 1. If you receive a 110 or 001, you will interpret this, respectively, as 1 and 0, correcting an error in transmission. In this case we use a redundant, three-dimensional format to convey a lower-dimensional signal. The holographic redundancy in our perception of visual space might be a clue that this space, likewise, is simply an error-correcting code—for fitness. 
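A minimal sketch of the three-fold repetition scheme just described, with majority-vote decoding (illustrative only):

```python
def encode(bit):
    """Send each bit three times."""
    return [bit, bit, bit]

def decode(received):
    """Majority vote: 000/001/010/100 -> 0, 111/110/101/011 -> 1."""
    return 1 if sum(received) >= 2 else 0

# A single flipped bit in transmission is corrected.
sent = encode(0)            # [0, 0, 0]
corrupted = [0, 1, 0]       # noise flips the middle bit
print(decode(corrupted))    # 0 -- the error is corrected

# Two flips defeat the code, which is why redundancy only reduces,
# never eliminates, the chance of a decoding error.
print(decode([1, 1, 0]))    # 1 -- decoded incorrectly
```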

What about physical space? Research by Fernando Pastawski, Beni Yoshida, Daniel Harlow, John Preskill, and others indicates that spacetime itself is an error-correcting code—a holographic, quantum, secret-sharing code. Why this should be so is, for now, unclear, and tantalizing.

But it is clear that the holographic principle has the power to shatter false convictions, stir dogmatic slumbers, and push on where intuitions fear to tread. That is why we do science. 

michael_shermer's picture
Publisher, Skeptic magazine; Monthly Columnist, Scientific American; Presidential Fellow, Chapman University; Author, Heavens on Earth

One of the most underappreciated effects in all of cognitive science is the degree to which negative events, emotions, and thoughts trump positive ones by a wide margin. This bias was discovered and documented by the psychologists Paul Rozin and Edward Royzman in 2001, who showed that across almost all domains of life we seem almost preternaturally pessimistic:

• Negative stimuli command more attention than positive stimuli. In rats, for example, negative tastes elicit stronger responses than positive tastes. And in taste aversion experiments a single exposure to a noxious food or drink can cause lasting avoidance of that item, but there is no corresponding parallel with good tasting food or drinks.

• Pain feels worse than no pain feels good. That is, as the philosopher Arthur Schopenhauer put it, “we feel pain, but not painlessness.” There are erogenous zones, Rozin and Royzman point out, but no corresponding torturogenous zones.

• Picking out an angry face in a crowd is easier and faster to do than picking out a happy face.

• Negative events lead us to seek causes more readily than do positive events. Wars, for example, generate endless analyses in books and articles, whereas peace literature is paltry by comparison.

• We have more words to describe the qualities of physical pain (deep, intense, dull, sharp, aching, burning, cutting, pinching, piercing, tearing, twitching, shooting, stabbing, thrusting, throbbing, penetrating, lingering, radiating, etc.) than we have to describe physical pleasure (intense, delicious, exquisite, breathtaking, sumptuous, sweet, etc.).

• There are more cognitive categories for and descriptive terms of negative emotions than positive. As Leo Tolstoy famously observed in 1875: “Happy families are all alike; every unhappy family is unhappy in its own way.”

• There are more ways to fail than there are to succeed. The paths to perfection are few and difficult; the paths away from it are many.

• Empathy is more readily triggered by negative stimuli than positive: People identify and sympathize with others who are suffering or in pain more readily than with others who are happier or better off than they are.

• Evil contaminates good more than good purifies evil. As the old Russian proverb says, “A spoonful of tar can spoil a barrel of honey, but a spoonful of honey does nothing for a barrel of tar.” In India, members of the higher castes may be considered contaminated by eating food prepared by members of the lower castes, but those in the lower castes do not receive an equivalent rise upward in purity status by eating food prepared by their upper caste counterparts.

• The notorious “one drop of blood” rule of racial classification has its origin in the Code Noir, or “Negro Code” of 1685, meant to guarantee the purity of the White race by screening out the tainted blood, whereas, note Rozin and Royzman, “there exists no historical evidence for the positive equivalent of a ‘one-drop’ ordinance—that is, a statute whereby one’s membership in a racially privileged class would be assured by one’s being in possession of ‘one drop’ of the racially superior blood.”

• In religious traditions, possession by demons happens quickly compared to the exorcism of demons, which typically involves long and complex rituals; by contrast in the positive direction, becoming a saint requires a life devoted to holy acts, which can be erased overnight by a single immoral act. In the secular world, decades of devoted work for public causes can be erased in an instant with an extra-marital affair, financial scandal, or criminal act.

Why is negativity stronger than positivity? Evolution. In the environment of our evolutionary ancestry there was an asymmetry of payoffs in which the fitness cost of overreacting to a threat was less than the fitness cost of underreacting, so we err on the side of overreaction to negative events. The world was more dangerous in our evolutionary past, so it paid to be risk-averse and highly sensitive to threats, and if things were good, then gambling to improve them a little more was not worth the risk of things taking a turn for the worse.

adam_alter's picture
Psychologist; Assistant Professor of Marketing, Stern School of Business, NYU; Author, Irresistible

In 1832, a Prussian military analyst named Carl von Clausewitz explained that “three quarters of the factors on which action in war is based are wrapped in a fog of . . . uncertainty.” The best military commanders seemed to see through this “fog of war,” predicting how their opponents would behave on the basis of limited information. Sometimes, though, even the wisest generals made mistakes, divining a signal through the fog when no such signal existed. Often, their mistake was endorsing the law of small numbers—too readily concluding that the patterns they saw in a small sample of information would also hold for a much larger sample.

Both the Allies and Axis powers fell prey to the law of small numbers during World War II. In June 1944, Germany flew several raids on London. War experts plotted the position of each bomb as it fell, and noticed one cluster near Regent’s Park, and another along the banks of the Thames. This clustering concerned them, because it implied that the German military had designed a new bomb that was more accurate than any existing bomb. In fact, the Luftwaffe was dropping bombs randomly, aiming generally at the heart of London but not at any particular location over others. What the experts had seen were clusters that occur naturally through random processes—misleading noise masquerading as a useful signal.

That same month, German commanders made a similar mistake. Anticipating the raid later known as D-Day, they assumed the Allies would attack—but they weren’t sure precisely when. Combing old military records, a weather expert named Karl Sonntag noticed that the Allies had never launched a major attack when there was even a small chance of bad weather. Late May and much of June were forecast to be cloudy and rainy, which “acted like a tranquilizer all along the chain of German command,” according to Irish journalist Cornelius Ryan. “The various headquarters were quite confident that there would be no attack in the immediate future. . . . In each case conditions had varied, but meteorologists had noted that the Allies had never attempted a landing unless the prospects of favorable weather were almost certain.” The German command was mistaken, and on Tuesday, June 6, the Allied forces launched a devastating attack amidst strong winds and rain.

The British and German forces erred because they had taken a small sample of data too seriously: The British forces had mistaken the natural clustering that comes from relatively small samples of random data for a useful signal, while the German forces had mistaken an illusory pattern from a limited set of data for evidence of an ongoing, stable military policy. To illustrate their error, imagine a fair coin tossed three times. You’ll have a one-in-four chance of turning up a string of three heads or tails, which, if you make too much of that small sample, might lead you to conclude that the coin is biased to reveal one particular outcome all or almost all of the time. If you continue to toss the fair coin, say, a thousand times, you’re far more likely to turn up a distribution that approaches five hundred heads and five hundred tails. As the sample grows, your chance of turning up an unbroken string shrinks rapidly (to roughly one-in-sixteen after five tosses; one-in-five-hundred after ten tosses; and one-in-five-hundred-thousand after twenty tosses). A string is far better evidence of bias after twenty tosses than it is after three tosses—but if you succumb to the law of small numbers, you might draw sweeping conclusions from even tiny samples of data, just as the British and Germans did about their opponents’ tactics in World War II.
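The arithmetic behind those shrinking chances is simply that an unbroken string of n heads or n tails has probability 2/2^n:

```python
from fractions import Fraction

def prob_unbroken_string(n):
    """Probability that n tosses of a fair coin are all heads or all tails."""
    return Fraction(2, 2 ** n)

for n in (3, 5, 10, 20):
    p = prob_unbroken_string(n)  # automatically reduced, e.g. 2/8 -> 1/4
    print(f"{n:2d} tosses: probability {p}  (1 in {p.denominator // p.numerator:,})")
```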

Of course, the law of small numbers applies to more than military tactics. It explains the rise of stereotypes (concluding that all people with a particular trait behave the same way); the dangers of relying on a single interview when deciding among job or college applicants (concluding that interview performance is a reliable guide to job or college performance at large); and the tendency to see short-term patterns in financial stock charts when in fact short-term stock movements almost never follow predictable patterns. The solution is to pay attention not just to the pattern of data, but also to how much data you have. Small samples aren’t just limited in value; they can be counterproductive because the stories they tell are often misleading.

steven_pinker's picture
Johnstone Family Professor, Department of Psychology; Harvard University; Author, Rationality

The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is dissipated as heat flows from the warmer to the cooler body. Once it was appreciated that heat is not an invisible fluid but the motion of molecules, a more general, statistical version of the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system: Of all these states, the ones that we find useful make up a tiny sliver of the possibilities, while the disorderly or useless states make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do.
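A toy illustration of that statistical reading, with coins standing in for a system's parts (the choice of 100 coins, and of "all heads" as the ordered configuration, is just an example): the orderly states are a vanishing sliver of all microstates, so a random jiggle almost always lands in the bland majority.

```python
from math import comb

N = 100  # number of coins (a stand-in for a system's microscopic parts)
total_states = 2 ** N

# "Ordered" here means all heads; "near fifty-fifty" means 45-55 heads.
all_heads = 1
near_half = sum(comb(N, k) for k in range(45, 56))

print(f"fraction of microstates that are all heads : {all_heads / total_states:.1e}")
print(f"fraction within 45-55 heads                : {near_half / total_states:.3f}")
# A random perturbation is overwhelmingly likely to land in the bland majority,
# not in the rare configurations we would recognize as ordered.
```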

The Second Law of Thermodynamics is acknowledged in everyday life, in sayings such as “Ashes to ashes,” “Things fall apart,” “Rust never sleeps,” “Shit happens,” “You can’t unscramble an egg,” “What can go wrong will go wrong,” and (from the Texas lawmaker Sam Rayburn), “Any jackass can kick down a barn, but it takes a carpenter to build one.”

Scientists appreciate that the Second Law is far more than an explanation for everyday nuisances; it is a foundation of our understanding of the universe and our place in it. In 1928 the physicist Arthur Eddington wrote:

The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations—then so much the worse for Maxwell's equations. If it is found to be contradicted by observation—well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.

In his famous 1959 lecture “The Two Cultures and the Scientific Revolution,” the scientist and novelist C. P. Snow commented on the disdain for science among educated Britons in his day:

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare's?

And the evolutionary psychologists John Tooby, Leda Cosmides, and Clark Barrett entitled a recent paper on the foundations of the science of mind “The Second Law of Thermodynamics is the First Law of Psychology.”

Why the awe for the Second Law? The Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order. An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly.

To start with, the Second Law implies that misfortune may be no one’s fault. The biggest breakthrough of the scientific revolution was to nullify the intuition that the universe is saturated with purpose: that everything happens for a reason. In this primitive understanding, when bad things happen—accidents, disease, famine—someone or something must have wanted them to happen. This in turn impels people to find a defendant, demon, scapegoat, or witch to punish. Galileo and Newton replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future. The Second Law deepens that discovery: Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail.

Poverty, too, needs no explanation. In a world governed by entropy and evolution, it is the default state of humankind. Matter does not just arrange itself into shelter or clothing, and living things do everything they can not to become our food. What needs to be explained is wealth. Yet most discussions of poverty consist of arguments about whom to blame for it.

More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best. 

elizabeth_wrigley_field's picture
Assistant Professor, Department of Sociology, University of Minnesota-Twin Cities; Faculty Member, Minnesota Population Center

Here are three puzzles.

• American fertility fluctuated dramatically in the decades surrounding the Second World War. Parents created the smallest families during the Great Depression, and the largest families during the postwar Baby Boom. Yet children born during the Great Depression came from larger families than those born during the Baby Boom. How can this be?

• About half of the prisoners released in any given year in the United States will end up back in prison within five years. Yet the proportion of prisoners ever released who will ever end up back in prison, over their whole lifetime, is just one third. How can this be?

• People whose cancers are caught early by random screening often live longer than those whose cancers are detected later, after they are symptomatic. Yet those same random screenings might not save any lives. How can this be?

And here is a twist: these are all the same puzzle.

The solution is adopting the right perspective. Consider the family puzzle. One side is about what parents do; the other side is about what kids experience. If families all had the same number of kids, these perspectives would coincide: The context parents create is the context kids live in. But when families aren’t all the same size, it matters whose perspective you take.

Imagine trying to figure out the average family size in a particular neighborhood. You could ask the parents how many kids they have. Big families and small families will count equally. Or you could ask the children how many siblings they have. A family with five kids will show up in the data five times, and childless families won’t show up at all. The question is the same: How big is your family? But when you ask kids instead of parents, the answers are weighted by the size of the family. This isn’t a data error so much as a trick of reality: The average kid actually has a bigger family than the average parent does. And (as the great demographer Sam Preston has pointed out), during the Great Depression, when families were either very small or very large, this effect was magnified—so the average child came from a very large family even though the average adult produced a small family.
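A sketch of the two averages for a made-up neighborhood (the family sizes below are invented purely for illustration):

```python
# Number of children in each family of a hypothetical neighborhood.
family_sizes = [0, 1, 1, 2, 2, 6, 6, 6]

# Parents' perspective: every family counts once.
parent_view = sum(family_sizes) / len(family_sizes)

# Children's perspective: each family is counted once per child living in it,
# so a family of six shows up six times and a childless family not at all.
children = [size for size in family_sizes for _ in range(size)]
child_view = sum(children) / len(children)

print(f"average family size asked of parents : {parent_view:.2f}")   # 3.00
print(f"average family size asked of children: {child_view:.2f}")    # 4.92
```

The children's answer is the size-biased average (the sum of squared family sizes divided by the total number of children), which is always at least as large as the parents' answer.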

The recidivism puzzle is the family puzzle on a slant. When we look at released prisoners at a moment in time, we see the ones who leave prison most often—which are also the ones who return most often. We see, as William Rhodes and his colleagues recently pointed out, the repeat offenders. Meanwhile, the population that ever leaves prison has 2-to-1 odds of never going back.

Snapshots bias samples: When some people experience something—like a prison release—more often than others, looking at a random moment in time guarantees a non-random assortment of people.

And the cancer screenings? Screenings reveal cancer at an intermediate stage—when it is advanced enough to be detectable, but not so advanced that the patient would have shown up for testing without being screened. And this intermediate, detectable stage generally lasts longer for cancers that spread slowly. The more time the cancer spends in the detectable stage, the more likely it is to be detected. So the screenings disproportionately find the slower-growing, less-lethal cancers, whether or not early detection does anything to diminish their lethality. Assigning screenings randomly to people necessarily assigns screenings selectively to tumor types.
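A small simulation of that selection effect, with invented numbers: two tumor types arise equally often but differ in how long they stay in the detectable-but-symptomless window, and a single screening catches a tumor only if it happens to fall inside that window.

```python
import random

random.seed(0)

# Hypothetical tumor types: equal numbers arise, but they differ in how long
# they remain detectable before causing symptoms (in arbitrary time units).
SLOW_WINDOW, FAST_WINDOW = 10.0, 1.0
HORIZON = 100.0  # period over which tumors arise, uniformly at random

def detected_by_one_screen(window):
    """A tumor is caught by a single screening at time HORIZON/2 only if the
    screen falls inside its detectable window."""
    onset = random.uniform(0.0, HORIZON)
    screen_time = HORIZON / 2
    return onset <= screen_time <= onset + window

trials = 100_000
slow_caught = sum(detected_by_one_screen(SLOW_WINDOW) for _ in range(trials))
fast_caught = sum(detected_by_one_screen(FAST_WINDOW) for _ in range(trials))

print(f"slow tumors caught: {slow_caught}")   # roughly 10x the fast count
print(f"fast tumors caught: {fast_caught}")
# Detection probability scales with the detectable window, so a random screen
# over-samples the slow-growing tumors even if it changes no one's outcome.
```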

The twist in these puzzles is “length-biased sampling”: it’s when we see clusters in proportion to their size. Length-biased sampling reveals how lifespans—of people, of post-prison careers, of diseases—bundle time the way families bundle children.

All this may seem like a methodological point, and indeed, researchers go awry when we ask about one level but unwittingly answer about another. But length-biased sampling also explains how our social positions can give us very different experiences of the world—as when, if a small group of men each harasses many women, few men know a harasser, but many women are harassed.

Most fundamentally, length-biased sampling is the deep structure of nested categories. It’s not just that the categories can play by different rules, but that they must.

Consider again those differently sized families, now stretching out over generations. If we each had the same number of children as our parents did, small families would beget a small number of new small families—and large families would beget larger and larger numbers of families with many children of their own. With each passing generation, the larger a family is, the more common families of its size would become. The mushrooming of families with many kids would sprout into wild, unchecked population growth.

This implies that, as Preston’s analysis of family sizes showed us, population stability requires family instability: If the population is to stay roughly the same size, most children must grow up to have fewer of their own kids than their parents did, each generation rejecting tradition anew. And indeed, most of us do. Adages about the rebelliousness of youth may have their roots in culture or in developmental psychology, yes, but their truth is also demographic: Between a whole country and the families it comprises, stability can occur at one level or the other, but never both.

Categories nestle inside one another, tumor inside person inside family inside nation. They nestle not as Russian dolls, a regress of replicas, but rather layered like rock and soil, each layer composing the world differently. Whether we see equality or divergence, stasis or change, depends in part on the level at which we look. Length-biased sampling is the logic that links the levels into solid ground, and the tunnel that lets us walk between them.

alison_gopnik's picture
Psychologist, UC, Berkeley; Author, The Gardener and the Carpenter

Imagine that an Alpha Centauran scientist came to Earth 150,000 years ago. She might note, in passing, that the newly evolved Homo sapiens were just a little better at tool use, cooperation, and communication than their primate relatives. But, as a well-trained evolutionary biologist, she would be far more impressed by their remarkable and unique “life history.”

“Life history” is the term biologists use to describe how organisms change over time—how long an animal lives, how long a childhood it has, how it nurtures its young, how it grows old. Human life history is weird. We have a much longer childhood than any other primate—twice as long as chimps, and that long childhood is related to our exceptional learning abilities. Fossil teeth suggest that this long childhood evolved in tandem with our big brains—we even had a longer childhood than Neanderthals. We also rapidly developed special adaptations to care for those helpless children—“pair-bonding” and “alloparents.” Fathers and unrelated kin help take care of human children, unlike our closest primate relatives.

And we developed another very unusual life history feature—post-menopausal grandmothers. The killer whale is the only other animal we know that outlives its fertility. The human lifespan was expanded at both ends—longer childhood and a longer old age. In fact, anthropologists have argued that those grandmothers were a key to the evolution of learning and culture. They were crucial for the survival of those helpless children and they also could pass on two generations worth of knowledge.

Natural selection often operates on “life history” characteristics, and life history plays an important role in evolution in general. Biologists have long distinguished between “K” species and “r” species. An “r” species (most fish, for example) may produce thousands of offspring, but most of them die and the rest live only a short time. In contrast, “K” species, like primates and whales, have only a few babies, invest a great deal in their care, and live a long time. Generally speaking, a “K” life history strategy is correlated with a larger brain and higher intelligence. We are the ultimate “K” species.

“Life history” is also important because it is especially responsive to information from the environment, not only over evolutionary time but also in the lifetime of a single animal. Tiny water fleas develop a helmet when they mature to protect them from certain predators. When the babies, or even their pregnant mothers, detect more predators in the environment, they speed up the developmental process—they grow helmets earlier and make them larger, even at cost to other functions. In the same way, in other animals, including human beings, early stress triggers a “live fast, die young” life history. Young animals who detect a poor and risky environment grow up more quickly and die sooner.

Our unique human developmental trajectory has cumulatively led to much bigger differences in the way we live and behave. One hundred and fifty thousand years ago the Alpha Centauran biologist wouldn’t have seen much difference between adult humans and our closest primate relatives—art, trade, religious ritual, and complex tools were still far in the future, not to mention agriculture and technology. Our long childhood, and our extended investment in our children, allowed those changes to happen—think of all those grandmothers passing on the wisdom of the past to a new generation of children. Each human generation had a chance to learn a little more about the world from their caregivers, and to change the world a little more themselves. If the Alpha Centauran biologist made a return visit now, she would record the startling human achievements that have come from this long process of cultural evolution.

Evolutionary psychologists have tended to focus on adult men—hunting and fighting got a lot more attention than caregiving. We’ve all seen the canonical museum diorama of the mighty early human hunters bringing down the mastodon. But the children and grandmothers lurking in the background were just as important a part of the story.

You still often read psychological theories that describe both the young and the old in terms of their deficiencies, as if they were just preparation for, or decline from, an ideal grown-up human. But new studies suggest that both the young and the old may be especially adapted to receive and transmit wisdom. We may have a wider focus and a greater openness to experience when we are young or old than we do in the hurly-burly of feeding, fighting and reproduction that preoccupies our middle years.

“Life history” is an important idea in evolution, especially human evolution. But it also gives us a richer way of thinking about our own lives. A human being isn’t just a collection of fixed traits, but part of an unfolding and dynamic story. And that isn’t just the story of our own lives: caregiving and culture link us both to the grandparents who were there before we were born and to the grandchildren who will carry on after we die.

nicholas_g_carr's picture
Author, Utopia is Creepy

By leaps, steps, and stumbles, science progresses. Its seemingly inexorable advance promotes a sense that everything can be known and will be known. Through observation and experiment, and lots of hard thinking, we will come to explain even the murkiest and most complicated of nature’s secrets: consciousness, dark matter, time, the full story of the universe.

But what if our faith in nature’s knowability is just an illusion, a trick of the overconfident human mind? That’s the working assumption behind a school of thought known as mysterianism. Situated at the fruitful if sometimes fraught intersection of scientific and philosophic inquiry, the mysterianist view has been promulgated, in different ways, by many respected thinkers, from the philosopher Colin McGinn to the cognitive scientist Steven Pinker. The mysterians propose that human intellect has boundaries and that some of nature’s mysteries may forever lie beyond our comprehension.

Mysterianism is most closely associated with the so-called hard problem of consciousness: How can the inanimate matter of the brain produce subjective feelings? The mysterians argue that the human mind may be incapable of understanding itself, that we will never understand how consciousness works. But if mysterianism applies to the workings of the mind, there’s no reason it shouldn’t also apply to the workings of nature in general. As McGinn has suggested, “It may be that nothing in nature is fully intelligible to us.”

The simplest and best argument for mysterianism is founded on evolutionary evidence. When we examine any other living creature, we understand immediately that its intellect is limited. Even the brightest, most curious dog is not going to master arithmetic. Even the wisest of owls knows nothing of the anatomy of the field mouse it devours. If all the minds that evolution has produced have bounded comprehension, then it’s only logical that our own minds, also products of evolution, would have limits as well. As Pinker has observed, “The brain is a product of evolution, and just as animal brains have their limitations, we have ours.” To assume that there are no limits to human understanding is to believe in a level of human exceptionalism that seems miraculous, if not mystical.

Mysterianism, it’s important to emphasize, is not inconsistent with materialism. The mysterians don’t suggest that what’s unknowable must be spiritual. They posit that matter itself has complexities that lie beyond our ken. Like every other animal on earth, we humans are just not smart enough to understand all of nature’s laws and workings.

What’s truly disconcerting about mysterianism is that, if our intellect is bounded, we can never know how much of existence lies beyond our grasp. What we know or may in the future know may be trifling compared with the unknowable unknowns. “As to myself,” remarked Isaac Newton in his old age, “I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.” It may be that we are all like that child on the strand, playing with the odd pebble or shell—and fated to remain so.

Mysterianism teaches us humility. Through science, we have come to understand much about nature, but much more may remain outside the scope of our perception and comprehension. If the mysterians are right, science’s ultimate achievement may be to reveal to us its own limits.

nicholas_humphrey's picture
Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

It is commonly assumed that when people make a free choice in an election, the outcome will be what those on the winning side intended. But there are two factors known to cognitive science—though probably not to politicians—which may well render this assumption false.

The first is the fact of “referential opacity,” as it applies to mental states. A peculiar characteristic of mental states—such as believing, wanting, remembering—is that they do not conform to Leibniz’s law. This law states that if two things, A and B, are identical, then in any true statement about A, you can replace A by B, and the new statement will also be true. So, if it’s true A weighs five kilos, it must be true B weighs five kilos; if it’s true A lives in Cambridge, it must be true B lives in Cambridge; and so on. The strange thing, however, is that when it comes to mental states, this substitution no longer works. Suppose, for example, the Duke of Clarence and Jack the Ripper were one and the same person. It could be true you believe the Duke of Clarence was Queen Victoria’s grandson, but not true you believe Jack the Ripper was her grandson.

It’s called referential opacity because the identity of the referents in the two linked mental states is not transparent to the subject. It may seem an abstruse concept, but its implications are profound. For one thing, it sets clear limits on what people really intend by their words or actions, and therefore on their responsibility for the outcome. Take the case of Oedipus. While it’s true that Oedipus decided to marry Jocasta, it’s presumably not true he decided to marry his mother, even though Jocasta and his mother were identical. So Oedipus was wrong to blame himself. Or take the case of Einstein and the atomic bomb. While it’s true that Einstein was happy to discover that E=mc², it’s far from true he was happy to discover the formula for making a nuclear weapon, even if E=mc² is that formula. He said, “If I had known I would have been a clockmaker.” But he did not know.

Now, what about voters’ intentions in elections? Referential opacity can explain why there is so often a mismatch between what voters want and what they get. Take the German elections in 1933. While it’s true that 44% of voters wanted Hitler to become Chancellor, it’s clearly not true they wanted the man who would ruin Germany to become Chancellor, even though Hitler was that very man.

So, let’s turn to the second factor that can cause confusion about the real intentions of voters. This is the phenomenon of “choice blindness,” discovered by Lars Hall and Peter Johansson. In a classic experiment these researchers asked male subjects to choose which of two photos of young women they liked better. They then handed each subject the chosen photo and asked him to explain the reasons for his choice. But they had secretly switched the photos, so that the subject was actually given the photo he did not choose. Remarkably, most subjects did not recognise the switch, and proceeded unperturbed to give reasons for the choice they had not made. Hall and Johansson conclude that people’s overriding need to maintain a consistent narrative can trump the memory of what has actually occurred.

Consider, then, what may happen when, in the context of an election, choice blindness combines with referential opacity. Suppose the majority of voters choose to elect Mr A, with never a thought to electing Mr B. But, after the election, it transpires that, unwittingly and unaccountably, they have in fact got Mr B in place of Mr A. Now people’s need to remain on plot and to make sense of this outcome leads them to rewrite history and persuade themselves it was Mr B they wanted all along.

I’m not saying the vaunted “democratic choice,” the “will of the people,” is a mirage. I am saying it would be as well if the wider public knew that scientists say it should be taken with a pinch of salt.

richard_nisbett's picture
Theodore M. Newcomb Distinguished University Professor of Psychology, University of Michigan; Author, Thinking: A Memoir

Aristotle taught that a stone sinks when dropped into water because it has the property of gravity. Of course, not everything sinks when dropped into water. A piece of wood floats, because it has the property of levity. People who behave morally do so because they have the property of virtue; people who don’t behave morally lack that property.           

Molière lampoons this way of thinking by having a team of physicians in The Imaginary Invalid explain why opium induces sleep, namely because of its dormitive power.

Lampoon or not, most of us think about the behavior of objects and people much of the time in purely dispositional terms. It is properties possessed by the object or person that explain its behavior. Modern physics replaced Aristotle’s dispositional thinking by describing all motion as being due to the properties of an object interacting in particular ways with the field in which it is located.

Modern scientific psychology insists that explanation of the behavior of humans always requires reference to the situation the person is in. The failure to do so sufficiently is known as the Fundamental Attribution Error. In Milgram’s famous obedience experiment, two-thirds of his subjects proved willing to deliver a great deal of electric shock to a pleasant-faced middle-aged man, well beyond the point where he became silent after begging them to stop on account of his heart condition. When I teach about this experiment to undergraduates, I’m quite sure I’ve never convinced a single one that their best friend might have delivered that amount of shock to the kindly gentleman, let alone that they themselves might have done so. They are protected by their armor of virtue from such wicked behavior. No amount of explanation about the power of the unique situation into which Milgram’s subject was placed is sufficient to convince them that their armor could have been breached.

My students, and everyone else in Western society, are confident that people behave honestly because they have the virtue of honesty, conscientiously because they have the virtue of conscientiousness. (In general, non-Westerners are less susceptible to the fundamental attribution error, lacking as they do sufficient knowledge of Aristotle!) People are believed to behave in an open and friendly way because they have the trait of extroversion, in an aggressive way because they have the trait of hostility. When they observe a single instance of honest or extroverted behavior they are confident that, in a different situation, the person would behave in a similarly honest or extroverted way.

In actual fact, when large numbers of people are observed in a wide range of situations, the correlation for trait-related behavior runs about .20 or less. People think the correlation is around .80. In reality, seeing Carlos behave more honestly than Bill in a given situation increases the likelihood that he will behave more honestly in another situation from the chance level of 50 percent to somewhere in the vicinity of 55 to 57 percent. People think that if Carlos behaves more honestly than Bill in one situation, the likelihood that he will behave more honestly than Bill in another situation is 80 percent!

How could we be so hopelessly miscalibrated? There are many reasons, but one of the most important is that we don’t normally get trait-related information in a form that facilitates comparison and calculation. I observe Carlos in one situation where he might display honesty or the lack of it, and then not in another for perhaps a few weeks or months. I observe Bill in a different situation tapping honesty, and then not in another for many months.

This implies that if people received behavioral data in such a form that many people are observed over the same time course in a given fixed situation, our calibration might be better. And indeed it is. People are quite well calibrated for abilities of various kinds, especially sports. The likelihood that Bill will score more points than Carlos in one basketball game given that he did in another is about 67 percent—and people think it’s about 67 percent.
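Those percentages and correlations can be tied together by a standard result for jointly normal scores (an assumption introduced here, not something stated above): the probability that whoever ranks higher on one occasion also ranks higher on another is 1/2 + arcsin(r)/π.

```python
import math

def concordance_probability(r):
    """P(the person who scores higher in one situation also scores higher in
    another), assuming bivariate-normal scores with correlation r."""
    return 0.5 + math.asin(r) / math.pi

for r in (0.20, 0.50, 0.80):
    print(f"r = {r:.2f} -> {concordance_probability(r):.1%}")

# r = 0.20 -> 56.4%   (the 55 to 57 percent range for trait-related behavior)
# r = 0.50 -> 66.7%   (roughly the sports case)
# r = 0.80 -> 79.5%   (close to the 80 percent people intuitively expect)
```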

Our susceptibility to the fundamental attribution error—overestimating the role of traits and underestimating the importance of situations—has implications for everything from how to select employees to how to teach moral behavior.

james_j_odonnell's picture
Classics Scholar, University Librarian, ASU; Author, Pagans

In this somber time, asking what scientific term or concept ought to be more widely known sounds like the setup for a punchline, something like “2+2=4” or “to every action there is an equal and opposite reaction.” We can make a joke like that, but the truth the joke reveals is that “science” is indeed very much a human conception and construction. Science is all in our minds, even as we see dramatic examples of the use of that science all around us.      

So this year’s question is really a question about where to begin: What is there that we should all know that we don’t know as well as we should, don’t apply to our everyday and extraordinary challenges as tellingly as we could, and don’t pass on to children in nursery rhymes and the like?     

My candidate is an old, simple, and powerful one: the law of regression to the mean. It’s a concept from the discipline of statistics, but in real life it means that anomalies are anomalies, coincidences happen (all the time, with stunning frequency), and the main thing they tell us is that the next thing to happen is very likely to be a lot more boring, ordinary, and predictable. Put in the simplest human terms, it teaches us not to be so excitable, not to be so worried, not to be so surprised: Life really will be, for the most part, boring and predictable.      
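
A toy simulation, not from the essay, makes the point concrete: if each observed performance is stable skill plus independent luck, then the most extreme performances in one round are followed, on average, by distinctly more ordinary ones in the next.

```python
import random

random.seed(0)

# Toy model: each observed score = stable skill + independent luck.
N = 100_000
skill = [random.gauss(0, 1) for _ in range(N)]
round1 = [s + random.gauss(0, 1) for s in skill]
round2 = [s + random.gauss(0, 1) for s in skill]

# Pick the top 1 percent of round-one performances (the "hitting streaks").
cutoff = sorted(round1)[int(0.99 * N)]
top = [i for i in range(N) if round1[i] >= cutoff]

mean_r1 = sum(round1[i] for i in top) / len(top)
mean_r2 = sum(round2[i] for i in top) / len(top)
print(f"top group: round-1 average {mean_r1:.2f}, round-2 average {mean_r2:.2f}")
# The round-2 average falls roughly halfway back toward the overall mean of 0:
# the anomaly was partly luck, and the luck does not repeat.
```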

The ancient and late antique intellectuals whom I spend my life studying wouldn’t talk so much about miracles and portents if they could calm down and think about the numbers. The baseball fans thrilled to see the guy on a hitting streak come to the plate wouldn’t be so disappointed when he struck out. Even people reading election returns would see much more normality lurking inside shocking results than television reporters can admit.      

Heeding the law of regression to the mean would help us slow down, calm down, pay attention to the long term and the big picture, and react with a more strategic patience to crises large and small. We’d all be better off. Now if only I could think of a good nursery rhyme for it . . .

gordon_kane's picture
Theoretical Particle Physicist and Cosmologist; Victor Weisskopf Distinguished University Professor, University of Michigan; Author, Supersymmetry and Beyond

Spontaneous symmetry breaking is widespread and fundamental in physics and science. Its most famous occurrence is in Higgs physics: it is the mechanism that allows quarks and electrons to have mass, and the reason the vacuum of our universe is not simply nothing. The notion is also widespread in condensed matter physics, and indeed was first understood there. But it is much broader, and overlooking it can lead to confusion between theories and their solutions in many areas.

The basic idea can be explained simply and generally. Suppose a theory is stated in terms of a single equation, X times Y = 16. For simplicity, consider only positive integer values of X and Y as solutions. Then, counting interchanged values of X and Y as the same solution, there are three solutions: X=1 and Y=16, X=2 and Y=8, and X=Y=4. What is important is that the theory (XY=16) is symmetric if we interchange X and Y, but some solutions are not. The most famous example is the solar system: the theory has the Sun at the center and is spherically symmetric, but the planetary orbits are ellipses, which do not share that symmetry. The spherical symmetry of the theory misled people into expecting circular orbits for centuries. Whenever a symmetric theory has non-symmetric solutions, which is common, it is called spontaneous symmetry breaking.

In the example above, as often in nature, there are several solutions, so more information is needed, either theoretical or experimental, to determine nature’s solution. We could measure one of X or Y, and the other is then determined. Improving the theory leads to an interesting case. Suppose the theory includes an additional equation, X+Y=10, which is also symmetric if we interchange X and Y, so the theory remains symmetric. But now there is a unique solution (up to interchange), X=2 and Y=8, and it is not symmetric. In fact, there are no symmetric solutions.
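
The toy theory is simple enough to check by brute force. The Python sketch below, an illustration rather than part of the essay, enumerates the positive-integer solutions and shows that both versions of the theory are unchanged when X and Y are swapped, while no solution of the augmented theory is.

```python
# Two versions of the "theory", each symmetric under swapping X and Y.
def theory_1(x, y):
    return x * y == 16

def theory_2(x, y):
    return x * y == 16 and x + y == 10

def solutions(theory, limit=16):
    """Enumerate positive-integer solutions with X and Y between 1 and limit."""
    return [(x, y) for x in range(1, limit + 1)
                   for y in range(1, limit + 1) if theory(x, y)]

print(solutions(theory_1))  # [(1, 16), (2, 8), (4, 4), (8, 2), (16, 1)]
print(solutions(theory_2))  # [(2, 8), (8, 2)]

# Swapping x and y maps each list of solutions onto itself, so the theories are
# symmetric. But once X + Y = 10 is added, no individual solution has x == y:
# a symmetric theory whose only solutions are asymmetric.
```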

Magnetism is a familiar real world example. The equations describing individual iron atoms don’t distinguish different directions in space. But when a piece of iron is cooled below about 770°C, it spontaneously develops a magnetic field in some direction. The original symmetry between different directions is broken. Describing this is how the name “spontaneous symmetry breaking” originated. In physics what happens is understood—known electromagnetic forces tend to make the spins of individual atoms become parallel, and each spin is a little magnet.

Normally we expect all fields (such as electromagnetic fields) to be zero in the ground state, or vacuum, of the universe. Otherwise they add energy, and the universe will naturally settle into the state of minimum energy. But we have now learned that the universe has lower energy when the Higgs field is non-zero than when it is zero, a non-symmetric result, and that is essential for understanding how electrons and quarks get mass. Nature’s solution is a state of reduced symmetry.
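
A standard textbook caricature of this, not given in the essay, is a field φ with the double-well potential V(φ) = -μ^2 φ^2 + λ φ^4: the potential is symmetric under φ → -φ, yet its lowest-energy states sit at nonzero φ. The sketch below uses arbitrary illustrative values of μ and λ and simply locates the minimum numerically.

```python
import numpy as np

# Textbook double-well toy model of a symmetric theory with asymmetric ground
# states. The values of mu and lam are arbitrary illustrative choices.
mu, lam = 1.0, 0.25

def V(phi):
    return -mu**2 * phi**2 + lam * phi**4   # unchanged under phi -> -phi

phi = np.linspace(-3.0, 3.0, 2001)
ground_state = phi[np.argmin(V(phi))]

print(f"V(0) = {V(0.0):.2f}")
print(f"energy is minimized at phi = {ground_state:.2f}")
# The minima sit at phi = +/- mu / sqrt(2 * lam), about +/- 1.41 here, where V
# is negative; phi = 0 is symmetric but has higher energy, so the "vacuum"
# settles at a nonzero field value, just as the Higgs field does.
```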

In many fields we make theories to describe and explain phenomena. But the behavior of systems is described by the solutions to the theories, not by the theories alone. We saw here that trying to deduce the properties of the solutions, and hence the behavior of phenomena in the sciences, the social sciences, and the world in general, from the form of the theory alone can be completely misleading. The reverse perspective also holds: the properties of the theory (such as its symmetries) may be hidden when we observe only the non-symmetric solutions. This is easy to see when a system is described by equations, but it is true much more generally. These ideas should be much better known. 

john_horgan's picture
Director, Center for Science Writings, Stevens Institute of Technology

“Neural code” is by far the most under-appreciated term, and concept, in science. It refers to the rules or algorithms that transform action potentials and other processes in the brain into perceptions, memories, meanings, emotions, intentions, and actions. Think of it as the brain's software.

The neural code is science’s deepest, most consequential problem. If researchers crack the code, they might solve such ancient philosophical mysteries as the mind-body problem and the riddle of free will. A solution to the neural code could also give us unlimited power over our brains and hence minds. Science fiction—including mind control, mind reading, bionic enhancement and even psychic uploading—will become reality. Those who yearn for the Singularity will get their wish.

More than a half-century ago, Francis Crick and others deciphered the genetic code, which underpins heredity and other biological functions. Crick spent his final decades seeking the neural code—in vain, because the most profound problem in science is also by far the hardest. The neural code is certainly not as simple, elegant and universal as the genetic code. Neuroscientists have, if anything, too many candidate codes. There are rate codes, temporal codes, population codes and grandmother-cell codes, quantum and chaotic and information codes, codes based on oscillations and synchronies.

But given the relentless pace of advances in optogenetics, computation and other technologies for mapping, manipulating and modeling brains, a breakthrough could be imminent. Question: Considering the enormous power that could be unleashed by a solution to the neural code, do we really want it solved? 

mario_livio's picture
Astrophysicist; Author, Why?: What Makes Us Curious

Nicolaus Copernicus taught us in the 16th century that we are nothing special, in the sense that the Earth on which we live is not at the center of the solar system. This realization, which embodies a principle of mediocrity on the astrophysical scale, has become known as The Copernican Principle. 

In the centuries that have passed since Copernicus’s discovery, it seems that the Copernican principle has significantly gained strength through a series of steps that have demonstrated that our place in the cosmos is of lesser and lesser importance.

First, astronomer Harlow Shapley showed at the beginning of the 20th century that the solar system is not at the center of the Milky Way galaxy. It is in fact about two thirds of the way out. Second, recent estimates based on searches for extrasolar planets put the number of Earth-size planets in the Milky Way in the billions. A good fraction of those are even in that “Goldilocks” region (not too hot, not too cold) around their host stars, which allows liquid water to exist on the planetary surface. So we are not very special in that respect either. Third, astronomer Edwin Hubble showed that there exist galaxies other than the Milky Way. The most recent estimate of the number of galaxies in the observable universe gives the staggering number of two trillion. 

Even the stuff we are made of—ordinary (baryonic) matter—constitutes less than 5 percent of the cosmic energy budget, with the rest being in the form of dark matter—matter that does not emit or absorb light (constituting about 25 percent), and dark energy—a smooth form of energy that permeates all space (about 70 percent). And if all of that is not enough, in recent years, some theoretical physicists started speculating that even our entire universe may be but one member in a huge ensemble of universes—a multiverse (another scientific concept we ought to get used to). 

It seems, therefore, that the Copernican principle is operating on all scales, including the largest cosmological ones. In other words, everybody should be aware of the Copernican principle because it tells us that from a purely physical perspective we are just a speck of dust in the grand scheme of things. 

This state of affairs may sound depressing, but from a different point of view there is actually something extraordinarily uplifting in the above description. 

Notice that every step along the way of the increasing validity of the Copernican principle also represents a major human discovery. That is, each decrease in our physical significance was at the same time accompanied by a huge increase in our knowledge. The human mind expanded in this sense just as fast as the expansion of the known universe (or the multiverse). Copernican humility is therefore a good scientific principle to adopt, but at the same time we should keep our curiosity and passion for exploration alive and vibrant.

hugo_mercier's picture
Cognitive Scientist, French National Center for Scientific Research; Author, Not Born Yesterday

Do Christians believe that God is omniscient in the same way that they believe there’s a table in the middle of their living room?

On the one hand, we can refer to both attitudes using the same term—belief—and Christians would readily assent to both.

On the other hand, these two beliefs behave markedly differently.

The belief about the living room table is, so to speak, free to roam around our minds, guiding behavior (we must go around the table, we can put dishes on it) and our inferences (a child might use it as a hiding place, its size limits how many guests we can have for dinner).

By contrast, the belief about God’s omniscience seems more constrained. It guides some behaviors—for instance, verbal behavior when quizzed on the subject—but not others. Believers in God’s omniscience might still try to hide actions or thoughts from God. They sometimes try to attract his attention. They may still imagine God attending to prayers one after the other.

That people behave and draw a variety of inferences in a way that ignores or contradicts some of their beliefs is, to some extent, common sense, but it has also been experimentally demonstrated. This is true for a variety of religious beliefs, but also for many scientific beliefs. You have learned in school that the earth revolves around the sun but you may still think of the sun as rising in the east and setting in the west.

To help explain these apparent contradictions, Dan Sperber has introduced a distinction between intuitive and reflective beliefs. Intuitive beliefs are formed through simple perceptual and inferential processes. They can also be acquired through communication provided that the information that is communicated is of a kind that could have been acquired through simple perception and inference.  For instance, if someone tells you they have a table in their living room, you can form an intuitive belief about the table. Intuitive beliefs are the common stock of our minds, the basic data on which we rely to guide our behavior and inference in everyday life—as do many other animals.

However, humans are endowed with an extraordinary capacity to hold a variety of attitudes towards thoughts. You can believe that Bob is Canadian, but you can also doubt that Bob is Canadian, suppose that Bob is Canadian for the sake of an argument, attribute the belief that Bob is Canadian to someone else, and so on. Most of these attitudes toward a thought do not entail believing the thought—if you doubt that Bob is Canadian, you clearly do not believe that he is. Holding some of these attitudes toward a thought, however, amounts to treating this thought as a belief of yours: for instance, if you believe that there is a document proving that Bob is Canadian, or if you believe that Susan, who told you that Bob is Canadian, is to be trusted in this respect, then you have in mind compelling reasons to accept as true the thought that Bob is Canadian. At least initially, this thought occurs in your mind not as a free-floating belief, but embedded in a higher order belief that justifies believing that Bob is Canadian. This makes your belief that Bob is Canadian reflective in Sperber’s sense. In such trivial cases, of course, you may disembed the thought that Bob is Canadian from the higher order belief that justifies it, and accept it as a plain intuitive belief free to roam around your mind. You may even forget how you initially came to know that Bob is Canadian.

In the same vein, if you are told by someone you trust in this respect that God is omniscient, you should come to hold yourself, in a reflective way, the belief that God is omniscient. However, by contrast with the case of Bob being Canadian, it’s not clear how you could turn the belief in God’s omniscience into an intuitive belief free to roam in your mind. The very idea of omniscience isn’t part of the standard furnishing of our minds; omniscience cannot be perceived, or inferred from anything we might perceive; there is nothing intuitive about it. When we think about agents, we think of them as having cognitive and sensory limitations, things they know and things they don’t know, things they can see and things they can’t see—because that’s how normal agents are. As a result, the belief in God’s omniscience is stuck in its position of reflective belief; it cannot be disembedded and turned into an intuitive belief. In this position, it is largely insulated from our ordinary inferences and from guiding mundane behavior.

If the belief in God’s omniscience is stuck in this reflective status, how can it still influence some of our actions? Through the intuitive belief it is embedded in: the belief that someone you trust in this respect believes God is omniscient. This higher order belief has been acquired through intuitive processes that calibrate our trust in others, and it can be used in guiding inferences and behaviors—for instance by making a Christian affirm and agree that God is omniscient.

The word “belief” collapses together at least two functionally different attitudes: intuitive and reflective beliefs. That some of our most cherished beliefs are reflective helps solve some apparent paradoxes, such as how people can hold contradictory beliefs, or ignore much of their beliefs in their actual practice. By drawing attention to the differences in the cognitive mechanisms that interact with intuitive and reflective beliefs—and the intuitive beliefs in which reflective beliefs are embedded—it also offers a more sophisticated and accurate picture of how our minds work.

clifford_pickover's picture
Author, The Math Book, The Physics Book, and The Medical Book trilogy

In the 2006 Taiwanese thriller movie Silk, a scientist creates a Menger Sponge, a special kind of hole-filled cube, to capture the spirit of a child. The Sponge not only functions as an anti-gravity device but seems to open a door into a new world. As fanciful as this film concept is, the Menger Sponge considered by mathematicians today is certainly beautiful to behold, when rendered using computer graphics, and a concept that ought to be more widely known. Certainly, it provides a wonderful gateway to fractals, mathematics, and reasoning beyond the limits of our own intuition.    

The Menger Sponge is a fractal object with an infinite number of cavities—a nightmarish object for any dentist to contemplate. The object was first described by Austrian mathematician Karl Menger in 1926. To construct the sponge, we begin with a mother cube and subdivide it into twenty-seven identical smaller cubes. Next, we remove the cube in the center and the six cubes that share faces with it. This leaves behind twenty cubes. We continue to repeat the process forever with smaller and smaller cubes. The number of cubes increases as 20^n, where n is the number of iterations performed on the mother cube. The second iteration gives us 400 cubes, and by the time we get to the sixth iteration we have 64,000,000 cubes.       

Each face of the Menger Sponge is called a Sierpiński carpet. Fractal antennae based on the Sierpiński carpet are sometimes used as efficient receivers of electromagnetic signals. Both the carpets and the entire cube have fascinating geometrical properties. For example, the sponge has infinite surface area while enclosing zero volume. Imagine the skeletal remains of an ancient dinosaur that has turned into the finest of dust through the gentle acid of time. What remains seems to occupy our world in a ghostlike fashion but no longer “fills” it.     

The Menger Sponge has a fractional dimension (technically referred to as the Hausdorff dimension) between a plane and a solid, approximately 2.73, and it has been used to visualize certain models of a foam-like space-time. Dr. Jeannine Mosely has constructed a Menger Sponge model from over 65,000 business cards that weighs about 150 pounds (70 kg).     
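
The numbers quoted in the last few paragraphs are easy to reproduce. The sketch below, a Python illustration rather than anything from the essay, counts the sub-cubes at each iteration, tracks the vanishing total volume, and computes the fractal dimension as log 20 / log 3.

```python
import math

def menger(n):
    """Counts for the n-th iteration of a Menger Sponge built from a unit cube."""
    cubes = 20 ** n           # each step keeps 20 of the 27 sub-cubes
    side = (1 / 3) ** n       # side length of each remaining sub-cube
    volume = cubes * side**3  # = (20/27)**n, which tends toward zero
    return cubes, side, volume

for n in (1, 2, 6):
    cubes, side, volume = menger(n)
    print(f"n={n}: {cubes:,} cubes of side 1/3^{n}, total volume {volume:.4f}")

# Fractal (Hausdorff) dimension: 20 self-similar copies, each scaled by 1/3.
print(f"dimension = log 20 / log 3 = {math.log(20) / math.log(3):.4f}")  # ~2.7268
```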

The Menger Sponge is an important concept for the general public to become familiar with partly because it reaffirms the idea that the line between mathematics and art can be a fuzzy one; the two are fraternal philosophies formalized by ancient Greeks like Pythagoras and Ictinus and dwelled on by such greats as Fra Luca Bartolomeo de Pacioli (1447–1517), the Italian mathematician and Franciscan friar, who published the first printed illustration of a rhombicuboctahedron, drawn by Leonardo da Vinci, in De divina proportione. The rhombicuboctahedron, like the Menger Sponge, is a beauty to behold when rendered graphically—an Archimedean solid with eight triangular faces and eighteen square faces, with twenty-four identical vertices, and with one triangle and three squares meeting at each vertex.     

Fractals, such as the Menger Sponge, often exhibit self-similarity, which suggests that various exact or inexact copies of an object can be found in the original object at smaller size scales. The detail continues for many magnifications—like an endless nesting of Russian dolls within dolls. Some of these shapes exist only in abstract geometric space, but others can be used as models for complex natural objects such as coastlines and blood vessel branching. The dazzling computer-generated images can be intoxicating, perhaps motivating students’ interest in math as much as any other mathematical discovery in the last century.     

The Menger Sponge reminds students, educators, and mathematicians of the need for computer graphics. As Peter Schroeder once wrote, “Some people can read a musical score and in their minds hear the music.... Others can see, in their mind’s eye, great beauty and structure in certain mathematical functions.... Lesser folk, like me, need to hear music played and see numbers rendered to appreciate their structures.”

matthew_d_lieberman's picture
Professor of Psychology, UCLA; Author, Social: Why Our Brains Are Wired to Connect

The comedian George Carlin once noted that “anyone driving slower than you is an idiot and anyone going faster than you is a maniac.” The obscure scientific term explaining why we see most people other than ourselves as unintelligent or crazy is naïve realism. Its origins trace back to at least the 1880s, when philosophers used the term to suggest we ought to take our perceptions of the world at face value. In its modern incarnation, it has taken on almost the opposite meaning, with psychologist Lee Ross using the term to indicate that although most people take their perceptions of the world at face value, this is a profound error that regularly causes virtually unresolvable conflicts between people.

Imagine three drivers in Carlin’s world—Larry, Moe, and Curly. Larry is driving 30 MPH, Moe is driving 50 MPH, and Curly is driving 70 MPH. Larry and Curly agree that Moe’s driving was terrible, but are likely to come to blows over whether Moe is an idiot or a maniac. Meanwhile, Moe disagrees with both because it is obvious to him that Larry is an idiot (which Curly agrees with) and Curly is a maniac (which Larry agrees with). As in ordinary life, Larry, Moe, and Curly each fail to appreciate that their own understanding of the others is hopelessly tied to their own driving rather than reflecting something objective about the other person.

Naïve realism occurs as an unfortunate side effect of an otherwise adaptive aspect of brain function. Our remarkably sophisticated perceptual system performs its countless computations so rapidly that we are unaware of all the special effects teams working in the background to construct our seamless experience. We “see” so much more than is in front of us thanks to our brains automatically combining sensory input with our expectations and motivations. This is why a bicycle that is partially hidden by a wall is instantly “seen” as a normal bicycle without a moment’s thought that it might only be part of a bicycle. Because these constructive processes happen behind the scenes of our mind, we have no idea this is happening and thus we mistake our perception for reality itself—a mistake we are often better off for having made.

When it comes to perceiving the physical world, we appear to mostly see things the same way. When confronted with trees, shoes, and gummy bears, our brains construct these things for us in similar enough ways that we can agree on which to climb, which to wear, and which to eat. But when we move to the social domain of understanding people and their interactions, our “seeing” is driven less by external input and more by expectation and motivation. Because our mental construction of the social world is just as invisible to us as our construction of the physical world, our idiosyncratic expectations and motivations are much more problematic in the social realm. In short, we are just as confident in our assessment of Donald Trump’s temperament and Hillary Clinton’s dishonesty as we are in our assessment of trees, shoes, and gummy bears. In both cases, we are quite certain that we are seeing reality for what it is.

And this is the real problem. This isn’t a heuristics and biases problem where our simplistic thinking can be corrected when we see the correct solution. This is about “seeing” reality. If I am seeing reality for what it is and you see it differently, then one of us has a broken reality detector and I know mine isn’t broken. If you can’t see reality as it is, or worse yet, can see it but refuse to acknowledge it, then you must be crazy, stupid, biased, lazy or deceitful.

In the absence of a thorough appreciation for how our brain ensures that we will end up as naïve realists, we can’t help but see complex social events differently from one another, with each of us denigrating the other for failing to see what is so obviously true. Although there are real differences that separate groups of people, naïve realism might be the most pernicious undetected source of conflicts and their durability. From Israelis vs. Palestinians, to the American political left and right, to the fight over vaccines and autism—in each case our inability to appreciate our own miraculous construction of reality is preventing us from appreciating the miraculous construction of reality happening all around us.


william_poundstone's picture
Journalist; Author, How Do You Fight a Horse-Sized Duck?: Secrets to Succeeding at Interview Mind Games and Getting the Job You Want; Nominated twice for the Pulitzer Prize

Stigler’s law of eponymy says that no scientific discovery is named for its original discoverer. Notable examples include the Pythagorean theorem, Occam’s razor, Halley’s comet, Avogadro’s number, Coriolis force, Gresham’s law, Venn diagrams, Hubble’s law…

Statistician Stephen Stigler coined this law in a 1980 festschrift honoring sociologist Robert K. Merton. It was Merton who had remarked that original discoverers never seem to get credit. Stigler playfully appropriated the rule, ensuring that Stigler’s law would be self-referential.

The generalization is not limited to science. Elbridge Gerry did not invent gerrymandering, nor Karl Baedeker the travel guide. Historians of rock music trace the lineage of the Bo Diddley beat, which didn’t originate with that bluesman. The globe is filled with place names honoring explorers who discovered places already well known to indigenous peoples (Hudson River, Hudson Bay; Columbia, District of Columbia and Columbus, Ohio; etc. Perhaps there are extraterrestrials who would consider the Magellanic Clouds a particularly egregious example).

Béchamel sauce is named for a once-famous gastronome, causing the rival Duke of Escars to complain: “That fellow Béchamel has all the luck! I was serving breast of chicken à la crème more than twenty years before he was born, but I have never had the chance of giving my name to even the most modest sauce.”

Stigler’s law is usually taken to be facetious, like Murphy’s law (which predates Edward A. Murphy, Jr., by the way). It is facetious in its absolutism. But it says something non-trivial about the nature of discovery and originality.

The naive take on Stigler's law is that the "wrong" people often get the credit. It's true that famous scientists, and others, sometimes get disproportionate credit relative to less famous colleagues. (That's actually a different law, the Matthew effect.)

What Stigler's law really tells us is that priority isn't everything. Edmund Halley's contribution was not in observing the 1682 comet but in recognizing that observations going back to 1531 (and millennia earlier, we now know) were of the same periodic comet. This claim would have made little sense before Newton's law of universal gravitation. Halley's achievement was developing the right idea at the right time, when the tools were available and the ambient culture was able to appreciate the result. Timeliness can matter as much as being first.

daniel_c_dennett's picture
Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, From Bacteria to Bach and Back

Psychologist James J. Gibson introduced the term affordance way back in the seventies. The basic idea is that the perceptual systems of any organism are designed to “pick up” the information that is relevant to its survival and ignore the rest. The relevant information is about opportunities  “afforded” by the furnishings of the world: holes afford hiding in, cups afford drinking out of, trees afford climbing (if you’re a child or a monkey or a bear, but not a lion or a rabbit), and so forth. Affordances make a nicely abstract category of behavioral options that can be guided by something other than blind luck—in other words, by information extracted from the world. Affordances are “what the environment offers the animal for good or ill,” according to Gibson, and “the information is in the light.” (Gibson, like most psychologists and philosophers of perception, concentrated on vision.)  While many researchers and theoreticians in the fledgling interdisciplinary field of cognitive science found Gibson’s basic idea of affordances compelling, Gibson and his more radical followers managed to create a cult-like aura around his ideas, repelling many otherwise interested thinkers.

The huge gap in Gibson’s perspective was his refusal even to entertain the question of how this “direct pickup” of information was accomplished by the brain. As a Gibsonian slogan put it, “it’s not what’s in your head; it’s what your head is in.” But as a revisionary description of the purpose of vision and the other senses, and a redirection of theorists’ attention from retinal images to three-dimensional optical arrays (the information is in the light) through which the organism moves, Gibson’s idea of affordances has much to recommend it—and we’ll do a better job of figuring out how the neural machinery does its jobs when we better understand the jobs assigned to it.

Mainly, Gibson helps us move away from the half-way-only theories of consciousness that see the senses as having completed their mission once they have created “a movie playing inside your head,” as David Chalmers has put it. There sometimes seems to be such a movie, but there is no such movie, and if there were, the task of explaining the mind would have to include explaining how this inner movie was perceived by the inner sense organs that then went about the important work of the mind: helping the organism discern and detect the available opportunities—the affordances—and acting appropriately on them. To identify an affordance is to have achieved access to a panoply of expectations that can be exploited, reflected upon (by us and maybe some other animals), used as generators of further reflections, etc. Consciousness is still a magnificent set of puzzles, but appears as less of a flatfooted mystery when we think about the fruits of cognition with Gibson’s help. The term is growing in frequency across the spectrum of cognitive science, but many users of the term seem to have a diminished appreciation of its potential.