2014 : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?

John Brockman
Editor, Edge.org; Chairman of Brockman, Inc.; Author, By the Late John Brockman, The Third Culture

Edge.org was launched in 1996 as the online version of "The Reality Club" and as a living document on the Web to display the activities of "The Third Culture." 

THE REALITY CLUB

The Reality Club was an informal gathering of intellectuals who met from 1981 to 1996 in Chinese restaurants, artist lofts, investment banking firms, ballrooms, museums, living rooms, and elsewhere. Reality Club members presented their work with the understanding that they would be challenged. The hallmark of The Reality Club was rigorous and sometimes impolite (but not ad hominem) discourse. The motto of the Club was inspired by the late artist-philosopher James Lee Byars: "To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves."

I met Byars in 1969 when he sought me out after the publication of my first book, By the Late John Brockman. We were both in the art world, and we shared an interest in language, in the uses of the interrogative, in avoiding the anesthesiology of wisdom, and in "the Steins"—Einstein, Gertrude Stein, Wittgenstein, and Frankenstein ("the shtick of the Steins"). In 1971, our dialogue ("Jimmie and Johnny") informed James Lee's creation of The World Question Center.

James Lee Byars (1932-1997), Founder of The World Question Center

I wrote the following about his project at the time of his death in Egypt in 1997:

James Lee inspired the idea that led to the Reality Club (and subsequently to Edge), and is responsible for the motto of the club. He believed that to arrive at an axiology of societal knowledge it was pure folly to go to a Widener Library and read 6 million volumes of books. (In this regard he kept only four books at a time in a box in his minimally furnished room, replacing books as he read them.) This led to his creation of the World Question Center in which he planned to gather the 100 most brilliant minds in the world together in a room, lock them behind closed doors, and have them ask each other the questions they were asking themselves.

The expected result, in theory, was to be a synthesis of all thought. But between idea and execution are many pitfalls. James Lee identified his 100 most brilliant minds (a few of them have graced the pages of this Site), called each of them, and asked what questions they were asking themselves. The result: 70 people hung up on him.

That was in 1971. New technologies = new perceptions. Email, the Web, mobile devices, and social media today allow for a serious implementation of James Lee's grand design. Though the venue is now online, the spirit of The Reality Club lives on in the lively back-and-forth on the hot-button ideas driving the conversation today.

"To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves." 

As James Lee said: "To accomplish the extraordinary, you must seek extraordinary people." At the center of every Edge project are remarkable people and remarkable minds: scientists, artists, philosophers, technologists, and entrepreneurs who are at the center of today's intellectual, technological, and scientific landscape. They are representative of The Third Culture I wrote about in "The Emerging Third Culture," a 1991 essay, and a book, The Third Culture: Beyond the Scientific Revolution, published in 1995.

THE THIRD CULTURE

The third culture consists of those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are. 

It is a large enough umbrella to also include the "digerati," the doers, thinkers, and writers, connected in ways they may not even appreciate, who have tremendous influence on the emerging communication revolution surrounding the growth of the Internet and the Web.

Edge is a living document on the Web that displays "the third culture" in action. The "content" of Edge is the group of people who connect in this way. Edge is a conversation.

The ideas presented on Edge are speculative; they represent the frontiers of knowledge in the areas of evolutionary biology, genetics, computer science, neurophysiology, psychology, and physics. Some of the fundamental questions posed are: Where did the universe come from? Where did life come from? Where did the mind come from? Emerging out of the third culture is a new natural philosophy, founded on the realization of the import of complexity, of evolution. Very complex systems, whether organisms, brains, the biosphere, or the universe itself, were not constructed by design; all have evolved.

There is a new set of metaphors to describe ourselves, our minds, the universe, and all of the things we know in it, and it is the intellectuals with these new ideas and images, those scientists and others doing things and writing their own books, who drive our times.

Through the years, Edge has had a simple criterion for choosing contributors. We look for people whose creative work has expanded our notion of who and what we are. A few are bestselling authors or are famous in the mass culture. Most are not. Rather, we encourage work on the cutting edge of the culture and the investigation of ideas that have not been generally exposed. We are interested in "thinking smart"; we are not interested in received "wisdom." In communications theory, information is not defined as data or input but rather as "a difference that makes a difference." It is this level we hope our contributors will achieve.

Edge encourages people who can take the materials of the culture in the arts, literature, and science and put them together in their own way. We live in a mass-produced culture where many people, even many established cultural arbiters, limit themselves to secondhand ideas, thoughts, and opinions. Edge consists of individuals who create their own reality and do not accept an ersatz, appropriated reality. The Edge community consists of people who are out there doing it rather than talking about and analyzing the people who are doing it.

Edge bears a resemblance to the seventeenth-century Invisible College, a precursor to the Royal Society. Its members included scientists such as Robert Boyle, John Wallis, and Robert Hooke, and its common theme was the acquisition of knowledge through experimental investigation. Another inspiration is The Lunar Society of Birmingham, an informal club of the leading cultural figures of the new industrial age: James Watt, Erasmus Darwin, Josiah Wedgwood, Joseph Priestley, and Benjamin Franklin. While different from the Algonquin Round Table or the Bloomsbury Group, Edge offers the same quality of intellectual adventure.

In the words of the novelist Ian McEwan, edge.org is "open-minded, free-ranging, intellectually playful… an unadorned pleasure in curiosity, a collective expression of wonder at the living and inanimate world… an ongoing and thrilling colloquium." 

At the end of the year in 1999, for the first anniversary edition of Edge, I asked a number of the "Edgies" to use the interrogative. I asked "the most subtle sensibilities in the world what question they are asking themselves." We've been doing it annually ever since. 

I work with three of the original Edgies who year in and year out provide the core sounding board for the ideas and information we present to the public. I refer to them in private correspondence as "The Council." Every year, beginning late summer, I consult with Stewart Brand, Kevin Kelly, and George Dyson, and together we create the Edge Annual Question that Edge has been asking for the past fourteen years.

Stewart Brand is the founder and editor of Whole Earth Catalog and author of Whole Earth Discipline; Kevin Kelly helped to launch Wired in 1993 and is the author of Out of Control and What Technology Wants; and George Dyson, a science historian, is the author of Darwin Among the Machines and Turing's Cathedral. This year, Laurie Santos, Associate Professor of Psychology and Director of the Comparative Cognition Laboratory at Yale, became involved, adding to the mix her keen intellect as well as a wide range of contacts among the leading thinkers of her generation.


George Dyson, Stewart Brand, John Brockman, Kevin Kelly

It's not easy coming up with a question. James Lee used to say: "I can answer the question, but am I bright enough to ask it?" We are looking for questions that inspire answers we can't possibly predict. Our goal is to provoke people into thinking thoughts they normally might not have.

We pay a lot of attention to framing the question and to soliciting early responses from individuals who can set a high bar. This is critical. These responses seed the site and challenge and encourage the wider group to think in surprising ways. The conversation goes on, and on, for weeks, then months, as we widen the circle and invite more Edgies in to test the question under consideration and to hear new ideas. Twice, at the very last minute, an idea has popped up so obviously right that we scrapped months of work and just went with it.

This was the case with research psychologist Steven Pinker's 2012 question, "What Is Your Favorite Deep, Elegant, Or Beautiful Explanation?", and with theoretical psychologist Nicholas Humphrey's 2005 question, "What Do You Believe Is True Even Though You Cannot Prove It?", which earned him the title of "Edge Question Laureate" and about which BBC Radio 4 noted: "Fantastically stimulating...It's like the crack cocaine of the thinking world.... Once you start, you can't stop thinking about that question."

The online publication of the annual question occurs in mid-January, and in recent years it has been followed, a little more than a year later, by a printed book. Last year we worried about worrying by asking "What Should We Be Worried About?". This year's question comes out of "HeadCon '13: What's New in Social Science," a two-day Edge seminar that took place in September of last year. At one point, psychologist Laurie Santos mentioned to the group that she was interested in why there was no mechanism in social science for retiring ideas in order to make room for new initiatives.

A lively discussion followed, and it quickly became clear that Santos was onto a possible Edge Question. After two weeks of intense conversations, several Edgies expressed concern that the danger with a question about retiring ideas is that the responses could turn negative and that some might see it as an invitation to trash rivals. Others pointed out that this is the case every year, no matter what question is asked. We decided to go with the question after one Edgie emailed the following comment, which tipped the balance: "Science is argument, not advertising."

I am pleased to present the Edge Question 2014, asked by Laurie Santos.


Geoffrey West
Theoretical Physicist; Shannan Distinguished Professor and Past President, Santa Fe Institute; Author, Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies

Everything? Well, wait a minute. Questioning A Theory of Everything may be beating a dead horse, since I’m certainly not the first to be bothered by its implicit hyperbole, but let’s face it: referring to one’s field of study as The Theory of Everything smacks of arrogance and naiveté. Although it’s only been around for a relatively short period and may already be dying a natural death, the phrase, though certainly not the endeavour, should be retired from serious scientific literature and discourse.

Let me elaborate. The search for grand syntheses, for commonalities, regularities, ideas and concepts that transcend the narrow confines of specific problems or disciplines, is one of the great inspirational drivers of science and scientists. Arguably, it is also a defining characteristic of Homo sapiens sapiens. Perhaps the binomial form of sapiens is some distorted poetic recognition of this. Like the invention of gods and God, the concept of A Theory of Everything connotes the grandest vision of all, the inspiration of all inspirations, namely that we can encapsulate and understand the entirety of the universe in a small set of precepts, in this case, a concise set of mathematical equations. Like the concept of God, however, it is potentially misleading and intellectually dangerous.

Among the classic grand syntheses in science are Newton’s laws, which taught us that the heavenly laws are no different from the earthly; Maxwell’s unification of electricity and magnetism, which brought the ephemeral aether into our lives; Darwin’s theory of natural selection, which reminded us that we’re just animals and plants after all; and the laws of thermodynamics, which suggest we can’t go on forever. Each of these has had profound consequences, not only in changing the way we think about the world but also in laying the foundations for technological advancements that have led to the standard of living many of us are privileged to enjoy. Nevertheless, they are all, to varying degrees, incomplete. Indeed, understanding the boundaries of their applicability, the limits to their predictive power, and the ongoing search for exceptions, violations, and failures have provoked even deeper questions and challenges, stimulating the continued progress of science and the unfolding of new ideas, techniques, and concepts.

One of the great ongoing scientific challenges is the search for a Grand Unified Theory of the elementary particles and their interactions, including its extension to understanding the cosmos and even the origin of space-time itself. Such a theory would be based on a parsimonious set of underlying mathematisable universal principles that integrate and explain all the fundamental forces of nature, from gravity and electromagnetism to the weak and strong nuclear forces, incorporating Newton’s laws, quantum mechanics, and general relativity. Fundamental quantities like the speed of light, the dimensionality of space-time, and the masses of the elementary particles would all be predicted, and the equations governing the origin and evolution of the universe through to the formation of galaxies and beyond would be derived, and so on. This constitutes The Theory of Everything. It is a truly remarkable and enormously ambitious quest that has occupied thousands of researchers for over fifty years at a cost of billions of dollars. Measured by almost any metric, this quest, which is still far from its ultimate goal, has been enormously successful, leading, for example, to the discovery of quarks and the Higgs, to black holes and the Big Bang, to quantum chromodynamics and string theory…and to many Nobel Prizes.

But Everything? Well, hardly. Where’s life, where are animals and cells, brains and consciousness, cities and corporations, love and hate, etc, etc? How does the extraordinary diversity and complexity seen here on earth arise? The simplistic answer is that these are inevitable outcomes of the interactions and dynamics encapsulated in the Theory. Time evolves from the geometry and dynamics of strings, the universe expands and cools, and the hierarchy from quarks to nucleons, to atoms and molecules, to cells, brains, and emotions and all the rest come tumbling out; a sort of deus ex machina, a result of "just" turning the crank of increasingly complicated equations and computations presumed, in principle, to be soluble to any sufficient degree of accuracy. Qualitatively, this extreme version of reductionism may have some validity, but Something is missing.

The "Something" includes concepts like information, emergence, accidents, historical contingency, adaptation and selection, all characteristics of complex adaptive systems whether organisms, societies, ecosystems or economies. These are composed of myriad individual constituents or agents that take on collective characteristics that are generally unpredictable, certainly in detail, from their underlying components even if the interactive dynamics are known. Unlike the Newtonian paradigm upon which The Theory of Everything is based, the complete dynamics and structure of complex adaptive systems cannot be encoded in a small number of equations. Indeed, in most cases, probably not even in an infinite number! Furthermore, predictions to arbitrary degrees of accuracy are not possible, even in principle.

Perhaps, then, the most surprising consequence of a visionary Theory of Everything is that it implies that, on the grand scale, the universe, including its origins and evolution, though extremely complicated, is not complex but, in fact, is surprisingly simple since it can be encoded in a limited number of equations, conceivably only one. This is in stark contrast to here on earth where we are integral to some of the most diverse, complex and messy phenomena that occur anywhere in the universe, and which require additional, possibly non-mathematisable concepts, to understand. So, while applauding and admiring the search for a Grand Unified Theory of all the basic forces of nature, let’s drop the implication that it can, in principle, explain and predict Everything. Let us instead incorporate a parallel quest for A Grand Unified Theory of Complexity. The challenge of developing a quantitative, analytic, principled, predictive framework for understanding complex adaptive systems is surely a grand challenge for the 21st Century. Like all grand syntheses, it will inevitably remain incomplete but nevertheless will undoubtedly inspire significant, possibly revolutionary, new ideas, concepts, and techniques.

Anton Zeilinger
Nobel laureate (2022 - Physics); Physicist, University of Vienna; Scientific Director, Institute of Quantum Optics and Quantum Information; President, Austrian Academy of Sciences; Author, Dance of the Photons: From Einstein to Quantum Teleportation

The idea to be abandoned is the idea that there is no reality in the quantum world. It probably arose for two reasons: on the one hand, because one cannot always ascribe a precise value to a physical property; on the other, because within the wide spectrum of interpretations of quantum mechanics, some suggest that the quantum state does not describe an external reality, but rather that the properties come about only in the mind of the observer and therefore that consciousness plays a crucial role.

Let us consider for a second the famous double-slit experiment. Such experiments, or their equivalents, have to date been performed not only with single photons and other kinds of single particles, like neutrons, protons, and electrons, but even with very large macromolecules, such as buckyballs and larger still. Specifically, we do the experiment with buckyballs—the C-60 or C-70 molecules. You have two slits, and under the right experimental conditions you observe a distribution of the buckyballs behind the slits that has maxima and minima: the interference pattern. This is due to interference of the probability waves passing through both slits. But, following Einstein in his famous debate with Niels Bohr, we might ask, if we do the experiment with individual particles, individual buckyballs one by one: Through which slit does an individual buckyball molecule pass? Would it not be natural to assume that every particle has to pass through one slit or the other? Quantum physics tells us that this is not a meaningful question. We cannot assign a well-defined position to the particle unless we actually perform an experiment that allows us to find out where it is. So, before we do the measurement, the position of the buckyball—and therefore the slit it passes through—is a concept devoid of any meaning.
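
For readers who want to see where the maxima and minima come from, here is a minimal numerical sketch, not the actual buckyball experiment; the wavelength, slit separation, and screen distance below are illustrative values I have assumed. When no which-slit information exists, the two path amplitudes are added before squaring and fringes appear; when the paths are distinguished, the probabilities are added instead and the pattern washes out.

```python
import numpy as np

# Far-field two-slit sketch for a particle of de Broglie wavelength `wavelength`,
# slit separation `d`, screen distance `L`. All values are illustrative assumptions.
wavelength = 5e-12   # meters (roughly the scale for a fast C-60 molecule; assumed)
d = 100e-9           # slit separation in meters (assumed)
L = 1.0              # distance to the detection screen in meters (assumed)

x = np.linspace(-2e-3, 2e-3, 1001)            # detector positions on the screen
phase = 2 * np.pi * d * x / (wavelength * L)  # path-length phase difference

# Amplitudes for "through slit 1" and "through slit 2"
a1 = np.ones_like(x) / np.sqrt(2)
a2 = np.exp(1j * phase) / np.sqrt(2)

p_no_which_path = np.abs(a1 + a2) ** 2                # amplitudes add: fringes
p_which_path = np.abs(a1) ** 2 + np.abs(a2) ** 2      # probabilities add: no fringes

print(p_no_which_path.max(), p_no_which_path.min())   # ~2.0 and ~0.0: maxima and minima
print(p_which_path.max(), p_which_path.min())         # both 1.0: flat distribution
```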

Suppose we now measure the position of the particle. Then we get an answer and know where it is: near one slit or near the other. In that case, position is certainly an element of reality, and we can clearly say that quantum physics describes this reality. What is interesting is that once we have precise knowledge of one feature, namely the position, another kind of knowledge, namely the one encoded in the interference pattern, is no longer well defined.

Where could consciousness come in here? Quantum mechanics tells us that the particle, before any observation, is in a superposition of passing through one slit and passing through the other. If we now have two detectors, one behind each slit, then one or the other detector will register the particle. But quantum mechanics tells us that the measurement apparatus becomes entangled with the position observable of the particle, and thus itself does not have well-defined classical features, at least in principle. This, following the Hungarian-American Nobel Prize winner Eugene Wigner, is a chain that can be followed until an observer registers the result. If we adopted that reasoning, it would be consciousness that makes reality happen.

But you don't have to go so far. It is enough to assume that quantum mechanics just describes probabilities of possible measurement results. Then making an observation turns potentiality into actuality and, in our case, the position of the particle becomes a quantity one can talk reasonably about. But, whether it has a well-defined position or not, the buckyball very well exists. It is real in the double-slit experiment, even when it is impossible to assign its position a well-defined value.

Eric R. Weinstein
Mathematician and Economist; Managing Director of Thiel Capital

If one views science as an economist would, it stands to reason that the scientific theory that should be retired first is the one that offers the greatest opportunity for arbitrage in the marketplace of ideas. Thus it is not sufficient to look for ideas that are merely wrong; we should instead look for troubled scientific ideas that block progress by inspiring zeal, devotion, and what biologists politely term 'interference competition' all out of proportion to their history of achievement. Here it is hard to find a better candidate for an intellectual bubble than the one that has formed around the quest for a consistent theory of everything physical, reinterpreted as if it were synonymous with 'quantum gravity'. If nature were trying to send a polite message that there is other preliminary work to be done before we quantize gravity, it is hard to see how she could send a clearer one than dashing the Nobel dreams of two successive generations of Bohr's brilliant descendants.

To recall, modern physics rests on a stool with three classical geometric legs first fashioned individually by Einstein, Maxwell, and Dirac. The last two of those legs can be together retrofitted to a quantum theory of force and matter known as the 'standard model', while the first stubbornly resists any such attempt at an upgrade, rendering the semi-quantum stool unstable and useless. It is from this that the children of Bohr have derived the need to convert the children of Einstein to the quantum religion at all costs so that the stool can balance.

But, to be fair to those who insist that Einstein must be now made to bow to Bohr, the most strident of those enthusiasts have offered a fair challenge. Quantum exceptionalists claim, despite an unparalleled history of non-success, that string theory (now rebranded as M-theory for matrix, magic or membrane) remains literally 'the only game in town' because fundamental physics has gotten so hard that no one can think of a credible alternate unification program.  If we are to dispel this as a canard, we must make a good faith effort to answer the challenge by providing interesting alternatives, lest we be left with nothing at all.

My reason for believing that there is a better route to the truth is that we have, out of what seems to be misplaced love for our beloved Einstein, been too reverential to the exact form of general relativity. For example, if before retrofitting we look closely at the curvature and geometry of the legs, we can see something striking: they are subtly incompatible at a classical geometric level, before any notion of a quantum is introduced. Einstein's leg seems the sparest and sturdiest, as it clearly shows the attention to function found in the school of 'intrinsic geometry' founded by the German Bernhard Riemann. The Maxwell and Dirac legs are somewhat more festive and ornamented, as they explore the freedom of form which is the raison d'être for a more whimsical school of 'auxiliary geometry' pioneered by the Alsatian Charles Ehresmann. This leads one naturally to a very different question: what if the quantum incompatibility of the existing theories is really a red herring with respect to unification, and the real sticking point is a geometric conflict between the mathematicians Ehresmann and Riemann rather than an incompatibility between the physicists Einstein and Bohr? Even worse, it could be that none of the foundations is ready to be quantized. What if all three theories are subtly incomplete at a geometric level, and the quantum will follow once, and only once, all three are retired and replaced with a unified geometry?

If such an answer exists, it cannot be expected to be a generic geometric theory as all three of the existing theories are each, in some sense, the simplest possible in their respective domains. Such a unified approach might instead involve a new kind of mathematical toolkit combining elements of the two major geometric schools, which would only be relevant to physics if the observed world can be shown to be of a very particular subtype. Happily, with the discoveries of neutrino mass, non-trivial dark energy, and dark matter, the world we see looks increasingly to be of the special class that could accommodate such a hybrid theory.

One could go on in this way, but it is not the only interesting line of thinking. While, ultimately, there may be a single unified theory to summit, there are few such intellectual peaks that can only be climbed from one face. We thus need to return physics to its natural state of individualism so that independent researchers need not fear large research communities who, in the quest for mindshare and resources, would crowd out isolated rivals pursuing genuinely interesting inchoate ideas that head in new directions.  Unfortunately it is difficult to responsibly encourage theorists without independent wealth to develop truly speculative theories in a community which has come to apply artificially strict standards to new programs and voices while letting M-theory stand, year after year, for mulligan and mañana.

Established string theorists may, with a twinkle in the eye, shout, 'predictions!', 'falsifiability!' or 'peer review!' at younger competitors in jest. Yet potentially rival 'infant industry' research programs, as the saying goes, do not die in jest but in earnest. Given the history of scientific exceptionalism surrounding quantum gravity research, it is neither desirable nor necessary to retire M-theory explicitly, as it contains many fascinating ideas. Instead, one need only insist that the training wheels that were once customarily circulated to new entrants to reinvigorate the community, be transferred to emerging candidates from those who have now monopolized them for decades at a time. We can then wait at long last to see if 'the only game in town', when denied the luxury of special pleading by senior boosters, has the support from nature to stay upright.  

Andrian Kreye
Editor-at-Large of the German daily newspaper Sueddeutsche Zeitung, Munich

 

Gordon Moore's 1965 paper, which stated that the number of transistors on integrated circuits would double every two years, has become the most popular scientific analogy of the digital age. Despite being a mere conjecture, it has become the go-to model for framing complex progress in a simple formula. There are good technological reasons to retire Moore's Law. One is the general consensus that Moore's Law will effectively cease to hold once transistors shrink below 5 nanometers, which would mean a peak and a sharp drop-off in ten to twenty years. Another is the potential of quantum computers to push computing into new realms, expected to become reality in three to five years. But Moore's Law should be retired before it reaches its technological limits, because it has pushed the perception of progress in the wrong direction. Allowing its end to become an event would only amplify the errors of reasoning.
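
As a rough illustration of what the doubling rule claims, here is a back-of-the-envelope sketch, not a forecast: the 1971 baseline is the Intel 4004 with about 2,300 transistors, and the strict two-year doubling is an idealization.

```python
# Project a two-year doubling from the Intel 4004 (1971, ~2,300 transistors).
start_year, start_count = 1971, 2300   # illustrative baseline
for year in (1981, 1991, 2001, 2011, 2013):
    count = start_count * 2 ** ((year - start_year) / 2)
    print(year, f"{count:,.0f} transistors")
# By 2013 the rule predicts a few billion transistors per chip, which is roughly
# where the largest commercial processors actually were.
```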

First and foremost, Moore's Law has encouraged us to perceive the development of the digital era as a linear narrative. The simple curve of progression is the digital equivalent of the ancient wheat and chessboard problem (with a potentially infinite chessboard). Like the Persian inventor of the game of chess, who demanded from the king a geometric progression of grains across the board, digital technology seems to develop exponentially. This model ignores the parallel nature of digital progress, which encompasses not only technological and economic development but also scientific, social, and political change: changes that can rarely be quantified.
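
The chessboard story is easy to make concrete; a tiny sketch of the arithmetic:

```python
# One grain on the first square, doubling on each of the 64 squares.
grains = sum(2 ** square for square in range(64))
print(f"{grains:,}")            # 18,446,744,073,709,551,615 grains
print(grains == 2 ** 64 - 1)    # the closed form of the geometric series
```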

Still, the Moore's Law model of perception has already found its way into the narrative of biotechnological history, where change becomes ever more complex. Proof of progress is claimed with the simplistic reasoning of a sharp decline in the cost of sequencing a human genome: from three billion dollars in the year 2000 to the August 2013 cancellation of the Genomics X Prize for the first $1,000 genome, because the challenge had been outpaced by innovation.
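
A hedged bit of arithmetic shows both why the sequencing-cost story gets told in Moore's Law terms and how loosely the analogy fits; the figures below are just the round numbers from the paragraph above, not precise data.

```python
# Compare the fold-drop in sequencing cost with a strict Moore's Law pace.
cost_2000, cost_2013 = 3_000_000_000, 1_000   # dollars (round numbers from the text)
years = 2013 - 2000
fold_drop = cost_2000 / cost_2013             # ~3,000,000x cheaper
moores_pace = 2 ** (years / 2)                # ~90x over the same period at one doubling per two years
print(f"{fold_drop:,.0f}x drop vs. ~{moores_pace:,.0f}x expected from a Moore's Law pace")
```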

For both digital and biotechnological history, the linear narrative has been insufficient. The prowess of the integrated circuit has been the technological spark for a massive development, comparable to the wheel enabling the rise of urban society. Both technologies have been perfected over time, but their technological refinement falls short of illustrating the impact both have had.

It was about 25 years ago that scientists at MIT's Media Lab told me about a paradigmatic change in computer technology. In the future, they said, the number of other computers connected to a computer would be more important than the number of transistors on its integrated circuits. For a writer interested in, but not part of, the forefront of computer technology, that was groundbreaking news in 1988. A few years later, a demo of the Mosaic browser was as formative for me as listening to the first Beatles record and seeing the first man on the moon had been for my parents.

Change since then has been so multilayered, interconnected, and rapid that comprehension has continually lagged behind. Scientific, social, and political changes occur in random patterns. Results have been mixed in equally random patterns. The slowdown of the music industry and the media has not been matched in publishing and film. The failed Twitter revolution in Iran had quite a few things in common with the Arab Spring, but even within the Maghreb the results differed wildly. Social networks have affected societies in sometimes exactly opposite ways: while the fad of social networking has resulted in cultural isolation in Western societies, it has created a counterforce of collective communication against the strategies of the Chinese party apparatus to isolate its citizenry from within.

Most of these phenomena have so far only been observed, not explained. It is mostly in hindsight that a linear narrative is constructed, if not imposed. The inability to monetize many of the greatest digital innovations, such as viral videos or social networks, is just one of many proofs of how difficult it is to get a comprehensive grasp on digital history. Moore's Law and its numerous popular applications to other fields of progress thus create an illusion of predictability in the least predictable of all fields: the course of history.

These errors of reasoning will be amplified if Moore's Law is allowed to come to its natural end. Peak theories have become the lore of cultural pessimism. If Moore's Law is allowed to become a finite principle, digital progress will be perceived as a linear progression toward a peak and an end. Neither will become a reality, because the digital is not a finite resource but an infinite realm of mathematical possibilities reaching out into the analog world of science, society, economics, and politics. Because this progress has ceased to depend on a quantifiable basis and on linear narratives, it will not be brought to a halt, or even slowed down, if one of its strands comes to an end.

In 1972 the wheat and chessboard problem became the mythological basis for the Club of Rome's Malthusian "The Limits to Growth." The end of Moore's Law will create the disillusionment of a finite digital realm, and that disillusionment will become as popular as the illusion of predictability that preceded it. After all, there have been no loonies carrying signs saying "The End Is Not Near."

David Berreby
Journalist; Author, Us and Them

In the late summer of 1914, as European civilization began its extended suicide, dissenters were scarce. On the contrary: From every major capital, we have jerky newsreel footage of happy crowds, cheering in the summer sunshine. More war and oppression followed in subsequent decades, and there was never a shortage of willing executioners and obedient lackeys. By mid-century, the time of Stalin and Mao and their smaller-bore imitators, it seemed urgent to understand why people throughout the 20th century had failed to rise up against masters who sent them to war, or to concentration camps, or to the gulag. So social scientists came up with an answer, which was then consolidated and popularized into something every educated person supposedly knows: People are sheep—cowardly, deplorable sheep.

This idea, that most of us are unwilling to "think for ourselves," instead preferring to stay out of trouble, obey the rules, and conform, was supposedly established by rigorous laboratory experiments. ("We have found," wrote the great psychologist Solomon Asch in 1955, "the tendency to conform in our society is so strong that reasonably intelligent and well-meaning young people are willing to call white black.") Plenty of research papers still refer to one or another aspect of the sheep model as if it were a truth universally acknowledged, and a sturdy rock on which to build new hypotheses about mass behavior. Worse yet, it's rampant in the conversation of educated laypeople—politicians, voters, government officials. Yet it is false. It makes for bad assumptions and bad policies. It is time to set it aside.

Some years ago, the psychologists Bert Hodges and Anne Geyer examined one of Asch's own experiments from the 1950s. He'd asked people to look at a line printed on a white card and then tell which of three similar lines was the same length. Each volunteer was sitting in a small group, all of whose other members were actually collaborators in the study, deliberately picking wrong answers. Asch reported that when the group chose the wrong match, many individuals went along, against the evidence of their own senses.

But the experiment actually involved 12 separate comparisons for each subject, and most did not agree with the majority, most of the time. In fact, on average, each person agreed three times with the majority, and insisted on his own view nine other times. To make those results all about the evils of conformity is to say, as Hodges and Geyer note, that "an individual's moral obligation in the situation is to 'call it as he sees it' without consideration of what others say."

To explain their actions, the volunteers didn't indicate that their senses had been warped or that they were terrified of going against consensus. Instead, they said they had chosen to go along that one time. It's not hard to see why a reasonable person would do so.

The "people are sheep" model sets us up to think in terms of obedience or defiance, dumb conformity versus solitary self-assertion (to avoid being a sheep, you must be a lone wolf). It does not recognize that people need to place their trust in others, and win the trust of others, and that this guides their behavior. (Stanley Milgram's famous experiments, where men were willing to give severe shocks to a supposed stranger, are often cited as Exhibit A for the "people are sheep" model. But what these studies really tested was the trust the subjects had in the experimenter.)

Indeed, questions about trust in others—how it is won and kept, who wins it and who doesn't—seem to be essential to understanding how collectives of people operate, and affect their members. What else is at work?

It appears that behavior is also susceptible to the sort of moment-by-moment influences that were once considered irrelevant noise (for example, divinity students in a rush were far less likely to help a stranger than were divinity students who were not late, in an experiment performed by John M. Darley and Dan Batson). And then there is mounting evidence of influences that discomfit psychologists because there doesn't seem to be much psychology in them at all. For example, Neil Johnson of the University of Miami and Michael Spagat of University College London and their colleagues have found that the severity and timing of attacks in many different wars (different actors, different stakes, different cultures, different continents) adhere to a power law. If that's true, then an individual fighter's motivation, ideology, and beliefs make much less difference than we think for the decision to attack next Tuesday.
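
To make "adheres to a power law" concrete, here is a synthetic sketch rather than the Johnson-Spagat data; the exponent of roughly 2.5 is the value often quoted for insurgent conflicts and is assumed here. Severities drawn from a power-law distribution show the characteristic pattern in which each tenfold increase in severity cuts the frequency by roughly the same factor, regardless of who is fighting or why.

```python
import numpy as np

# Illustrative power-law sketch: the probability that an attack has severity
# of at least s falls off as s**(1 - alpha). alpha is an assumed exponent.
alpha = 2.5
rng = np.random.default_rng(0)

# Draw synthetic "attack severities" from a Pareto (power-law) distribution.
severities = 1 + rng.pareto(alpha - 1, size=100_000)

for s in (1, 10, 100):
    share = np.mean(severities >= s)
    print(f"P(severity >= {s:>3}) ~ {share:.4f}  (theory: {s ** (1 - alpha):.4f})")
# Each tenfold increase in severity reduces the frequency by roughly the same
# factor: the signature straight line on a log-log plot.
```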

Or, to take another example, if as Nicholas Christakis' work suggests, your risks of smoking, getting an STD, catching the flu or being obese depend in part on your social network ties, then how much difference does it make what you, as an individual, feel or think?

Perhaps the behavior of people in groups will eventually be explained as a combination of moment-to-moment influences (like waves on the sea) and powerful drivers that work outside of awareness (like deep ocean currents). All the open questions are important and fascinating. But they're only visible after we give up the simplistic notion that we are sheep.

 

jay_rosen's picture
Associate Professor of Journalism, New York University

We should retire the idea that goes by the name "information overload." It is no longer useful.

The Internet scholar Clay Shirky puts it well: "There's no such thing as information overload. There's only filter failure." If your filters are bad there is always too much to attend to, and never enough time. These aren't trends powered by technology. They are conditions of life.

Filters in a digital world work not by removing what is filtered out; they simply don't select for it. The unselected material is still there, ready to be let through by someone else's filter. Intelligent filters, which is what we need, come in three kinds:

  • A smart person who takes in a lot and tells you what you need to know. The ancient term for this is "editor." The front page of the New York Times still works this way.
     
  • An algorithm that sifts through the choices other smart people have made, ranks them, and presents you with the top results. That's how Google works— more or less.
     
  • A machine learning system that over time gets to know your interests and priorities and filters the world for you in a smarter and smarter way. Amazon uses systems like that.

Here's the best definition of information that I know of: information is a measure of uncertainty reduced. It's deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we're having a picnic tomorrow, will it rain?) and something that resolves it (a weather report). But some reports create the uncertainty that is later to be resolved.
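
As a hedged aside, this is also the textbook formalism (Shannon's, not Rosen's notation): the information gained from learning the outcome of an event with prior probability p, and the average uncertainty H of a source, are

    I = -\log_2 p \qquad H = -\sum_i p_i \log_2 p_i

measured in bits. A 50/50 "will it rain?" question, resolved by a reliable forecast, yields exactly one bit.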

Suppose we learn from news reports that the National Security Agency "broke" encryption on the Internet. That's information! It reduces uncertainty about how far the U.S. government was willing to go. (All the way.) But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created. Which is probably what we mean when we say: "well, that raises more questions than it answers."

Filter failure occurs not from too much information but from too much incoming "stuff" that neither reduces existing uncertainty nor raises questions that count for us. The likely answer is to combine the three types of filtering: smart people who do it for us, smart crowds and their choices, smart systems that learn by interacting with us as individuals. It's at this point that someone usually shouts out: what about serendipity? It's a fair point. We need filters that listen to our demands, but also let through what we have no way to demand because we don't know about it yet. Filters fail when they know us too well and when they don't know us well enough.
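
A toy sketch of what combining the three filter types might look like in code (the names, weights, and data below are invented for illustration; this is not any real product's algorithm):

    # Toy sketch of combining the three filter types, plus a dash of
    # serendipity. Names, weights, and data are invented for illustration.
    import random

    def filter_items(items, my_interests, serendipity=0.1):
        """Score items by editorial choice, crowd ranking, and personal fit."""
        scored = []
        for item in items:
            editor = 1.0 if item["picked_by_editor"] else 0.0   # the "editor" filter
            crowd = item["crowd_votes"] / 100.0                  # the "smart crowd" filter
            personal = len(item["topics"] & my_interests) / len(item["topics"])  # the learning filter
            score = 0.3 * editor + 0.3 * crowd + 0.4 * personal
            if random.random() < serendipity:  # occasionally admit what we had no way to demand
                score += 1.0
            scored.append((score, item["title"]))
        return sorted(scored, reverse=True)

    items = [
        {"title": "NSA and encryption", "picked_by_editor": True, "crowd_votes": 80, "topics": {"security", "politics"}},
        {"title": "Picnic weather", "picked_by_editor": False, "crowd_votes": 5, "topics": {"weather"}},
    ]
    print(filter_items(items, my_interests={"security"}))

The serendipity term is the point of the last paragraph: a filter that only knows your declared interests will never let through what you had no way to demand.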

benjamin_k_bergen's picture
Associate Professor, Cognitive Science, University of California, San Diego; Author, What the F: What Swearing Reveals About Our Language, Our Brains, and Ourselves

The world's languages differ to the point of inscrutability. Knowing the English word "duck" doesn't help you guess the French "canard" or Japanese "ahiru." But there are commonalities hidden beneath the superficial differences. For instance, human languages tend to have parts of speech (like nouns and verbs). They tend to have ways to embed propositions in other ones. ("John knows that Mary thinks that Paul embeds propositions in other ones.") And so on. But why?

An influential and appealing explanation is known as Universal Grammar: core commonalities across languages exist because they are part of our genetic endowment. On this view, humans are born with an innate predisposition to develop languages with very specific properties. Infants expect to learn a language that has nouns and verbs, that has sentences with embedded propositions, and so on.

This could explain not only why languages are similar but also what it is to be uniquely human and indeed how children acquire their native language. It may also seem intuitively plausible, especially to people who speak several languages: If English (and Spanish… and French!) have nouns and verbs, why wouldn't every language? To date, Universal Grammar remains one of the most visible products of the field of Linguistics—the one minimally counterintuitive bit that former students often retain from an introductory Linguistics class.

But evidence has not been kind to Universal Grammar. Over the years, field linguists (they're like field biologists with really good microphones) have reported that languages are much more diverse than originally thought. Not all languages have nouns and verbs. Nor do all languages let you embed propositions in others. And so it has gone for basically every proposed universal linguistic feature. The empirical foundation has crumbled out from under Universal Grammar. We thought that there might be universals that all languages share and we sought to explain them on the basis of innate biases. But as the purportedly universal features have revealed themselves to be nothing of the sort, the need to explain them in categorical terms has evaporated. As a result, what can plausibly make up the content of Universal Grammar has become progressively more and more modest over time. At present, there's evidence that nothing but perhaps the most general computational principles are part of our innate language-specific human endowment.

So it's time to retire Universal Grammar. It had a good run, but there's nothing much it can bring us now in terms of what we want to know about human language. It can't reveal much about how language develops in children—how they learn to articulate sounds, to infer the meanings of words, to put together words into sentences, to infer emotions and mental states from what people say, and so on. And the same is true for questions about how humans have evolved or how we differ from other animals. There are ways in which humans are unique in the animal kingdom and a science of language ought to be trying to understand these. But again Universal Grammar, gutted by evidence as it has been, will not help much.

Of course, it remains important and interesting to ask what commonalities, superficial and substantial, tie together the world's languages. There may be hints there about how human language evolved and how it develops. But to ignore its diversity is to set aside the most informative dimension of language. 

alan_guth's picture
Cosmologist; Victor F. Weisskopf Professor of Physics, MIT; Inaugural Recipient, Fundamental Physics Prize; Author, The Inflationary Universe

The roots of this issue go back at least to 1865, when Rudolf Clausius coined the term "entropy" and stated that the entropy of the universe tends to a maximum. This idea is now known as the second law of thermodynamics, which is most often described by saying that the entropy of an isolated system always increases or stays constant, but never decreases. Isolated systems tend to evolve toward the state of maximum entropy, the state of thermodynamic equilibrium. Even though entropy will play a crucial role in this discussion, it will suffice to use a fairly crude definition: entropy is a measure of the "disorder" of the physical system. In terms of the underlying quantum description, entropy is a measure of the number of quantum states that correspond to a given description in terms of macroscopic variables, such as temperature, volume, and density.

The classic example is a gas in a closed box. If we start with all the gas molecules in a corner of the box, we can imagine watching what happens next. The gas molecules will fill the box, increasing the entropy to the maximum. But it never goes the other way: if the gas molecules fill the box, we will never see them spontaneously collect into one corner.
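
The crude definition above has a standard quantitative form, Boltzmann's (a textbook statement, not specific to this essay):

    S = k_B \ln \Omega

where \Omega counts the microscopic states compatible with the macroscopic description. Confining N molecules to half the box (let alone one corner) cuts \Omega by a factor of 2^N and so lowers the entropy by N k_B \ln 2; for any macroscopic N, the spontaneous return of the gas to one corner is not strictly impossible, just absurdly improbable.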

This behavior seems very natural, but it is hard to reconcile with our understanding of the underlying laws of physics. The gas makes a huge distinction between the past and the future, always evolving toward larger entropy in the future. This one-way behavior of matter in bulk is called the "arrow of time." Nonetheless, the microscopic laws that describe collisions of molecules are time-symmetric, making no distinction between past and future.

Any movie of a collision could be played backwards, and it would also show a valid picture of a collision. (To account for some very rare events discovered by particle physicists, the movie is only guaranteed to be valid if it is also reflected in a mirror and has every particle relabeled as the corresponding antiparticle. But these complications do not change the key issue.)

There is, therefore, an important problem, now over a century old: to understand how the arrow of time could possibly arise from time-symmetric laws of evolution.

The arrow-of-time mystery has driven physicists to seek possible causes within the laws of physics that we observe, but to no avail. The laws make no distinction between the past and the future. Physicists have understood, however, that a low entropy state is always likely to evolve into a higher entropy state, simply because there are many more states of higher entropy. Thus, the entropy today is higher than the entropy yesterday, because yesterday the universe was in a low entropy state. And it was in a low entropy state yesterday, because the day before it was in an even lower entropy state. The traditional understanding follows this pattern back to the origin of the universe, attributing the arrow of time to some not well-understood property of cosmic initial conditions, which created the universe in a special low entropy state. As Brian Greene wrote in The Fabric of the Cosmos:

"The ultimate source of order, of low entropy, must be the big bang itself. ... The egg splatters rather than unsplatters because it is carrying forward the drive toward higher entropy that was initiated by the extraordinarily low entropy state with which the universe began."

There is now the possibility of a new solution to the age-old problem of the arrow of time, based on an elaboration of a 2004 proposal by Sean Carroll and Jennifer Chen. This work, by Sean Carroll, Chien-Yao Tseng, and me, is still in the realm of speculation, and has not yet been vetted by the scientific community.

But it seems to provide a very attractive alternative to the standard picture. The standard picture holds that the initial conditions for the universe must have produced a special, low entropy state, because it is needed to explain the arrow of time. (No such assumption is applied to the final state, so the arrow of time is introduced through a time-asymmetric condition.) We argue, to the contrary, that the arrow of time can be explained without assuming a special initial state, so there is no longer any motivation for the hypothesis that the universe began in a state of extraordinarily low entropy. The most attractive feature is that there is no longer a need to introduce any assumptions that violate the time symmetry of the known laws of physics.

The basic idea is simple. We don't really know if the maximum possible entropy for the universe is finite or infinite, so let's assume that it is infinite. Then, no matter what entropy the universe started with, the entropy would have been low compared to its maximum. That is all that is needed to explain why the entropy has been rising ever since!

The metaphor of the gas in a box is replaced by a gas with no box. In the context of what physicists call a "toy model," meant to illustrate a basic principle without trying to be otherwise realistic, we can imagine choosing, in a random and time-symmetric way, an initial state for a gas composed of some finite number of noninteracting particles. It is important here that any well-defined state will have a finite value for the entropy, and a finite value for the maximum distance of any particle from the origin of our coordinate system. If such a system is followed into the future, the particles might move inward or outward for some finite time, but ultimately the inward-moving particles will pass the central region and will start moving outward. All particles will eventually be moving outward, and the gas will continue indefinitely to expand into the infinite space, with the entropy rising without limit. An arrow of time—the steady growth of entropy with time—has been generated, without introducing any time-asymmetric assumptions.
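
A minimal numerical sketch of this kind of toy model (my own one-dimensional illustration, not the actual Carroll-Tseng-Guth calculation): start free, noninteracting particles from a random, bounded, statistically time-symmetric initial state and follow them in both time directions; a crude measure of spread grows toward the past and the future alike.

    # Toy "gas with no box": free particles, random bounded initial
    # conditions, evolved toward both the future and the past. The
    # spatial spread (a crude stand-in for entropy) grows in both time
    # directions, so an arrow of time appears on each side of t = 0
    # without any time-asymmetric assumption.
    import math
    import random

    random.seed(0)
    N = 1000
    positions = [random.uniform(-1.0, 1.0) for _ in range(N)]   # finite initial region
    velocities = [random.uniform(-1.0, 1.0) for _ in range(N)]  # statistically time-symmetric

    def rms_spread(t):
        """Root-mean-square distance from the origin at time t (t may be negative)."""
        return math.sqrt(sum((x + v * t) ** 2 for x, v in zip(positions, velocities)) / N)

    for t in (-100, -10, -1, 0, 1, 10, 100):
        print(f"t = {t:5d}   spread = {rms_spread(t):8.2f}")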

An interesting feature of this picture is that the universe need not have a beginning, but could be continued from where we started in both directions of time. Since the laws of evolution and the initial state are time-symmetric, the past will be statistically equivalent to the future. Observers in the deep past will see the arrow of time in the opposite direction from ours, but their experience will be no different from ours.

todd_c_sacktor's picture
Distinguished Professor of Physiology, Pharmacology, and Neurology, State University of New York Downstate Medical Center

For over a century psychological theory held that once memories are consolidated from a short-term into long-term form, they remain stable and unchanging. Whether certain long-term memories are very slowly forgotten or are always present but cannot be retrieved was a matter of debate.

For the last 50 years, research on the neurobiological basis of memory seemed to support the psychological theory. Short-term memory was found to be mediated by biochemical changes at synapses, modifying their strength. Long-term memory was strongly correlated with long-term changes in the number of synapses, either increases or decreases. This intuitively made sense. Biochemical changes were rapid and could be quickly reversed, just like short-term memories. On the other hand, synapses, although small, were anatomical structures, visible under the microscope, and thus were thought to be stable for weeks, perhaps for years. Short-term memories could easily be prevented from consolidating into long-term form by dozens of inhibitors of different signaling molecules. In contrast, there was no known agent that erased a long-term memory.

Two recent lines of evidence have rendered this dominant theory of long-term memory ready for retirement. First is the discovery of reconsolidation. When memories are recalled, they undergo a brief period in which they are once again susceptible to disruption by many of the same biochemical inhibitors that affect the initial conversion of short- into long-term memory. This means that long-term memories are not immutable, but can be converted into short-term memory, and then reconverted back into long-term memory. If this reconversion doesn't happen, the specific long-term memory is effectively disrupted.

The second is the discovery of a few agents that do indeed erase long-term memories. These include inhibitors of the persistently active enzyme PKMzeta and of a protein translation factor with prion-like properties of perpetuation. Conversely, increasing the activity of these molecules enhances old memories. The persistent changes in synapse number that so strongly correlate with long-term memory may therefore be downstream of persistent biochemical changes. That memory-erasing agents are so few suggests that there may be a relatively simple mechanism for long-term memory storage involving not hundreds of molecules as in short-term memory, but only a handful, perhaps working together.

Memory reconsolidation allows specific long-term memories to be manipulated. Memory erasure is extraordinarily potent and likely disrupts many, if not all long-term memories at the same time. When these two fields are combined, specific long-term memories will be erased or strengthened in ways never conceivable in prior theories.

robert_sapolsky's picture
Neuroscientist, Stanford University; Author, Behave

The year 2013 has just finished and, as is the case at this time of year, media pundits suggest a variety of words and terms that should be banned; some of the most common ones have included, "YOLO," "bromance," "selfie," "mancave," and, of course, please God make it so, "twerking." In these cases, it's not because the terms are wrong, but just because they've become ubiquitous and irritating.

Similarly, some things in the science world beg to be retired. That's rarely the case simply because a term has been ubiquitous and irritating. "Genomic revolution" might be one of those. Another might be, "For 99% of hominid history…," when discussing what humans do in a less artificial setting than our modern world. Personally, I hope this phrase won't be retired, as I use it ubiquitously and irritatingly, with no plans to stop.

However, various science concepts should be retired because they are just plain wrong. An obvious example, more pseudo-science than science, is that evolution is "just" a theory. But what I am focusing on is a phrase that is right in the narrow sense, but carries very wrong connotations. This is the idea of "a gene-environment interaction."

The notion of the effects of a particular gene and of a particular environment interacting was a critical counter to the millennia-old dichotomy of nature versus nurture. Its utility in that realm most often took the form of, "It may not be all genetic—don't forget that there may be a gene-environment interaction," rather than, "It may not be all environmental—don't forget that there may be a gene-environment interaction."

The concept was especially useful when expressed quantitatively, in the face of behavior geneticists' attempts to attribute percentages of variability in a trait to environment versus to genes. It also was the basis of a useful rule-of-thumb phrase for non-scientists: "but only if." You can often say that Gene A causes Effect X, although sometimes it is more correct to say that Gene A causes Effect X, but only if it is in Environment Z. In that case, you have something called a gene-environment interaction.

What's wrong with any of that? It's an incalculably large improvement over "nature or nurture?", especially when a supposed answer to that question has gotten into the hands of policy makers or ideologues.

My problem with the concept is with the particularist use of "a" gene-environment interaction, the notion that there can be one. This is because, at the most benign, this implies that there can be cases where there aren't gene-environment interactions. Worse, that those cases are in the majority. Worst, the notion that lurking out there is something akin to a Platonic ideal as to every gene's actions—that any given gene has an idealized effect, that it consistently "does" that, and that circumstances where that does not occur are rare and represent either pathological situations or inconsequential specialty acts. Thus, a particular gene may have a Platonically "normal" effect on intelligence unless, of course, the individual was protein malnourished as a fetus, had untreated phenylketonuria, or was raised as a wild child by meerkats.

The problem with "a" gene-environment interaction is that there is no gene that does something. It only has a particular effect in a particular environment, and to say that a gene has a consistent effect in every environment is really only to say that it has a consistent effect in all the environments in which it has been studied to date. This has become ever more clear in studies of the genetics of behavior, as there has been increasing appreciation of environmental regulation of epigenetics, transcription factors, splicing factors, and so on. And this is most dramatically pertinent to humans, given the extraordinary range of environments—both natural and culturally constructed—in which we live.

The problem with "a gene-environment interaction" is the same as asking what height has to do with the area of a rectangle, and being told that in this particular case, there is a height/length interaction.

andrei_linde's picture
Theoretical Physicist, Stanford; Father of Eternal Chaotic Inflation; Inaugural Recipient, Fundamental Physics Prize

For most of the 20th century, scientific thought was dominated by the idea of the uniformity of the universe and the uniqueness of the laws of physics. Indeed, cosmological observations indicated that the universe on the largest possible scales is almost exactly uniform, with an accuracy better than 1 part in 10,000.

The situation is similar with respect to the uniqueness of the laws of physics. We knew, for example, that the electron mass is the same everywhere in the observable part of the universe, so the obvious assumption was that it must take the same value everywhere—that it was just a constant of nature. For a long time, one of the great goals of physics was to find a single theory—a Theory of Everything— that would unify all fundamental interactions and provide an unambiguous explanation for all known parameters of particle physics.

Some thirty years ago, a possible explanation arose for the uniformity of the universe. The main idea was that our part of the world emerged as a result of an exponentially rapid stretching of space called cosmic inflation. As all "wrinkles" and non-uniformities of space stretched out and disappeared, the universe became incredibly smooth. Add some quantum fluctuations, stretch them, and the uniformity became just a little bit less perfect, and galaxies emerged.

At first, inflationary theory looked like an exotic product of vivid imagination. But thanks to the enthusiastic work of thousands of scientists, many predictions of this theory have been confirmed by observations. And if the theory is correct, we finally have a scientific explanation of why the world is so uniform.

But inflation does not predict that this uniformity must extend beyond the observable part of the universe. To give an analogy: Suppose the universe is a surface of a big soccer ball consisting of black and white hexagons. If we inflate it, the size of each white or black part becomes exponentially large. If inflation is powerful enough, those who live in a black part of the universe will not ever see the white part. They will believe that the whole universe is black, and they will try to find a scientific explanation why it cannot have any other color. Those who live in a white universe will never see the black parts and therefore they may think that the whole world must be white. But both black and white parts may coexist in an inflationary universe without contradicting observations.

In the example given above, we were talking about black and white. But in physics, the number of different states of matter (the number of "colors") can be exponentially large. The best candidate for a Theory of Everything is string theory. It can be successfully formulated in spacetime with ten dimensions (nine dimensions of space and one of time). But we live in a universe with three dimensions of space. Where are the other six? The answer is that they are compactified—squeezed into something so small that we cannot move in these directions, which is why we perceive the world as if it were three-dimensional.

From the early days of string theory, physicists knew that there are exponentially many different ways to compactify the extra six dimensions, but we did not know what could prevent the compactified dimensions from blowing up. This problem was solved about 10 years ago, and the solution validated the earlier expectations of the exponentially large number of possibilities. Some estimates of the number of different options are as large as 10^500. And each of these options describes a part of the universe with a different vacuum energy and different types of matter.

In the context of the inflationary theory, this means that our world may consist of an incredibly large number of exponentially large "universes" with 10^500 different types of matter inside them.

A pessimist would argue that since we do not see other parts of the universe, we cannot prove that this picture is correct. An optimist, on the other hand, may counter that we can never disprove this picture either, because its main assumption is that other "universes" are far away from us. And since we know that the best of the theories developed so far allow about 10^500 different universes, anybody who argues that the universe must have the same properties everywhere would have to prove that only one of these 10^500 universes is possible.

And then there is something else: There are many strange coincidences in our world. The mass of the electron is 2000 times smaller than the mass of the proton. Why? The only known reason is that if it were to change by even a factor of a few, life as we know it would be impossible. The masses of the proton and neutron almost coincide. Why? If one of these masses were to change just a little, life as we know it would be impossible. The energy of empty space in our part of the universe is not zero, but a tiny number, more than 100 orders of magnitude below the naive theoretical expectations. Why? The only known explanation is that we would be unable to live in a world with a much larger vacuum energy.

The correlation between our properties and the properties of the world is called the anthropic principle. But if the universe came in only one copy, this correlation would remain unexplained, and we would need to speculate about a divine cause making the universe custom-built for humans. However, with a multiverse consisting of many different parts with different properties, the correlation between our properties and the properties of the part of the world where we can live makes perfect sense.

Can we return to the old picture of a single universe? Possibly. But in order to do so, we must (1) invent a better cosmological theory, (2) invent a better theory of fundamental interactions, and (3) propose an alternative explanation for the miraculous coincidences we just discussed. 

nina_jablonski's picture
Biological Anthropologist and Paleobiologist; Evan Pugh University Professor of Anthropology at Pennsylvania State University

Race has always been a vague and slippery concept. In the mid-eighteenth century, European naturalists such as Linnaeus, Comte de Buffon, and Johann Friedrich Blumenbach described geographic groupings of humans who differed in appearance. The philosophers David Hume and Immanuel Kant both were fascinated by human physical diversity. In their opinions, extremes of heat, cold, or sunlight extinguished human potential. Writing in 1748, Hume contended that "there was never a civilized nation of any complexion other than white."

Kant felt similarly. He was preoccupied with questions of human diversity throughout his career, and wrote at length on the subject in a series of essays beginning in 1775. Kant was the first to name and define the geographic groupings of humans as races (in German, Rassen). Kant's races were characterized by physical distinctions of skin color, hair form, cranial shape, and other anatomical features and by their capacity for morality, self-improvement, and civilization. Kant's four races were arranged hierarchically, with only the European race, in his estimation, being capable of self-improvement.

Why did the scientific racism of Hume and Kant prevail in the face of the logical and thoughtful opposition of von Herder and others? During his lifetime, Kant was recognized as a great philosopher, and his status rose as copies of his major philosophical works were distributed and read widely in the nineteenth century. Some of Kant's supporters agreed with his racist views, some apologized for them, and—most commonly—many just ignored them. The other reason that racist views triumphed over anti-racism in the late eighteenth and nineteenth centuries was that racism was, economically speaking, good for the transatlantic slave trade, which had become the overriding engine of European economic growth. The slave trade was bolstered by ideologies that diminished or denied the humanity of non-Europeans, especially Africans. Such views were augmented by newer biblical interpretations popular at the time that depicted Africans as destined for servitude. Skin color, as the most noticeable racial characteristic, became associated with a nebulous assemblage of opinions and hearsay about the inherent natures of the different races. Skin color stood for morality, character, and the capacity for civilization; it had become a meme.

The nineteenth and early twentieth centuries saw the rise of "race science." The biological reality of races was confirmed by new types of scientific evidence amassed by new types of scientists, notably anthropologists and geneticists. This era witnessed the birth of eugenics and its offspring, the concept of racial purity. The rise of Social Darwinism further reinforced the notion that the superiority of the white race was part of the natural order. The fact that all people are products of complex genetic mixtures resulting from migration and intermingling over thousands of years was not admitted by the racial scientists, nor by the scores of eugenicists who campaigned on both sides of the Atlantic for the improvement of racial quality.

The mid-twentieth century witnessed the continued proliferation of scientific treatises on race. By the 1960s, however, two factors contributed to the demise of the concept of biological races. One of these was the increased rate of study of the physical and genetic diversity of human groups all over the world by large numbers of scientists. The second factor was the increasing influence of the civil rights movement in the United States and elsewhere. Before long, influential scientists denounced studies of race and races because races themselves could not be scientifically defined. Where scientists looked for sharp boundaries between groups, none could be found.

Despite major shifts in scientific thinking, the sibling concepts of human races and a color-based hierarchy of races remained firmly established in mainstream culture through the mid-twentieth century. The resulting racial stereotypes were potent and persistent, especially in the United States and South Africa, where subjugation and exploitation of dark-skinned labor had been the cornerstone of economic growth.

After its "scientific" demise, race remained as a name and concept, but gradually came to stand for something quite different. Today many people identify with the concept of being a member of one or another racial group, regardless of what science may say about the nature of race. The shared experiences of race create powerful social bonds. For many people, including many scholars, races cease to be biological categories and have become social groupings. The concept of race became a more confusing mélange as social categories of class and ethnicity. So race isn't "just" a social construction, it is the real product of shared experience, and people choose to identify themselves by race.

Clinicians continue to map observed patterns of health and disease onto old racial concepts such as "White", "Black" or "African American", "Asian," etc. Even after it has been shown that many diseases (adult-onset diabetes, alcoholism, high blood pressure, to name a few) show apparent racial patterns because people share similar environmental conditions, groupings by race are maintained. The use of racial self-categorization in epidemiological studies is defended and even encouraged. In most cases, race in medical studies is confounded with health disparities due to class, ethnic differences in social practices, and attitudes, all of which become meaningless when sufficient variables are taken into account.

Race's latest makeover arises from genomics and mostly within biomedical contexts. The sanctified position of medical science in the popular consciousness gives the race concept renewed esteem. Racial realists marshal genomic evidence to support the hard biological reality of racial difference, while racial skeptics see no racial patterns. What is clear is that people are seeing what they want to see. They are constructing studies to provide the outcomes they expect. In 2012, Catherine Bliss argued cogently that race today is best considered a belief system that "produces consistencies in perception and practice at a particular social and historical moment".

Race has a hold on history, but it no longer has a place in science. The sheer instability and potential for misinterpretation render race useless as a scientific concept. Inventing new vocabularies of human diversity and inequity won't be easy, but is necessary. 

seirian_sumner's picture
Reader, Behavioral Ecology, University College London

Genes and their interaction networks determine the phenotype of an organism—what it looks like and how it behaves. One of the biggest problems in modern evolutionary biology is understanding the relationship between genes and phenotypes. The prevailing theory is that all animals are built from essentially the same set of regulatory genes—a genetic toolkit—and that phenotypic variation within and between species arises simply by using shared genes differently. Scientists are now generating a vast amount of genomic data from an eclectic mix of organisms. These data are telling us to put to bed the idea that all life is underlain by a common toolkit of conserved genes. Instead, we need to turn our attention to the role of genomic novelty in the evolution of phenotypic diversity and innovation.

The idea of a conserved genetic toolkit of life comes from the 'evo-devo' (evolutionary and developmental biology) world. In short, it proposes that evolution uses the same ingredients in all organisms, but tinkers with the recipe. By expressing genes at different times in development and/or in different parts of the body, the same genes can be used in different combinations to allow evolvability, generating phenotypic diversity and innovation. Animals look different not because the molecular machinery is different, but because different parts of the machinery are activated to differing degrees, at different times, in different places and in different combinations. The number of combinations is huge, and so this is a plausible explanation for the development of complex and diverse phenotypes from even a small number of genes. For example, humans have a mere 21,000 genes in our genome, and yet we are arguably one of the most complex products of evolution.
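
A back-of-envelope sense of how huge, using my own arithmetic rather than a figure from the essay: if each of 21,000 genes could merely be switched on or off independently (a gross oversimplification), that alone would allow

    2^{21000} \approx 10^{6322}

distinct expression states, before timing, tissue, dosage, or interactions between regulatory layers are even considered.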

A textbook example is the super-controller of development, the Hox genes—a set of genes that tell bodies where to develop heads, tails, arms, and legs in every major animal group. Hox genes are in mice, worms, humans… they are inherited from a common ancestor. Other examples of toolkit genes are those that control eye development, or hair/plumage colouration. Toolkit genes are old, present in all animals and they do pretty much the same thing in all animals. There is no denying that conserved genomic material forms an important part of the molecular building blocks of life.

However. We can now sequence de novo the genomes and transcriptomes (the genes expressed at any one time/place) of any organism. We have sequence data for algae, pythons, green sea turtles, puffer fish, pied flycatchers, platypus, koala, bonobos, giant pandas, bottle-nosed dolphins, leafcutter ants, monarch butterfly, pacific oysters, leeches…the list is growing exponentially. And each new genome brings with it a suite of unique genes. Twenty percent of genes in nematodes are unique. Each lineage of ants contains about 4000 novel genes, but only 64 of these are conserved across all seven ant genomes sequenced so far.

Many of these unique ('novel') genes are proving important in the evolution of biological innovations. Morphological differences between closely related freshwater polyps (Hydra) can be attributed to a small group of novel genes. Novel genes are emerging as important in the worker castes of bees, wasps and ants. Newt-specific genes may play a role in their amazing tissue regenerative powers. In humans, novel genes are associated with devastating diseases, such as leukaemia and Alzheimer's.

Life is genomically complex, and this complexity plays a crucial role in evolving diversity of life. It's easy to see how an innovation can be improved through natural selection, e.g. once the first eye evolved, it was subject to strong selection to increase the fitness (survival) of its owner. It is more challenging to explain how novelty first originates, especially from a conserved genomic toolkit. Darwinian evolution explains how organisms and their traits evolve, but not how they originate. How did the first eye arise? Or more specifically how did that master regulatory gene for eye development in all animals first originate? The capacity to evolve novel phenotypic traits (be they morphological, physiological or behavioural) is crucial for survival and adaptation, especially in changing (or new) environments.

A conserved genome can generate novelties through rearrangements (within or between genes), changes in regulation or genome duplication events. For example, vertebrate genomes have been duplicated in their entirety twice in their evolutionary history; salmonid fish have undergone a further two whole-genome duplications. Duplications reduce selection on the function of one of the gene copies, allowing that copy to mutate and evolve into a new gene whilst the other copy maintains business as usual. Conserved genomes can also harbour a lot of latent genetic variation—fodder for evolving novelty—which is not exposed to selection. Non-lethal variation can lie dormant in the genome by not being expressed, or by being expressed at times when it doesn't have a lethal effect on the phenotype. The molecular machinery that regulates expression of genes and proteins depends on minimal information, rules and tools: transcription factors recognise sequences of only a few base-pairs as binding sites, which gives them enormous potential for plasticity in where they bind. Pleiotropic changes across many conserved genes, using different combinations of transcriptional, translational and/or post-translational activity, are a good source of genomic novelty. For instance, the evolution of beak shapes in Darwin's finches is controlled by pleiotropic changes brought about by changes in the signalling patterns of a conserved gene that controls bone development. The combinatorial power of even a limited genetic toolkit gives it enormous potential to evolve novelty from old machinery.

But the presence of unique genes in all evolutionary lineages studied to date now tells us that de novo gene birth, rather than a reordering of old ingredients, is important in phenotypic evolution. The over-abundance of non-coding DNA in genomes is less puzzling if it serves as a melting pot that genomes can exploit to create new genes and gene functions, and ultimately phenotypic innovation. The current thinking is that genomes are producing new genes all the time, but that only a few become functional.

Our story started simply: all life is a product of gentle evolutionary tinkering of a shared molecular toolkit. The unimaginable time has arrived when we can unpack the molecular building blocks of any creature. And these data are shaking things up. A surprise? Not really. Perhaps the most important lesson from this is that no theory is completely right, and that good theories are those that continue evolving and embracing innovation. Let's evolve theories (keeping the bits that are proven correct), not retire them.

scott_sampson's picture
President & CEO, Science World British Columbia; Dinosaur paleontologist and science communicator; Author, How To Raise A Wild Child

One of the most prevalent ideas in science is that nature consists of objects. Of course the very practice of science is grounded in objectivity. We objectify nature so that we can measure it, test it, and study it, with the ultimate goal of unraveling its secrets. Doing so typically requires reducing natural phenomena to their component parts. Most zoologists, for example, think of animals in terms of genes, physiologies, species, and the like. 

Yet this pervasive, centuries-old trend toward reductionism and objectification tends to prevent us from seeing nature as subjects, though there's no science to support such myopia. On the contrary, to give just one example, perhaps the deepest lesson cascading from Darwin's contributions is that all life on Earth, including us, arose from a single family tree. To date, however, this intellectual insight has yet to penetrate our hearts. Even those of us who fully embrace the notion of organic evolution tend to regard nature as resources to be exploited rather than relatives deserving of our respect.

What if science were to conceive of nature as both object and subject? Would we need to abandon our cherished objectivity? Of course not. Despite their chosen field of study, the vast bulk of social scientists don’t struggle to form emotional bonds with family and friends. More so than at any point in the history of science, it's time to extend this subject-object duality to at least the nonhuman life forms with which we share this world. 

Why? Because much of our unsustainable behavior can be traced to a broken relationship with nature, a perspective that treats the nonhuman world as a realm of mindless, unfeeling objects. Sustainability will almost certainly depend upon developing mutually enhancing relations between humans and nonhuman nature. Yet why would we foster such sustainable relations unless we care about the natural world?

An alternative worldview is called for, one that reanimates the living world. This mindshift, in turn, will require no less than the subjectification of nature. Of course, the notion of nature-as-subjects is not new. Indigenous peoples around the globe tend to view themselves as embedded in animate landscapes replete with relatives; we have much to learn from this ancient wisdom. 

To subjectify is to interiorize, such that the exterior world interpenetrates our interior world. Whereas the relationships we share with subjects often tap into our hearts, objects are dead to our emotions. Finding ourselves in relationship, the boundaries of self can become permeable and blurred. Many of us have experienced such transcendent feelings during interactions with nonhuman nature, from pets to forests.

But how might we undertake such a grand subjectification of nature? After all, worldviews become deeply ingrained, so much so that they become like the air we breathe—essential but ignored.

Part of the answer is likely to be found in the practice of science itself. The reductionist Western tradition of science has concentrated overwhelmingly on the nature of substance, asking "What is it made of?" Yet a parallel approach—also operating for centuries, though often in the background—has investigated the science of pattern and form. Generally tied to Leonardo da Vinci, the latter method has sought to explore relationships, which can be notoriously difficult to quantify and must instead be mapped. The science of patterns has seen a recent resurgence, with abundant attention directed toward such fields as ecology and complex adaptive systems. Yet we've only scratched the surface, and much more integrative work remains to be done that could help us understand relationships.

Another part of the answer, I think, is to be found in education. We need to raise our children so that they see the world with new eyes. At the risk of heresy, it seems that science education in particular could be re-invigorated with subjectification in mind. Certainly the practice of science—the actual doing of scientific research—must be done as objectively as possible. But the communication of science could be done using both objective and subjective lenses.

Imagine if the bulk of science education took place outdoors, in direct, multisensory contact with the natural world. Imagine if students were encouraged to develop a meaningful sense of place through an understanding of the deep history and ecological workings of that place. And imagine if mentors and educators emphasized not only the identification and functioning of parts (say, of flowers or insects), but the notion of organisms as sensate beings in intimate relationship with each other (and us). What if students were asked to spend more time learning about how a particular plant or animal experiences its world? 

In this way, science (and biology in particular) could help bridge the chasm between humans and nature. Ultimately, science education, in concert with other areas of learning, could go a long way toward achieving the "Great Work" described by cultural historian Thomas Berry—transforming the perceived world "from a collection of objects to a communion of subjects."

kai_krause's picture
Software Pioneer; Philosopher; Author, A Realtime Literature Explorer

It was born out of a mistranslation and has been misused ever since....but let us do a little thought experiment first:

Let's say you are a scientist and you noticed a phenomenon you would like to tell the world about. "The brain...", you say, "... can listen to a conversation and make sense of the frequencies, decode them into symbols and meaning... but when it is confronted with two such conversations simultaneously, it cannot deal with both threads in parallel. At best it can try to switch back and forth quickly, trying to keep up with the information."

So much for your theory—you formulate your findings and share them with colleagues; they get argued and debated, just as they should be.

But now something odd happens: while all your discussions were in English, and you wrote it in English, and despite the fact that a large percentage of the leading scientists and Nobel Laureates are English speaking...somehow the prevailing language for publication is....Mongolian! There is a group in Ulaan Baatar, merrily taking your findings with great interest and your whole theory shows up all over the place...in Mongolian.

But here is the catch: you wrote that it is not possible to listen to two conversations at the same time, and thus their meaning to you is, well, undefined, until you decide to follow one of them properly.

However, as it turns out, Mongolian has no such word—"undefined"! Instead it got translated with an entirely different term: "uncertain", and the general interpretation of your theory has suddenly mutated from "one or the other of two conversations will be unknown to you" to the rather distinctly altered interpretation "you can listen to one, but the other will be.....entirely meaningless".

Saying that I am "unable to understand" both of them properly is one thing, but... my inability to perceive it does not render each of the conversations suddenly "meaningless", does it?

All of this is of course just an analogy. But it is pretty close to exactly what did happen—just the other way round: the scientist was Werner Heisenberg.

His observation was not about listening to simultaneous conversations but measuring the exact position and momentum of a physical system, which he described as impossible to determine at the same time. And although he discussed this with numerous colleagues in German (Einstein, Pauli, Schrödinger, Bohr, Lorentz, Born, Planck just to name some of the Solvay Conference group of 1927) the big step came in the dissemination in English, and there is the Mongolian in our analogy!
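
For reference, the modern textbook form of the relation (a later formalization usually credited to Kennard and Robertson, not Heisenberg's original wording) bounds the product of the standard deviations of position and momentum:

    \Delta x \, \Delta p \ge \frac{\hbar}{2}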

Heisenberg's idea had quickly been dubbed Unschärferelation, which translates literally to "unsharpness relationship," but as there is really no such term in English ('blurred', 'fuzzy', 'vague' or 'ambiguous' have all been tried), the translation ended up as "the Uncertainty Principle"—when he had not used either term at all (some point to Eddington). And what followed is really quite close to the analogy as well: rather than stating that either position or momentum is "as yet undetermined", it became common usage and popular wisdom to jump to the conclusion that there is complete "uncertainty" at the fundamental level of physics, of nature, even of free will and the universe as such. Laplace's Demon was killed as collateral damage (obviously his days were numbered anyway....)

Einstein remained skeptical his entire life: to him the "Unbestimmtheit" (indeterminacy) was on the part of the observer, who fails to grasp certain aspects of nature at this stage in our knowledge—rather than proof that nature itself is fundamentally undetermined and uncertain. In particular, implications like the "Fernwirkung" (action at a distance) appeared to him "spukhaft" (spooky, eerie). But even in the days of quantum computing, qubits and tunnelling effects, I still would not want to bet against Albert ;) His intuitive grasp of nature survived so many critics, and many a wave of counter-proof ended up counter-counter-proved.

And while there is plenty of reason to defend Heisenberg's findings, it is sad to see such a profound meme in popular science rest merely on a loose attitude towards translation (and there are many other such cases...). I would love to encourage writers in French or Swedish or Arabic to point out the idiosyncrasies and unique value of those languages—not for semantic pedantry but for the benefit of alternative approaches.

German is not just good for Fahrvergnügen, Weltanschauung & Zeitgeist, there are many wonderful subtle shades of meaning. It is like a different tool to apply to thinking—and that's a good thing: a great hammer is a terrible saw.

oliver_scott_curry's picture
Senior Researcher, Director, The Oxford Morals Project, Institute of Cognitive and Evolutionary Anthropology, University of Oxford

How do birds fly? How do they stay up in the air? Suppose a textbook told you that the answer was 'levitation', and proceeded to catalogue the different types of levitation (Stationary, Mobile), its laws ("What goes up must come down", "Lighter things levitate longer") and constraints (Quadrupedalism). You'd rapidly realise that flying was not well understood, and also that the belief in levitation was obscuring the need for, and holding back, a proper scientific account of aerodynamics.

Unfortunately, a similar situation applies to the question 'How do animals learn?'. Textbooks will tell you that the answer is 'association', and will proceed to catalogue the various types (Classical, Operant), its laws (Rescorla-Wagner), and constraints (Autoshaping, Differential Conditionality, Blocking). You will be told that association is the ability of organisms to make connections between any given stimulus and any given outcome or response—the sound of a bell with the arrival of food, or the left-branch of a maze with the administration of pain—merely through (repeated) exposure to their pairing. And you will be told that, because association treats all stimuli equally, it can in principle enable an organism to learn anything.
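
For the record, the Rescorla-Wagner "law" named above is usually written as an update rule for the associative strength V of a stimulus X on each trial (standard textbook form, quoted here only so the reader can see what is being catalogued):

    \Delta V_X = \alpha_X \beta (\lambda - V_{\text{total}})

where \alpha_X and \beta are salience and learning-rate parameters and \lambda is the maximum strength the outcome will support.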

The problem is that, as with levitation, no-one has ever set out a mechanism that could perform such a feat. And no-one ever will, because such a mechanism is not possible in theory, and hence not possible in practice. At any given time, an organism is confronted by an infinite number of potential stimuli, and subsequently, an infinite number of potential outcomes. A day in the life of a rat, for example, might include waking up, blinking, walking east, twitching its nose, being trampled on, eating a berry, hearing a rumbling noise, sniffing a mate, experiencing a temperature of 5°C, being chased, watching the sun go down, defecating, feeling nauseous, finding its way home, having a fight, going to sleep, and so on. How does the rat discern that, of all the possible combinations of stimuli and outcomes, it was the berry alone that made him feel sick? Just as answers presuppose a question, data presuppose a theory. In the absence of a prior theory that specifies what to look for, and which relationships to test, there is no way of sorting through this chaos to identify useful patterns. And yet what is the defining feature of associative learning? It is the absence of a prior theory. So, like levitation, association is hollow—a misleading redescription of the very phenomenon that is in need of explanation.

Critics have, for centuries, pointed out this problem with associationism (sometimes called the problem of induction, or the frame problem). And, in recent decades, there have been countless empirical demonstrations that animals—ants learning their way home, birds learning song, or rats learning to avoid food—do not learn in the way that associationism suggests. And yet, associationism (whether as empiricism, behaviourism, conditioning, connectionism, or plasticity) refuses to die, and keeps rising again, albeit encrusted by ever more ad hoc exceptions, anomalies and constraints. Its proponents refuse to abandon it, perhaps because they believe there is no alternative.

But there is. In communication theory, information is the reduction of prior uncertainty. Organisms are 'uncertain' because they are composed of conditional adaptations that adopt different states under different conditions. These mechanisms can be described in terms of the decision rules that they embody—'if A, then B', or 'If you detect light, then move towards it'. Uncertainty about which state to adopt (to B or not to B), is resolved by attending to the specified conditions (A). The reduction of uncertainty by one half constitutes one 'bit' of information; and so a single decision rule is a one-bit processor. By favouring adaptations with more branching decision rules, natural selection can design more sophisticated organisms that engage in more sophisticated information processing, asking more questions of the world before coming to a decision. This framework explains how animals acquire information and learn from their environments. For the rat, a rule is, "If you ate something and subsequently felt sick, then avoid that food in future"; it has no such rule fingering sunsets, nose twitching, or fighting, which is why it never makes those connections. Similarly, this account explains why organisms facing different ecological problems, composed of different clusters of such mechanisms, are able to learn different things.
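
A minimal sketch of the decision-rules-as-bit-processors idea (my own toy code, not anything from the essay): each if-A-then-B rule resolves one binary condition, which is one bit; the rat's food rule checks two conditions, so it processes two bits before acting.

    # Toy illustration of decision rules as bit processors.
    import math

    def bits(n_equally_likely_options):
        """Information needed to settle a choice among n equally likely options."""
        return math.log2(n_equally_likely_options)

    def rat_food_rule(ate_item, felt_sick_afterwards):
        """'If you ate something and subsequently felt sick, avoid that food in future.'"""
        if ate_item and felt_sick_afterwards:   # two binary conditions checked: 2 bits
            return "avoid this food"
        return "no change"

    print(bits(2))                     # one binary condition resolved -> 1.0 bit
    print(rat_food_rule(True, True))   # -> "avoid this food"
    print(rat_food_rule(True, False))  # -> "no change"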

So much for rats. What about humans, who obviously can learn things that natural selection never prepared them for? Surely we must be able to levitate? Not at all; the same logic of uncertainty and information processing must apply. If humans are able to learn novel things, then this must be because they are able to generate novel uncertainty—to invent, imagine, create new theories, hypotheses and predictions, and hence to ask new questions of the world. How? The most likely answer is that humans have a range of innate ideas about the world (to do with colour, shape, forces, objects, motion, agents and minds), which they are able to recombine (almost at random) in an endless variety of ways (as when we dream), and then test these novel conjectures against reality (by means of the senses). And successful conjectures are themselves recombined, and revised, to build ever more elaborate theoretical systems. So, far from constraining learning, our biology makes it possible: providing the raw materials, guiding the process to a greater or lesser degree, liberating us to think altogether unprecedented thoughts, and fostering the growth of knowledge. This is how we learn from experience—and all without a whiff of association.

Look, nobody disputes that birds fly; the only question is how. Similarly, nobody disputes that humans and other animals learn; the only question is how. Working out the alternative account of learning will involve identifying which innate ideas humans possess, what rules are used to combine them, and how they are revised. But for this to happen, we must first accept not only that association is not the answer, but that association is not even an answer. Only then will the science of learning stop levitating, and take off for real.

dimitar_d_sasselov's picture
Professor of Astronomy, Harvard University; Director, Harvard Origins of Life Initiative; Author, The Life of Super-Earths

The habitable zone defines the range of distances from a star at which a planet similar to Earth would have surface temperatures that allow liquid water. In the Solar System this zone stretches from between the orbits of Venus and Earth out to Mars. Its boundaries are approximate when applied to different planetary systems, and sometimes the concept is used more broadly, e.g., for galaxies. The habitable zone has a venerable history in the search for alien life beyond Earth, and most recently it contributed to the spectacular success of NASA's Kepler exoplanet-hunting mission. However, in the post-Kepler era it is a scientific concept that is ready for retirement.
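
For a rough sense of how such a zone is computed in practice, here is a minimal sketch (an illustration, not the essay's own calculation): the inner and outer edges are scaled from assumed Solar System values by the square root of the star's luminosity, because the flux a planet receives falls off with the square of its distance. The 0.95 and 1.37 AU boundaries are one conventional conservative choice, used here only as placeholders.

```python
import math

# Illustrative conservative habitable-zone edges for the Sun, in AU (assumed values).
INNER_AU_SUN = 0.95
OUTER_AU_SUN = 1.37

def habitable_zone(luminosity_solar: float) -> tuple[float, float]:
    """Scale the Sun's habitable-zone edges by sqrt(L/L_sun),
    since the stellar flux a planet receives falls off as 1/d**2."""
    scale = math.sqrt(luminosity_solar)
    return INNER_AU_SUN * scale, OUTER_AU_SUN * scale

# A star half as luminous as the Sun has its liquid-water belt pulled inward:
inner, outer = habitable_zone(0.5)
print(f"{inner:.2f} - {outer:.2f} AU")   # roughly 0.67 - 0.97 AU
```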

The simple definition of the habitable zone is appealing for statistical estimates of habitable planets, because it depends on a few parameters that are easy to measure. It is also easy to grasp: not too hot, not too cold—the Goldilocks zone. Simple and robust statistics are crucial to estimating the abundance and distribution of small planets like Earth in the Galaxy, and the Kepler space mission excels at that. But if our goal now is to search for life, then we need to know where to head: habitable exoplanets. And here the "habitable" in the habitable zone is a misnomer, or at the least a gross overstatement. Even in our Solar System we contemplate alien life beyond its confines, e.g. on moons of Jupiter and Saturn. Today we need a concept of what makes an environment habitable—capable of letting life emerge and of sustaining it over geological timescales, be it on a planet or on a moon. Finding out what makes a planet living, and how to recognize a living planet with our telescopes, is the big question.

The past year has been historic in the search for alien life. Thanks to Kepler and other exoplanet surveys we now know that Earth-like planets are so common that many close analogs to our home planet should reside in our neighbourhood of the Galaxy. This makes them amenable to remote sensing exploration with existing technology and telescopes under construction. The search for life is set to begin, but we need to understand better what to look for.

In retiring the habitable zone concept, it makes sense to revert to its original mid-20th-century name, the "liquid water belt": a region very important to the rich geochemistry of rocky planets. Living planets among them will feel like home.

julia_clarke's picture
John A. Wilson Professor and HHMI Professor, Jackson School of Geosciences, University of Texas at Austin


I would like to put to rest the notion that evolution as a process should conform to words and concepts that we find familiar, comfortable, and perhaps even universal. More immediately, I'd like to stop having to always explain whether or not each new feathered dinosaur specimen we discover was a bird.

In many ways it's an understandable question. Most scientists have accepted for years that living birds are one lineage of dinosaurs. The idea that dinosaurs live on in birds even crept into popular consciousness through Jurassic Park. So perhaps it's not surprising that when scientists discover a new feathered dinosaur, people—including scientists and science journalists—often want to know, "Did it fly?" Consider the first-discovered feathered dinosaur, the so-called "Urvogel" Archaeopteryx. Debate continues in the scientific literature: Was it a bird?

As a paleontologist working on the evolution of living birds, I find myself having this exchange over and over again. For example, I describe a small feathery species newly uncovered from the fossil record. After detailing its known features, I might note that it may have had some form of aerial locomotion. There is inevitably a pause. Then the question, "Ok, but was it a bird?" Impatient with scientists and their endless modifiers and complex phrasing, the asker wants to get this story clear, "Ok, but did it fly?" Tell it to me straight.

The questions sound innocent enough, and they are intuitive to ask. But although they seem like scientific questions, they mostly aren't. They concern primarily what we want to count as part of the class of entities "birds" and what as part of the class "flighted". We might think we have these straight in the present day, but try looking back through a dirty lens at life more than 100 million years ago.

Paleontologists must use the shape and form of bone as well as, in rare cases, feathery impressions to track the ecologies of the long dead. To do this, they must use data on form-function relationships in the living. This task in and of itself is difficult and ongoing. But what is more difficult is to translate combinations of structures that are not present in any living species into an understanding of how those animals moved. For example, flighted living birds have a joint between the scapula and coracoid where the upper arm bone, the humerus, meets the pectoral girdle. Yet we have species in the fossil record with feathered forelimbs of impressive span (shall we call them wings?) but lacking this kind of joint articulation. Subtle features of the feathers and their relative proportions may differ from any living bird. Is this creature a bird?

How did it move? Did it have a form of sustained flapping flight but unlike that in any extant species? If we could time travel back to a Cretaceous forest, would we call this movement flight? What if a species beat its wings only briefly to move from branch to branch? What if it utilized these "wing" beats to climb trees or jump? What if it was only volant as a juvenile, but as a large bodied adult it maintained feathery forelimbs for signaling to a mate but flew no longer?

All of these hypotheses have been put forward, and all may have been true for different denizens of Jurassic and Cretaceous environments. We can debate whether these creatures flew and whether or not they were birds, by our contemporary definitions, but in doing so we risk losing sight of the bigger scientific questions. All too quickly, we can fall down a rabbit hole of defining (and defending) terms, when we'd do better to seek a more precise understanding of the emergence, the relative evolutionary first appearances, of the many features comprising the flight apparatus in living birds.

Feathers make their first appearance in taxa that could not have been volant as adults. Precursors to feathers, simple filaments, are found in tyrannosaurids and an array of other relatives of living birds. While hundreds of characteristics of bone and feather have revealed these deep genealogical relationships within dinosaurs, we still seem to be searching to pin "bird" and "flight" to single characters.

I am not the first to remark that the debate over what to call a bird and what to term flight is not useful and actually at odds with evolutionary thinking. But, I have been surprised by the persistence of this debate even among specialists. For example, exchanges over how to apply the formal taxonomic name "Aves" are ongoing. While events unfolding in deep time via evolutionary processes are arguably the least likely candidates for dichotomous or categorical thinking, this mode of thought runs rampant and engenders false controversies that obscure interesting questions. It is tracking the more complex pattern of asynchronous change in many novel traits that will inform generalities about how the evolution of shape and form may work.

Arguably, the hypotheses we investigate should be arrayed relative to one another in relationships other than opposition. Too often, however, the categories we are comfortable talking about artificially organize them into this apparent relationship. Indeed, across science I would argue we have many "urvogels": lingering evidence of similarly strong collective cognitive investment in the existence of classes of entities we consider intuitive and natural. These can hold us back.

bruce_parker's picture
Visiting Professor, Stevens Institute of Technology; Author, The Power of the Sea: Tsunamis, Storm Surges, and Our Quest to Predict Disasters

Could one really have the nerve to suggest "retiring" the idea of entropy? (I do not actually believe that we abandon old ideas before new ones are developed. Old ideas disappear, or are modified, only when better ones come along; they are never simply retired.) So, no, we should not retire entropy, but perhaps we should accord it a little less importance, and recognize the paradox it creates.

Entropy, the measure of the degree of disorder in a system, has held a lofty place in physics, being part of a Law no less (not just a theory). The Second Law of Thermodynamics says that in any closed system entropy always increases with time. Unless work is done to prevent it, a closed system will eventually reach maximum entropy and a state of thermal equilibrium. Max Planck believed that entropy (along with energy) was the most important property of physical systems. Sir Arthur Eddington is quoted as saying that "The law that entropy increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of Nature." But as a young physics student in college I must admit I never understood their excitement (and I was not the only student to be unimpressed). The Second Law seemed of minor importance compared to the First Law of Thermodynamics, the conservation of energy—energy could be transformed into different forms, but it was always conserved. The First Law had beautiful partial differential equations (as did all the conservation equations of physics) whose solutions accurately described and predicted so much of the world, and literally changed all our lives. The Second Law was not a conservation equation and had no beautiful partial differential equations. It wasn't even an equality. Have the idea of entropy and the Second Law had any major effect on science and engineering, or changed the world?
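
To make that contrast concrete (my gloss, not the author's), the First Law can be written as a strict equality for a closed system, while the familiar Clausius statement of the Second Law is only an inequality, saturated just for idealized reversible processes:

```latex
% First Law: an equality (energy bookkeeping for a closed system)
dU = \delta Q - \delta W
% Second Law (Clausius form): an inequality, with equality only for reversible processes
dS \ge \frac{\delta Q}{T}
```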

The Second Law was a statistical "law", initially a generalization of conclusions reached when looking at the motion of molecules/particles. As students it was easy for us to understand the classic example of how hot (fast moving) molecules on one side of a closed box mixed with cold (slowly moving) molecules on the other side, and why they could not separate again once they were together and all at the same temperature. We understood why it was irreversible. And we understood the concept of the "arrow of time". Sure, the mathematics of the First Law (and the other conservation equations of physics) worked in both directions of time, but with initial conditions and boundary conditions, we always knew which way things moved. It didn't seem to require another Law. In fact, the Second Law (as applied now to all situations) seemed to be an assumption rather than a Law. Especially when it is applied to an entire Universe, which we understand so little about.

When looking at the Universe (whatever that entails, which may be more than our presently visible/observable universe) the First Law tells us that all the energy in the Universe will be conserved, although it may be converted into various forms. But the Second Law says that at some time in the future no more energy transformations can take place. The Universe will reach some stage of maximum entropy and thermal equilibrium. The Second Law essentially says that the Universe must have had a beginning and an end. That is very difficult to accept. The universe must be timeless, for if there was a beginning, what was there before this beginning? Something cannot come out of nothing (and by "nothing" I mean the lack of anything, even things we do not know about yet).

Of course, the present Big Bang theory has a beginning (of sorts), and our present form of the universe has apparently expanded out from a singularity, but we do not know what came before that, and oscillating models of the universe have been proposed in which the Universe is timeless. With such models, if entropy is very high at the end of our universe and was very low at the beginning of our universe, what process could essentially reset entropy to a low value? In an oscillating universe, should entropy perhaps really be conserved somehow? Could there be some type of energy conversion that does not require work (in our classical sense)? Could the Universe actually be the one and only possible perpetual motion machine (forbidden by the Second Law)? If existence is endless in time, it would seem so.

The whole idea of entropy has always felt wrong or misplaced in other ways also. We talk about the Universe going from order to disorder. Yet this supposed order is merely because all the matter of the universe was compressed together in some tiny volume/singularity and when it expanded out there was less order because the particles were more spread out. And yet order is being created all the time.

The greatest result of our expanding and evolving universe is the great and ever increasing complexity that has resulted, first, from gravity condensing matter, then from supernova explosions creating heavier elements, then from chemical evolution, and then, most dramatically, from biological evolution (driven by natural selection), culminating in self-reproducing life and eventually the incredible complexity of our brains. Complexity is synonymous with low entropy. The expanding universe has countless small (relative to the size of the universe) pockets of extremely low entropy surrounded by vast areas of higher entropy (much of which resulted from the creation of these low entropy areas). Are the higher orders of complexity (and thus lower orders of entropy) taken into account when trying to balance the entropy of the Universe? There are in fact many scientific papers written today in cosmology trying to sum up the Universe's total entropy, with formulas that could end up being far too simple to account for all the (as yet unknown) physics going on in our strange Universe.

We cannot retire entropy, but should we maybe rethink it?

richard_saul_wurman's picture
Founder, TED Conference; EG Conference; TEDMED Conferences; Architect, Cartographer; Author, Information Architects

A wonderful diagram is Nicolaus Copernicus's depiction of heliocentrism, his approximate theory of the Sun-centered solar system, published in 1543.

It would never be published in any academic circle today, because it is not correct. The orbits are not circular but elliptical; they are not all on the same plane; and the diagram is completely out of proportion, representing neither the distances between the planets nor their distances from the sun. It's a diagram of approximation. It's a diagram that gives permission to others, just as Tycho Brahe released his documentation and his measurements so that Kepler could arrive at a closer approximation of our planetary system, incorporating more accurate geometries.

What I'm suggesting be retired are the first three words that I wrote above; what I suggest be embraced is more academic leeway for theories of approximation that give others permission to see and discover new patterns.

andrew_lih's picture
Associate Professor of Journalism, American University; Author, The Wikipedia Revolution

I do not propose we do away with the study of change and the area under the curve, or bury Isaac Newton and Gottfried Leibniz. However, for decades now, passing calculus has been the requirement for entry into modern fields of study that combine the rigorous demands of science, technology, engineering and math. Universities still carry on the tradition that undergraduates must take anywhere from one to three semesters of calculus as a pure math discipline. This typically means learning complex math concepts uncontextualized, removed from practical applications, with heavy emphasis on proofs and theorems.

Because of this, calculus has become a hazing ritual for those interested in going into one of the most needed fields today: computer science. Calculus has very little relevance to the day-to-day work of many coders, hackers and entrepreneurs, yet it poses a significant recruiting barrier to filling sorely needed ranks in today's digital workforce. And for what reason?

This is particularly urgent in the area of programming and coding. Undergraduate computer science programs are starting to bounce back from the dearth of enrollment that plagued them in the early Internet era, but they could do a lot more to fill the ranks. Some of this is due to a lingering view of computer science as an extension of mathematics, from an era when computers were primarily crafted as the ultimate calculators.

Calculus remains in many curricula as more of a rite of passage than for any particular need. It is one way of problem solving and it is a bellwether for the ability to absorb more complex ideas and concepts. But holding it up as a universal obstacle course through which one must pass to program and code is counterproductive, yet the bulk of computer science programs geared towards undergraduate education require it. Leaving in this obtuse math requirement is lazy curricular thinking. It sticks with a model that weeds out people for no good reason related to their ability to program.

This leads us to the question: What makes for good programmers? The ability to deconstruct complex problems into a series of smaller, doable ones. A proficiency for thinking procedurally about systems and structures. The ability to manipulate bits and do amazing things with them.

If calculus is not a good fit for these, what should replace it? Discrete math, combinatorics, computability and graph theory are far more important than calculus. These are all standard, necessary and immensely relevant fields in most modern computer science programs, but they typically come after the calculus-requirement gauntlet.
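
As one small illustration (mine, not Lih's) of the kind of math that does show up in everyday code, here is a graph-reachability routine: pure discrete math and graph theory, with no derivative in sight.

```python
from collections import deque

def reachable(graph: dict[str, list[str]], start: str) -> set[str]:
    """Breadth-first search: which nodes can we reach from `start`?
    Graph theory and combinatorics, not calculus, underlie code like this."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# A toy follower graph, invented for the example.
social_graph = {"ada": ["bob"], "bob": ["cai", "dee"], "cai": [], "dee": ["ada"]}
print(reachable(social_graph, "ada"))   # {'ada', 'bob', 'cai', 'dee'}
```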

People are finding other formal and peer-learning methods to pick up coding outside the higher education environment: meetups, code-a-thons, online courses, video tutorials. Moving past the calculus would bring these folks into the fold earlier and more methodically.

Relaxing the calculus requirement does not mean we turn universities into trade schools. We still want our research scientists in training and our Ph.D. candidates in STEM to know and master calculus, linear algebra and differential equations. But for too long, calculus has served as a choke point for training digital-savvy self-starting innovators. 

Clemson University experimented with moving calculus further down the curriculum, not as a prerequisite, but as a class in sync with the need for it in other STEM classes. Its 2004 longitudinal study showed, "a statistically significant improvement in retention in engineering" when it reconfigured its approach to introducing math in later semesters. We need more of these experiments and more radical curricular thinking to get past the same prerequisite model that has dominated the field for decades. Sadly, the structure and administration of academia makes it hard to do this.

How can so many people be interested in coding and programming, yet not be served by our top institutions of higher learning? We have not evolved with the times: we still treat computer science largely as a STEM discipline instead of as a whole new capability that cuts across every field in academia. The sooner we evolve beyond STEM-oriented thinking, the better.


martin_rees's picture
Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Fellow, Trinity College; Author, From Here to Infinity

There's a widely-held presumption that our insight will deepen indefinitely—that all scientific problems will eventually yield to attack. But I think we may need to abandon this optimism. The human intellect may hit the buffers—even though in most fields of science, there's surely a long way to go before this happens.

There is plainly unfinished business in cosmology. Einstein's theory treats space and time as smooth and continuous. We know, however, that no material can be chopped into arbitrarily small pieces: eventually, you get down to discrete atoms. Likewise, space itself has a grainy and "quantised" structure—but on a scale a trillion trillion times smaller. We lack a unified understanding of the bedrock of the physical world.

Such a theory would bring big bangs and multiverses within the remit of rigorous science. But it wouldn't signal the end of discovery. Indeed, it would be irrelevant to the 99 per cent of scientists who are neither particle physicists nor cosmologists.

Our grasp of diet and child care, for instance, is still so meagre that expert advice changes from year to year. This may seem an incongruous contrast with the confidence with which we can discuss galaxies and sub-atomic particles. But biologists are held up by the problems of complexity—and these are more daunting than those of the very big and the very small.

The sciences are sometimes likened to different levels of a tall building: particle physics on the ground floor, then the rest of physics, then chemistry, and so forth: all the way up to psychology (and the economists in the penthouse). There is a corresponding hierarchy of complexity: atoms, molecules, cells, organisms, and so forth. This metaphor is in some ways helpful. It illustrates how each science is pursued independently of the others. But in one key respect the analogy is poor: in a building, insecure foundations imperil the floors above. In contrast, the 'higher level' sciences dealing with complex systems aren't imperiled by an insecure base the way the upper floors of a building are.

Each science has its own distinct concepts and explanations. Even if we had a hypercomputer that could solve Schrödinger's equation for quadrillions of atoms, its output wouldn't yield the kind of understanding that most scientists seek.

This is true not only of the sciences that deal with really complex things—especially those that are alive—but even when the phenomena are more mundane. For instance, mathematicians trying to understand why taps drip, or why waves break, don't care that water is H2O. They treat the fluid as a continuum. They use 'emergent' concepts like viscosity and turbulence.

Nearly all scientists are "reductionists" insofar as they think that everything, however complicated, obeys the basic equations of physics. But even if such a hypercomputer could solve Schrödinger's equation for the immense aggregate of atoms in (say) breaking waves, migrating birds or tropical forests, an atomic-level explanation wouldn't yield the enlightenment we really seek. The brain is an assemblage of cells, and a painting is an assemblage of chemical pigments. But in both cases, what's interesting is the pattern and structure—the emergent complexity.

We humans haven't changed much since our remote ancestors roamed the African savannah. Our brains evolved to cope with the human-scale environment. So it is surely remarkable that we can make sense of phenomena that confound everyday intuition: in particular, the minuscule atoms we're made of, and the vast cosmos that surrounds us.

Nonetheless—and here I'm sticking my neck out—maybe some aspects of reality are intrinsically beyond us, in that their comprehension would require some post-human intellect—just as Euclidean geometry is beyond non-human primates.

Some may contest this by pointing out that there is no limit to what is computable. But being computable isn't the same as being conceptually graspable. To give a trivial example, anyone who has learnt Cartesian geometry can readily visualize a simple pattern—a line or a circle—when given the equation for it. But nobody given the (simple-seeming) algorithm for drawing the Mandelbrot Set could visualize its amazing intricacies, even though drawing the pattern is only a modest task for a computer.
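
The algorithm really is that simple: iterate z → z² + c from zero and ask whether the orbit escapes. A minimal sketch (rendering the set as crude ASCII, which is of course just one assumed way of plotting it):

```python
def escapes(c: complex, max_iter: int = 50) -> bool:
    """Iterate z -> z*z + c from z=0; c lies outside the Mandelbrot Set if |z| ever exceeds 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

# A crude character plot: '#' marks points that appear to belong to the set.
for row in range(21):
    y = 1.2 - row * 0.12
    line = ""
    for col in range(64):
        x = -2.0 + col * 0.046875
        line += "#" if not escapes(complex(x, y)) else " "
    print(line)
```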

It would be unduly anthropocentric to believe that all of science—and a proper concept of all aspects of reality—is within human mental powers to grasp. Whether the really long-range future lies with organic post-humans or with intelligent machines is a matter for debate—but either way, there will be insights into reality left for them to discover.

alexander_wissner_gross's picture
Scientist; Inventor; Entrepreneur; Investor

Since long before Erwin Schrödinger's seminal 1944 work, "What Is Life?", physicists have aspired to rigorously define the characteristics that distinguish some matter as living and other matter as not. However, the analogous task of identifying the universally distinguishing physical properties of intelligence has remained largely underappreciated. 

Based on recent discoveries, I have come to suspect that the reason for this lack of progress is that we treat intelligence as a static property rather than as a dynamical process, and that it is this conception which is ready for retirement.

In particular, recent results have shown that an extremely rudimentary physical process called causal entropic forcing is able to replicate model versions of signature adaptive cognitive behaviors previously seen only in humans and in certain non-human-animal intelligence tests. These findings collectively suggest that a variety of key characteristics associated with human intelligence, including upright walking, tool use, and social cooperation, should instead be viewed as side effects of a deeper dynamical process that attempts to maximize future freedom of action. Such a freedom-maximizing process can only meaningfully be said to exist over an extended period of time and, as such, is not a static property.
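
The following is only a toy caricature, not Wissner-Gross's actual formulation of causal entropic forcing: an agent on a bounded track that, at each step, chooses the move leaving the largest number of future positions reachable within a short horizon, i.e. one that greedily preserves its future freedom of action.

```python
from itertools import product

def reachable_states(pos: int, horizon: int, lo: int = 0, hi: int = 10) -> set[int]:
    """All positions reachable within `horizon` steps on a bounded 1-D track."""
    states = set()
    for moves in product((-1, 0, 1), repeat=horizon):
        p = pos
        for m in moves:
            p = min(hi, max(lo, p + m))
        states.add(p)
    return states

def freedom_maximizing_move(pos: int, horizon: int = 3) -> int:
    """Toy rule: pick the move that keeps the largest set of future positions
    reachable. The rule is defined over an extended time horizon, not as a
    static property of the agent's current state."""
    return max((-1, 0, 1),
               key=lambda m: len(reachable_states(min(10, max(0, pos + m)), horizon)))

# Near a wall, the freedom-maximizing agent drifts toward the open middle of the track.
print(freedom_maximizing_move(0))   # 1 (move away from the boundary)
print(freedom_maximizing_move(5))   # all moves tie in mid-track; max() returns the first, -1
```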

It's time we retired studying intelligence as a property.

kathryn_clancy's picture
Assistant Professor of Anthropology, University of Illinois, Urbana-Champaign; Writer

Last year, I spearheaded a survey and interview research project on the experiences of scientists at field sites. Over sixty percent of the respondents had been sexually harassed, and twenty percent had been sexually assaulted. Sexual predation was only the beginning of what I and my colleagues uncovered: study respondents reported psychological and physical abuses, like being forced to work late into the day without being told when they could head back to camp, not being allowed to urinate, verbal threats and bullying, and being denied food. The majority of perpetrators are fellow scientists senior to the target of abuse, the target themselves usually a female graduate student. Since we started analyzing these data, I haven’t been able to read a single empirical science paper without wondering on whose backs, via whose exploitation, that research was conducted.

When the payoff is millions of dollars of research money, New York Times coverage, Nobel Prizes or even just tenure, we often seem willing to pay any price for scientific discovery and innovation. This is exactly the idea that needs to be retired—that science should be privileged over scientists.

Putting ideas above people is a particularly idealistic way of viewing the scientific enterprise. This view assumes not only that the field of science is meritocratic but that who a scientist is, or where they come from, plays no role in their level of success. Yet it is well known that class, occupational, and educational attainment vary by race, gender, and many other aspects of human diversity, and that these factors influence who chooses and who stays in science. As unadulterated as we may want to envision science, the scientific enterprise is run by people, and people often run on implicit bias. I know scientists know these things—scientists wrote the papers to which I refer—but I'm not sure we have all internalized the implications. The research on implicit bias and workplace diversity implies that social structure and identity shape interactions between workers, increasing the chances of exploitation, in the form of both overwork and harassment, particularly for those who are junior or underrepresented.

Scientists are not blind to the problems of the ways we culturally conceive of scientific work. There are increasing discussions among scientists of the ever-elusive work/life balance. By and large these conversations center around personal ways we can create a better life for ourselves through management of our time and priorities. To my mind, these conversations are a luxury for those who have already survived the gauntlet of being a trainee scientist. But there are few ways to consider or improve work/life balance when you are one of the grunts on the lab floor or fossil dig.

Overwork and exploitation do not lead to scientific advancement nearly as effectively as humane, equitable and respectful workplaces. For instance, recent social-relations modeling research reveals that when women are integrated rather than peripheral members of their laboratory group, those labs publish more papers. Further, years of research on counterproductive work behaviors demonstrate that when you create strongly enforced policies and independent lines of reporting, work environments improve and workers become more productive. There is no empirical support for the idea that the hassled, overworked, give-it-all-for-the-job mentality in science produces the best work.

The lives of scientists need to be prioritized over scientific discovery in the interest of actually doing better science. I know many of us operate on fear—fear of being scooped, fear of not getting tenure, fear of not having enough funding to do our work, fear even of being exploited ourselves. But we cannot let fear motivate a scheme that crushes potential bright future scientists. The criteria for scholarly excellence should not be based on who survives or evades poor treatment but who has the intellectual chops to make the most meaningful contributions. Thus, trainees need unions and institutional policies to protect them, and senior scientists to enact cultural change. An inclusive, humane workplace is actually the one that will lead to the most rigorous, world-changing scientific discoveries.

kiley_hamlin's picture
Assistant Professor and Canada Research Chair in Developmental Psychology, University of British Columbia

There is a persistent belief in our society that morality is acquired slowly and at considerable effort after birth. That is, it is common to view young children as moral "blank slates," beginning life with no real moral leanings of any kind. On this viewpoint, children first encounter the moral world in person, via their own experiences and observations. Children then actively (or passively, but fewer scholars believe this today) combine such experiences and observations with advances in impulse control, perspective taking, and complex reasoning, allowing them to become more and more "moral" over time.

I think that moral blank slate-ism should be retired. First, though it works well with a picture of infants as "blooming, buzzing confusions" and of toddlers as selfish egoists, developmental psychological research from (at least) the last decade suggests that neither picture is true. For instance, by 3 months of age infants can already process prosocial and antisocial interactions between unknown third parties, preferring those who help, rather than hinder, someone to achieve a goal. Indeed, after viewing such interactions, 3-month-olds show a highly reliable tendency to look at the Helper over the Hinderer, and 4.5-month-olds (who can reach) show the same tendency to reach for Helpers. Most strikingly, infants' preferences do not seem to reflect simply preferring those who make good things happen (what we might call an "outcome bias"): in the first year infants prefer those who harm (rather than help) individuals who have previously hindered others, and they prefer those with helpful intentions even if the outcomes they cause are bad.

Nor are infants selfish egoists: all kinds of prosocial behaviors begin in infancy, including helping, sharing, and informing. Though these behaviors might result from intensive early socialization, research suggests that infants and toddlers are internally, rather than externally, motivated to be prosocial. For instance, infants help and give without being prompted, and toddlers will actually choose to help over doing other (really) fun things. These behaviors may result from different emotional states: toddlers are negatively aroused by seeing others in need, whereas they find helping others (even at a cost to themselves) emotionally rewarding.

The second reason I think moral blank slate-ism should be retired is that, because it holds that morality is born from experience, it leads us to attribute differences in moral outcomes to differences in experience. This leads to the notion that all of us can be led to be appropriately moral, given the right (and none of the wrong) inputs. Moral failings, then, result from flawed inputs.

Obviously experience plays a critical role in moral development: countless studies indicate some causal relationship between experiences relevant to morality (parenting styles, observed violence, abuse, etc.) and moral outcomes. But consider Dylan Klebold and Eric Harris, shooters at Columbine High School in 1999. They were just the first two of what is now a painfully long list of mass murders of children, by children, in North America. After Columbine, people said that Dylan and Eric played too many violent video games, were bullied in school, or even that their parents hadn't bothered to teach them right from wrong. The first two things certainly happened (probably not the third); but the rate of video game playing and bullying in children is extremely high. What about the 99.9999% of children today who do NOT shoot up their schools? What was different about Eric and Dylan?

Eric was a psychopath. Psychopaths are extremely low on empathy and, perhaps as a result, don't mind killing people for fun—the rate of psychopathy among murderers is much higher than in the general population. Psychopathy is a developmental disorder, and is considered one of the least treatable of the mental illnesses. Curiously, it is also one of the latest diagnosed, typically not until adolescence or adulthood. Since we know interventions need to start early to be effective (think of recent gains in autism treatment from earlier diagnosis), it is perhaps unsurprising that a late-diagnosed disorder would not be susceptible to intervention. My worry, in a nutshell, is that moral blank slate-ism's focus on experience makes us reluctant to identify enduring, temperament-based predictors of antisociality in our children, and that by the time we do, it is too late to treat them. It is not that I don't share the reluctance to "pigeonhole" kids—but it is curious that blaming individual differences on varied experience may be preventing us from using experience to level the playing field through intervention.

Other studies show a link between very early measures of empathy and antisocial behavior later in life within the typically-developing population. These measures usually involve someone acting distressed in front of the infant, and determining whether the infant looks at him/her with concern/distress or not. Most infants do, most of the time. A recent study found that non-abused 6-14-month-olds who showed "disregard" for others' distress were significantly more likely to be antisocial as adolescents. This result suggests that, outside of psychopathy per se, warning signs for antisocial behavior may emerge extremely early, before experience could have played much of a role.

Again, experience matters. Several studies have now documented that experience may influence moral outcomes via a "gene-environment interaction." That is, rather than a simple equation in which, say, adverse experiences lead to antisocial children: [child + abuse – ameliorating experiences = violence], the relationship between abuse and antisocial behavior is only observed in children with particular versions of various genes known to regulate certain social hormones. In other words, whether they have been abused or not, children with the "safe" gene alleles are all about equally (un)likely to engage in antisocial behavior. Children with the "at risk" alleles, on the other hand, are more susceptible to the damage of abuse.
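
A schematic sketch of the interaction pattern being described, with purely invented numbers: abuse raises risk only for carriers of the "at risk" allele, so neither the gene nor the environment alone tells the story.

```python
def antisocial_risk(abused: bool, at_risk_allele: bool) -> float:
    """Toy gene-environment interaction (illustrative numbers only):
    abuse raises risk only in carriers of the 'at risk' allele."""
    baseline = 0.05
    if at_risk_allele and abused:
        return baseline + 0.20
    return baseline

for abused in (False, True):
    for allele in ("safe", "at risk"):
        print(f"abused={abused!s:5}  allele={allele:7}  "
              f"risk={antisocial_risk(abused, allele == 'at risk'):.2f}")
```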

To close, I think the common view of infants as moral blank slates has led to a mistaken view of the infant and how moral behavior and cognition work. To the extent that understanding how moral development begins, and understanding all of the causes of individual differences, makes us better equipped to address various moral-developmental paths, I think moral blank slate-ism should be retired.


brian_christian's picture
Author, The Most Human Human; Co-author (with Tom Griffiths), The Alignment Problem

In my view, what's most outmoded within science, most badly in need of retirement, is the way we structure and organize scientific knowledge itself. Academic literature, even as it moves online, is a relic of the era of typesetting, modeled on static, irrevocable, toothpaste-out-of-the-tube publication. Just as the software industry has moved from a "waterfall" process to an "agile" process—from monolithic releases shipped from warehouses of mass-produced disks to over-the-air differential updates—so must academic publishing move from its current read-only model and embrace a process as dynamic, up-to-date, and collaborative as science itself.

It amazes me how poorly the academic and scientific literature is configured to handle even retraction, even at its most clear-cut—to say nothing of subtler species like revision. It is typical, for example, that even when the journal editors and the authors fully retract a paper, the paper continues to be available at the journal's website, amazingly, without any indication that a retraction exists elsewhere, let alone on the same site, penned by the same authors and vetted by the same editor. (Imagine, for instance, if the FDA allowed a drug maker to continue manufacturing a drug known to be harmful, so long as they also manufactured a warning label—but were under no obligation to put the label on the drug.)

A subtler question is how and in what manner ("caveat lector"?) to flag studies that depend on the discredited study—let alone studies that depend on those studies.

Citation is the obvious first answer, though it's not quite enough. In academic journals, all citations attest to the significance of the works they cite, regardless of whether their results are being presumed, strengthened or challenged; even theories used as punching bags, for example, are accorded the respect of being worthy or significant punching bags.

But academic literature makes no distinction between citations merely considered significant and ones additionally considered true. What academic literature needs goes deeper than the view of citations as kudos and shout-outs. It needs what software engineers have used for decades: dependency management.

A dependency graph would tell us, at a click, which of the pillars of scientific theory are truly load-bearing. And it would tell us, at a click, which other ideas are likely to get swept away with the rubble of a particular theory. An academic publisher worth their salt would, for instance, not only be able to flag articles that have been retracted—that this is not currently standard practice is, again, inexcusable—but would be able to flag articles that depend in some meaningful way on the results of retracted work.
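
A sketch of what such dependency management might look like, using a hypothetical toy citation graph (the paper labels and the `depends_on` mapping are invented for illustration):

```python
def tainted_by_retraction(depends_on: dict[str, list[str]], retracted: set[str]) -> set[str]:
    """Return every paper that is retracted or rests, directly or transitively,
    on a retracted result -- the rubble swept away with a discredited pillar."""
    flagged = set(retracted)
    changed = True
    while changed:
        changed = False
        for paper, deps in depends_on.items():
            if paper not in flagged and any(d in flagged for d in deps):
                flagged.add(paper)
                changed = True
    return flagged

# Hypothetical citation graph: B builds on A, C builds on B, D is independent.
graph = {"A": [], "B": ["A"], "C": ["B"], "D": []}
print(tainted_by_retraction(graph, retracted={"A"}))   # {'A', 'B', 'C'}
```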

An academic publisher worth their salt would also accommodate another pillar of modern software development: revision control. Code repositories, like wikis, are living documents, open not only for scrutiny, censure and approbation, but for modification.

In a revision control system like Git (and its wildly successful open-source community on GitHub), users can create "issues" that flag problems and require the author's response, they can create "pull requests" that propose answers and alterations, and they can "fork" a repository if they want to steward their own version of the project and take it in a different direction. (Sometimes forked repositories serve a niche audience; sometimes they wither from neglect or disuse; sometimes they fully steal the audience and userbase from the original; sometimes the two continue to exist in parallel or continue to diverge; and sometimes they are reconciled and reunited downstream.) A Git repository is the best of top-down and bottom-up, of dictatorship and democracy: its leaders set the purpose and vision, have ultimate control and final say—yet any citizen has an equal right to complain, propose reform, start a revolt, or simply pack their bags and found a new nation next door.

The "Accept," "Reject," and "Revise and Resubmit" ternary is anachronistic, a relic of the era of metal type. Even peer review itself, with its anonymity and bureaucracy, may be ripe for reimagining. The behind-closed-doors, anonymous review process might be replaced, for instance, with something closer to a "beta" period. The article need not be held up for months—at least, not from other researchers—while it is considered by a select few. One's critics need not be able to clandestinely delay one's work by months. Authors need not thank "anonymous readers who spotted errors and provided critical feedback" when those readers' corrections are directly incorporated (with attribution) as differential edits. Those readers need not offer their suggestions as an act of obligation or charity, and they need not go unknown.

Some current rumblings of revolution seem promising. Wide circulation among academics of "working papers" challenges the embargo and lag in the peer review process. PLOS ONE insists on top-down quality assurance, but lets importance emerge from the bottom-up. Cornell's arXiv project offers a promising alternative to more traditional journal models, including versioning (and its "endorsement" system has since 2004 suggested a possible alternative to traditional peer reviews). However, its interface by design limits its participatory and collaborative potential.

On that front, a massive international collaboration via the Polymath Project website in 2013 successfully extended the work of Yitang Zhang on the twin primes conjecture (and I understand that the University of Montreal's James Maynard has subsequently gone even further). Amazingly, this groundbreaking collaborative work was done primarily in a comment thread.

The field is crying out for better tools; meanwhile better tools already exist in the adjacent field of software development.

It is time for science to go agile.

The scientific literature, taken as content, is stronger than it's ever been—as, of course, it should be. As a form, the scientific literature has never been more inadequate or inept. What is in most dire need of revision is revision itself.


amanda_gefter's picture
Science writer; Author, Trespassing on Einstein's Lawn

Physics has a time-honored tradition of laughing in the face of our most basic intuitions. Einstein's relativity forced us to retire our notions of absolute space and time, while quantum mechanics forced us to retire our notions of pretty much everything else. Still, one stubborn idea has stood steadfast through it all: the universe.

Sure, our picture of the universe has evolved over the years—its history dynamic, its origin inflating, its expansion accelerating. It has even been downgraded to just one in a multiverse of infinite universes forever divided by event horizons. But still we've clung to the belief that here, as residents in the Milky Way, we all live in a single spacetime, our shared corner of the cosmos—our universe.

In recent years, however, the concept of a single, shared spacetime has sent physics spiraling into paradox. The first sign that something was amiss came from Stephen Hawking's landmark work in the 1970s showing that black holes radiate and evaporate, disappearing from the universe and purportedly taking some quantum information with them. Quantum mechanics, however, is predicated upon the principle that information can never be lost.

Here was the conundrum. Once information falls into a black hole, it can't climb back out without traveling faster than light and violating relativity. Therefore, the only way to save it is to show that it never fell into the black hole in the first place. From the point of view of an accelerated observer who remains outside the black hole, that's not hard to do. Thanks to relativistic effects, from his vantage point, the information stretches and slows as it approaches the black hole, then burns to scrambled ash in the heat of the Hawking radiation before it ever crosses the horizon. It's a different story, however, for the inertial, infalling observer, who plunges into the black hole, passing through the horizon without noticing any weird relativistic effects or Hawking radiation, courtesy of Einstein's equivalence principle. For him, information better fall into the black hole, or relativity is in trouble. In other words, in order to uphold all the laws of physics, one copy of the bit of information has to remain outside the black hole while its clone falls inside. Oh, and one last thing—quantum mechanics forbids cloning.

Leonard Susskind eventually solved the information paradox by insisting that we restrict our description of the world either to the region of spacetime outside the black hole's horizon or to the interior of the black hole. Either one is consistent—it's only when you talk about both that you violate the laws of physics. This "horizon complementarity," as it became known, tells us that the inside and outside of the black hole are not part and parcel of a single universe. They are, in effect, two universes, which cannot be spoken of in the same breath.

Horizon complementarity kept paradox at bay until last year, when the physics community was shaken up by a new conundrum more harrowing still— the so-called firewall paradox. Here, our two observers find themselves with contradictory quantum descriptions of a single bit of information, but now the contradiction occurs while both observers are still outside the horizon, before the inertial observer falls in. That is, it occurs while they're still supposedly in the same universe.

Physicists are beginning to think that the best solution to the firewall paradox may be to adopt "strong complementarity"—that is, to restrict our descriptions not merely to spacetime regions separated by horizons, but to the reference frames of individual observers, wherever they are. As if each observer has his or her own universe.

Ordinary horizon complementarity had already undermined the possibility of a multiverse. If you violate physics merely by describing two regions separated by a horizon, imagine what happens when you describe infinite regions separated by infinite horizons! Now, strong complementarity is undermining the possibility of a single, shared universe. At first glance, you'd think it would create its own kind of multiverse, but it doesn't. Yes, there are multiple observers, and yes, any observer's universe is as good as any other. But if you want to stay on the right side of the laws of physics, you can only talk about one at a time. Which means, really, that only one exists at a time. It's cosmic solipsism.

Sending the universe into early retirement is a pretty radical move, so it had better buy us something substantial in the way of scientific advancement. I think it does. For one, it might shed some light on the disconcerting low-quadrupole coincidence—the fact that the cosmic microwave background radiation shows no temperature fluctuations at scales larger than 60 degrees on the sky, capping the size of space at precisely the size of our observable universe—as if reality abruptly stops at the edge of an observer's reference frame.

More importantly, it could offer us a better conceptual grasp of quantum mechanics. Quantum mechanics defies understanding because it allows things to hover in superpositions of mutually exclusive states, like when a photon goes through this slit and that slit, or when a cat is simultaneously dead and alive. It balks at our Boolean logic, it laughs at the law of the excluded middle. Worse, when we actually observe something, the superposition vanishes and a single reality miraculously unfurls.

In light of the universe's retirement, this all looks slightly less miraculous. After all, superpositions are really superpositions of reference frames. In any single reference frame, an animal's vitals are well defined. Cats are only alive and dead when you try to piece together multiple frames under the false assumption that they're all part of the same universe.

Finally, the universe's retirement might offer some guidance as physicists push forward with the program of quantum gravity. For instance, if each observer has his or her own universe, then each observer has his or her own Hilbert space, his or her own cosmic horizon and his or her own version of holography, in which case what we need from a theory of quantum gravity is a set of consistency conditions that can relate what different observers can operationally measure.

Adjusting our intuitions and adapting to the strange truths uncovered by physics is never easy. But we may just have to come around to the notion that there's my universe, and there's your universe—but there's no such thing as the universe.

gary_marcus's picture
Professor of Psychology, Director NYU Center for Language and Music; Author, Guitar Zero

No, I don't literally mean that we should stop believing in, or collecting, Big Data. But we should stop pretending that Big Data is magic. There are few fields that wouldn't benefit from large, carefully collected data sets. But lots of people, even scientists, put more stock in Big Data than they really should. Sometimes it seems like half the talk about understanding science these days, from physics to neuroscience, is about Big Data, and associated tools like "dimensionality reduction", "neural networks", "machine learning algorithms" and "information visualization".

Big Data is, without a doubt, the idea of the moment. Thirty-nine minutes ago (according to the Big Data that drives Google News), Gordon Moore (for whom Moore's Law is named) "Gave Big to Big Data"; MIT debuted an online course on Big Data (44 minutes ago); and Big Data was voted strategy+business's Strategy of the Year. Forbes had an article about Big Data a few hours before that. There were 163,000 hits for a search for big+data+science.

But science still revolves, most fundamentally, around a search for the laws that describe our universe. And the one thing Big Data isn't particularly good at is, well, identifying laws. Big Data is brilliant at detecting correlation; the more robust your data set, the better your chance of identifying correlations, even complex ones involving multiple variables. But correlation never was causation, and never will be. All the big data in the world by itself won't tell you whether smoking causes lung cancer. To really understand the relation between smoking and cancer, you need to run experiments and develop mechanistic understandings of things like carcinogens, oncogenes, and DNA replication. Merely tabulating a massive database of every smoker and nonsmoker in every city in the world, with every detail about when they smoked, where they smoked, how long they lived, and how they died, would not, no matter how many terabytes it occupied, be enough to induce all the complex underlying biological machinery.
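
A minimal simulation (my illustration, not Marcus's) of why tabulation alone cannot settle causation: a hidden common cause produces a strong correlation between two variables, neither of which causes the other.

```python
import random

random.seed(0)

# Hidden confounder Z drives both X and Y; X does not cause Y, and Y does not cause X.
z = [random.gauss(0, 1) for _ in range(10_000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def correlation(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    var_a = sum((ai - ma) ** 2 for ai in a) / n
    var_b = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (var_a * var_b) ** 0.5

# A big data set will reliably report this correlation (about 0.8), but no amount
# of tabulation reveals that intervening on X would leave Y unchanged.
print(round(correlation(x, y), 2))
```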

If it makes me nervous when people in the business world put too much faith in Big Data, it makes me even more nervous to see scientists do the same. Certain corners of neuroscience have taken on an "if we build it, they will come" attitude, presuming that neuroscience will sort itself out as soon as we have enough data.

It won't. If we have good hypotheses, we can test them with Big Data, but Big Data shouldn't be our first port of call; it should be where we go once we know what we are looking for.

paul_saffo's picture
Technology Forecaster; Consulting Associate Professor, Stanford University

Nature and nature's laws lay hid in night;
God said "Let Newton be" and all was light. 

—Alexander Pope

The breathtaking advance of scientific discovery has the unknown on the run. Not so long ago, the Creation was 8,000 years old and Heaven hovered a few thousand miles above our heads. Now Earth is 4.5 billion years old and the observable Universe spans 92 billion light years. Pick any scientific field and the story is the same, with new discoveries—and new life-touching wonders—arriving almost daily. Like Pope, we marvel at how hidden Nature is revealed in scientific light.

Our growing corpus of scientific knowledge evokes Teilhard de Chardin's arresting metaphor of the noosphere, the growing sphere of human understanding and thought. In our optimism, this sphere is like an expanding bubble of light in the darkness of ignorance.

Our optimism leads us to focus on the contents of this sphere, but its surface is more important for it is where knowledge ends and mystery begins. As our scientific knowledge expands, contact with the unknown grows as well. The result is not merely that we have mastered more knowledge (the sphere's volume), but we have encountered an ever-expanding body of previously unimaginable mysteries. A century ago, astronomers wondered whether our galaxy constituted the entire universe; now they tell us we probably live in an archipelago of universes.

The science establishment justifies its existence with the big idea that it offers answers and ultimately solutions. But privately, every scientist knows that what science really does is discover the profundity of our ignorance. The growing sphere of scientific knowledge is not Pope's night-dispelling light, but a campfire glow in the gloom of vast mystery. Touting discoveries helps secure funding and gain tenure, but perhaps the time has come to retire discovery as the ultimate measure of scientific progress. Let us measure progress not by what is discovered, but rather by the growing list of mysteries that remind us of how little we really know.

joel_gold's picture
Psychiatrist; Clinical Associate Professor of Psychiatry, NYU School of Medicine; Coauthor (with Ian Gold), Suspicious Minds
ian_gold's picture
Neuroscientist; Canada Research Chair in Philosophy & Psychiatry, McGill University; Coauthor (with Joel Gold), Suspicious Minds

In 1845, Wilhelm Griesinger, author of the most important textbook of psychiatry of the day, wrote: “what organ must necessarily and invariably be diseased where there is madness? … Physiological and pathological facts show us that this organ can only be the brain…” Griesinger’s truism is regularly reiterated in our own time because it expresses the basic commitment of contemporary biological psychiatry.

The logic of Griesinger’s argument seems unassailable: severe mental illness has to originate in a physiological abnormality of some part of the body, and the only plausible candidate location is the brain. Since the mind is nothing over and above the activity of the brain, the disordered mind is nothing more than a disordered brain. True enough. But that is not to say that mental disorders can, or will, be described by genetics and neurobiology. Here’s an analogy. Earthquakes are nothing over and above the movements of a vast number of atoms in space, but the theory of earthquakes says nothing at all about atoms but only about tectonic plates. The best scientific explanation of a phenomenon depends on where real human beings find comprehensible patterns in the universe, and not how the universe is constituted. God may understand earthquakes and mental illness in terms of atoms, but we may not have the time or the intelligence to do so.

It’s not a radical idea that understanding and treating brain disorders sometimes has to move outside the skull. A man's heart hurls an embolus into his brain. He might now be unable to produce or understand speech, move one half of his body, or see half of the world in front of him. He has had a stroke and his brain is now damaged. The cause of his brain illness did not originate there, but in his heart. His physicians will do what they can to limit further damage to his brain tissue and perhaps even restore some of the function lost due to the embolism. But they will also try to diagnose and treat his cardiovascular disease. Is he in atrial fibrillation? Is his mitral valve prolapsed? Does he require blood thinner? And they won't stop there. They will want to know about the patient's diet, exercise regimen, cholesterol level and any family history of heart disease.

Severe mental illness is also an assault on the brain. But like the embolus it may sometimes originate outside the brain. Indeed, psychiatric research has already given us clues suggesting that a good theory of mental illness will need concepts that make reference to things outside the skull. Psychosis provides a good example. A family of disorders, psychosis is marked by hallucinations and delusions. The central form of psychosis, schizophrenia, is the psychiatric brain disease par excellence. But schizophrenia interacts with the outside world, in particular, the social world. Decades of research has given us robust evidence that the risk of developing schizophrenia goes up with experience of childhood adversity, like abuse and bullying. Immigrants are at about twice the risk, as are their children. And the risk of illness increases in a near-linear fashion with the population of your city and varies with the social features of neighborhoods. Stable, socially coherent neighborhoods have a lower incidence than neighborhoods that are more transient and less cohesive. We don’t yet understand what it is about these social phenomena that interacts with schizophrenia, but there is good reason to think they are genuinely social.

Unfortunately, these environmental determinants of psychosis go largely ignored, but they provide opportunities for useful interventions. We don’t yet have a genetic therapy for schizophrenia, and antipsychotic drugs can only be used after the fact and are not nearly as good as we’d like them to be. The Decade of the Brain produced a great deal of important research into brain function, and the new BRAIN initiative will do so as well. But almost none of it has yet helped, or is likely to help, the patients who suffer from mental illness or those who treat them. Reducing child abuse and improving the quality of the urban environment, however, might very well prevent some people from ever developing a psychotic illness at all.

Of course, whatever it is about the social determinants of psychosis that makes them risk factors, they must have some downstream effect on the brain; otherwise they would not raise the risk of schizophrenia. But they themselves are no more neural phenomena than smoking is a biological phenomenon because it is a cause of lung cancer. The theory of schizophrenia will have to be more expansive, therefore, than the theory of the brain and its disorders.

That a theory of mental illness should make reference to the world outside the brain is no more surprising than that the theory of cancer has to make reference to cigarette smoke. And yet what is commonplace in cancer research is radical in psychiatry. The time has come to expand the biological model of psychiatric disorder to include the context in which the brain functions. In understanding, preventing and treating mental illness, we will rightly continue to look into the neurons and DNA of the afflicted and unafflicted. To ignore the world around them would be not only bad medicine but bad science.

susan_fiske's picture
Eugene Higgins Professor, Department of Psychology, Princeton University

The idea that people operate mainly in the service of narrow self-interest is already moribund, as social psychology and behavioral economics have shown. We now know that people are not rational actors, instead often operating on automatic, based on bias, or happy with hunches. Still, it's not enough to make us smarter robots, or to accept that we are flawed. The rational actor's corollary—all we need is to show more competence—also needs to be laid to rest. Even regular people who are not classical economists sometimes think that sheer cut-throat competence would be enough—on the job, in the marketplace, in school, and even at home.

Talent and problem-solving ability are indeed crucial, of course. But there's more.

We are social beings, embedded in a human environment even more than in a natural or a constructed one. If other people are our ecological niche, then we need to understand how to live amongst them. We do this by figuring out two things about them: not only how good they will be at getting where they want to go, but also where they are trying to go.

People are a miracle of self-propelled agency. Not for nothing are humans attuned to each other’s intentions.  We need—and our ancestors needed—to know whether others have friendly or hostile intentions toward us. In my world, we call this a person’s warmth, and others have called it trustworthiness, morality, communality, or worthy intentions.

People are most effective in social life if we are—and show ourselves to be—both warm and competent. This is not to say that we always get it right, but the intent and the effort must be there. This is also not to say that love is enough, because we do have to prove capable of acting on our worthy intentions. The warmth-competence combination supports both short-term cooperation and long-term loyalty. In the end, it's time to recognize that people survive and thrive with both heart and mind.

fiery_cushman's picture
Assistant Professor, Department of Psychology, Harvard University

Many scientists are seduced by a two-step path to success: First identify a big effect and then find the explanation for it. Although not often discussed, there is an implicit theory behind this approach. The theory is that big effects have big explanations. This is critical because scientists are interested in the explanations, not in the effects—Newton is famous not for showing that apples fall, but for explaining why. So, if the implicit theory is wrong, then a lot of people are barking up the wrong trees.

There is, of course, an alternative and very plausible source of big effects: Many small explanations interacting. As it happens, this alternative is worse than the wrong tree—it's a near-hopeless tree. The wrong tree would simply yield a disappointingly small explanation. But the hopeless tree has so many explanations tangled in knotted branches that extraordinary effort is required to obtain any fruit at all.

So, do big effects tend to have big explanations, or many explanations? There is probably no single, simple and uniformly correct answer to this question. (It's a hopeless tree!) But, we can use a simple model to help make an educated guess.

Suppose that the world is composed of three kinds of things. There are levers we can pull. Pulling these levers causes observable effects: Lights flash, bells ring, and apples fall. Finally, there is a hidden layer of causal forces—the explanations—that connect the levers to their effects.

In order to explore this toy world I simulated it on my laptop. First, I created one thousand levers. Each lever activated between one and five hidden mechanisms (200 levers activated just one mechanism each, another 200 activated two, etc.). In my simulation, each mechanism was simply a number drawn from a normal distribution with a mean of zero. Then, the hidden mechanisms activated by each lever were summed to produce an observable effect. So, 200 of the levers produced effects equal to a single number drawn from a normal distribution, another 200 levers produced effects equal to the sum of two such numbers, and so forth.
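A minimal sketch of this setup in Python with NumPy (the seed and variable names are arbitrary, and this is an illustrative reconstruction rather than the original code):

    import numpy as np

    rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility only

    # 1,000 levers: 200 for each count of hidden mechanisms from 1 to 5.
    # Each mechanism is a single draw from a standard normal distribution,
    # and a lever's observable effect is the sum of its mechanisms.
    n_mechanisms = np.repeat(np.arange(1, 6), 200)
    effects = np.array([rng.normal(size=k).sum() for k in n_mechanisms])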

After this was done, I had a list of 1,000 effects of varying size. Some were large (very negative, or very positive), while others were small (close to zero). First I looked at the 50 smallest effects, curious to see how many of them resulted from a single, isolated mechanism: 11 out of 50. Then I checked how many of them were the result of five mechanisms, summed together: 6 out of 50. On the whole, the very smallest effects tended to have fewer explanations.

Next I looked at the 50 largest effects. These effects were much larger—about 100 times larger, on average. But they also tended to have many more explanations. Among those 50 largest effects, 25 of them had five explanations, but not even one of them had a single explanation. The first such single-explanation-effect was ranked 103 in size. (These examples help to make my point tangible, but its essence can be captured more succinctly: The standard deviation of the sum of two uncorrelated random variables is greater than the standard deviation of either individually).

So, if a scientist's exclusive goal were simplicity, then in my toy world she ought to avoid the very biggest effects and instead pursue the smallest ones. Yet, she might feel cheated because this method would only identify explanations of tremendously little influence. As a crude method of balancing simplicity (few explanations) against influence (big explanations), I computed a sort of "expected value" of experimentation for different effect sizes: The probability of finding a one-cause-effect, multiplied by the size of the effect in question. As you might guess, the highest expected values tend to fall towards the middle of the range of effect sizes. Balance, it seems, finds a soul mate in modesty.
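The tallies and the rough "expected value" heuristic can be computed from the same kind of simulated data. The sketch below repeats the setup so that it runs on its own; the decile binning is one arbitrary way of grouping effect sizes, and the particular counts will vary with the random seed:

    import numpy as np

    rng = np.random.default_rng(0)
    n_mechanisms = np.repeat(np.arange(1, 6), 200)
    effects = np.array([rng.normal(size=k).sum() for k in n_mechanisms])

    # Rank effects by absolute size and inspect the extremes.
    order = np.argsort(np.abs(effects))
    smallest, largest = order[:50], order[-50:]
    print("single-mechanism levers among the 50 smallest effects:",
          int((n_mechanisms[smallest] == 1).sum()))
    print("single-mechanism levers among the 50 largest effects:",
          int((n_mechanisms[largest] == 1).sum()))

    # A crude "expected value" of experimentation: within each bin of effect
    # size, the probability that an effect has a single cause times the
    # typical (absolute) size of effects in that bin.
    edges = np.quantile(np.abs(effects), np.linspace(0, 1, 11))  # decile edges
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (np.abs(effects) >= lo) & (np.abs(effects) < hi)
        if in_bin.any():
            p_single = (n_mechanisms[in_bin] == 1).mean()
            value = p_single * np.abs(effects[in_bin]).mean()
            print(f"|effect| in [{lo:.2f}, {hi:.2f}): expected value ~ {value:.3f}")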

Now, there are some caveats to my back-of-the-envelope calculations. Most scientists are capable of working out causal mechanisms that have more than one dimension. (Some can even handle five!) Also, the actual causal mechanisms that scientists investigate are far more complicated than my model allows for. One explanation may be related to many effects, multiple explanations may combine with each other nonlinearly, explanations may be correlated, and so forth.

Still, there is value in retiring the implicit theory that we should pursue the largest effects most doggedly. I suspect that every scientist has her own favorite example of the perils of this theory. In my field, lakes of ink have been spilled attempting to find "the" explanation for why people consider it acceptable to redirect a speeding trolley away from five people and towards one, but not acceptable to hurl one person in front of a trolley in order to stop it from hitting five. This case is alluring because the effect is huge and its explanation is not at all obvious. With the benefit of hindsight, however, there is considerable agreement that it does not have just one explanation. In fact, we have tended to learn more from studying much smaller effects with a key benefit: a sole cause.

It is natural to praise research that delivers large effects and the theories that purport to explain them. And this praise is often justified—not least because the world has large problems that demand ambitious scientific solutions. Yet science can advance only at the rate of its best explanations. Often, the most elegant ones are clothed around effects of modest proportions.

jamil_zaki's picture
Assistant Professor of Psychology, Stanford University

Human beings are the unequivocal world champions of niceness. We act kindly not only towards people who belong to our own social groups or can reciprocate our generosity, but also towards strangers thousands of miles away who will never know we helped them. All around the world, people sacrifice their resources, well-being, and even their lives in the service of others.

For behavioral scientists, the great and terrible thing about altruism—behavior that helps others at a cost to the helper—is its inherent contradictions. Prosocial behaviors appear to contradict economic and evolutionary axioms about how humans should behave: selfishly, nasty and brutish, red in tooth and claw, or whichever catchphrase you prefer. After all, how could organisms that sacrifice for others survive, and why would nature endow us with such self-defeating tendencies?

In recent decades, researchers have largely solved this problem, offering reasons that perfectly self-oriented organisms might behave altruistically. Solving the "altruism paradox" becomes trivial when individuals help family members (thus advancing helpers' genes) or others who can reciprocate (increasing helpers' chances of future gains) or help others in public (enhancing helpers' reputation). We see these motives at work all around us, in parenting, favors for bosses, and opera patrons donating just enough to get their names on the "gold donor" plaques in theater lobbies.

More recently, my colleagues and I, as well as other neuroscientists, uncovered another "selfish" motive for altruism: helping others simply feels good. Giving to others engages brain structures associated with reward and motivation, similar to those that come online when people see beautiful faces, win money, or eat chocolate. Further, "reward-related" brain activity associated with helping tracks people's willingness to act generously. This doesn't mean that altruism is the psychological equivalent of Ben and Jerry's, but it does provide converging evidence for James Andreoni's idea that generosity produces a hedonic "warm glow."

One common response I receive when presenting this work has grown increasingly bothersome. Often, an audience member will claim that if people experience helping as rewarding, then their actions are not "really" altruistic at all. The claim as I understand it traces back to the Kantian notion—embedded in the "cost to the helper" section of altruism's definition—that virtuous action is motivated by principle alone, and that cashing in on that action, whether through material gain or psychological pleasure, disqualifies it as being virtuous. Oftentimes, this contention devolves into long, animated, and (to my mind) useless attempts to find space for true altruism amid an avalanche of ulterior motives.

This altruism hierarchy, with a near-mystic "true" altruism residing somewhere in the distance and our sullied attempts at it crowding real life, is widespread. It also plays out directly in people’s judgment. For instance, a study published yesterday by George Newman and Daylian Cain demonstrated that people judge others as less moral when they act altruistically and gain in the process than when they gain from clearly non-altruistic behavior. In essence, people view "tainted altruism" as worse than no altruism at all.

I think the altruism hierarchy should be retired. I do believe that people often help others absent the goal of any personal gain. Dan Batson, Philip Kitcher, and others have done the philosophical and empirical work of distinguishing other-oriented and self-oriented motives for prosociality. But I also believe that the reservation of terms such as "pure" or "real" for actions bereft of any personal gain is less than useful.

This is for two reasons, which both connect with the broader idea of self-negation. First, the altruism hierarchy is logically self-negating. Attempts to identify true altruism often boil down to redacting motivation from behavior altogether. The story goes that in order to be pure, helping others must dissociate from personal desire (to kiss up, look good, feel rewarded, and so forth). But it is logically fallacious to think of any human behavior as amotivated. De facto, when people engage in actions, it is because they want to. This could represent an overt desire to gain personally, but could also stem from previous learning (for instance, that helping others in the past has felt good or provided personal gain) that translates into an intuitive prosocial preference. Disqualifying self-motivated behavior from being altruistic obscures the universality of motivation in producing all behavior, generous or not.

Second, the altruism hierarchy is morally self-negating. It often appears to me that critics of "impure" altruism chide helpers for acting in human ways, for instance by doing things that feel good. The ideal, then, seems to entail acting altruistically while not enjoying those actions one bit. To me, this is no ideal at all. I think it's profound and downright beautiful to think that our core emotional makeup can be tuned towards others, causing us to feel good when we do. Color me selfish, but I'd take that impure altruism over a de-enervated, floating ideal any day.

kate_mills's picture
Doctoral student, UCL Institute of Cognitive Neuroscience

Currently, the majority of individuals funded or employed to conduct scientific experiments have been trained in traditional academic settings. This includes not only the 12 years of compulsory education, but also another 6 to 10 years of university education—which are often followed by years of post-doctoral training. While this formal academic training undoubtedly equips individuals with the tools and resources to become successful scientists, informally trained individuals of all ages are just as able to contribute to our knowledge of the world through science.

These "citizen scientists" are often lauded for lightening the load on academic researchers engaged in big data projects. Citizen scientists have contributed to these projects by identifying galaxies or tracing neural processes, and typically without traditional incentives or rewards like payment or authorship. However, limiting the potential contributions of informally trained individuals to the roles of data-collector or data-processor discounts the abilities of citizen scientists to inform study design, as well as data analysis and interpretation. Soliciting the opinions of individuals who are participants in scientific studies (e.g., children, patients) can help traditional scientists design ecologically valid and engaging studies. Equally, these populations might have their own scientific questions, or provide new and diverse perspectives to the interpretation of results.

Importantly, science is not limited to adults. Children as young as eight have co-authored scientific reports. Teenagers have made important health discoveries with tangible outcomes. Unfortunately, these young scientists face many obstacles that institutionally funded individuals often take for granted, such as access to previously published scientific findings. The rise of open access publication, as well as many open science initiatives, make the scientific environment friendlier for citizen scientists. Unfortunately, many traditional science practices remain out of reach for those without sufficient funds.

What we think we know about ourselves through science could be skewed, since the majority of psychology studies sample individuals who do not represent the population as a whole. These WEIRD (Western, Educated, Industrialized, Rich, Democratic) samples make up the majority of non-clinical neuroimaging studies as well. Increased awareness of this bias has prompted researchers to actively seek out more representative samples. However, there is less discussion or awareness around the potential biases introduced by WEIRD scientists.

If most funded and published scientific research is conducted by a sample of individuals that have been trained to be successful in academia, then we are potentially biasing scientific questions and interpretations. Individuals who might not fit into an academic mould, but nevertheless are curious to know the world through the scientific method, face many barriers. Crowd funded projects (and even scientists) are beginning to receive recognition from fellow scientists dependent on dwindling numbers of grants and academic positions. However, certain scientific experiments are more difficult, if not impossible, to conduct without institutional support, e.g., studies involving human participants. Community-supported checks and balances remain essential for scientific projects, but perhaps they too can become unbound from traditional academic settings.

The means for collecting and analyzing data are becoming more accessible to the public each day. New ethical issues will need to be discussed and infrastructures built to accommodate those conducting research outside of traditional settings. With this, we will see an increase in the number of scientific discoveries made by informally trained "citizen scientists" of all ages and backgrounds. These previously unheard voices will add valuable contributions to our knowledge of the world.

athena_vouloumanos's picture
Associate Professor of Psychology, Director, NYU Infant Cognition and Communication Lab, New York University

In evolution classes, Lamarckism–the notion promoted by Lamarck that an organism could acquire a trait during its lifetime and pass that trait to its offspring–is usually briefly discussed and often ridiculed. Darwin's theory of natural selection is presented as the one true mechanism of evolutionary change.

In Lamarck's famous example, giraffes that ate leaves from higher branches could potentially grow longer necks than giraffes that ate from lower branches, and pass on their longer necks to their offspring. The inheritance of acquired characteristics was originally considered a legitimate theory of evolutionary change, with even Darwin proposing his own version of how organisms might inherit acquired characteristics.

Experimental hints of intergenerational transfer of acquired traits came in 1923 when Pavlov reported that while his first generation of white mice needed 300 trials to learn where he hid food, their offspring needed only 100, and their grandchildren only 30. But Pavlov's description didn't make clear whether the mice were all housed together allowing for some communication between mice or other kinds of learning. Still other early studies of potential intergenerational trait transfer in plants, insects, and fish also suffered from alternative interpretations or poorly controlled experiments. Lamarckism was dismissed.

But more recent studies–using modern reproduction techniques like in vitro fertilization and proper controls–can physically isolate generations from each other and rule out any kind of social transmission or learning. For example, mice that were fear-conditioned to an otherwise neutral odor produced baby mice that also feared that odor. Their grandbaby mice feared it too. But unlike in Pavlov's studies, communication couldn’t be the explanation. Because the mice never fraternized, and cross-fostering experiments further ruled out social transmission, the newly acquired specific fear had to be encoded in their biological material. (Biochemical analysis showed that the relevant change was likely in the methylation of olfactory reception genes in the sperm of the parents and offspring. Methylation is one example of an epigenetic mechanism.) Natural selection is still the primary shaper of evolutionary change, but the inheritance of acquired traits might play an important role too.

These findings fit in a relatively new field of study called epigenetics. Epigenetic control of gene expression contributes to cells in a single organism (which share the same DNA sequence) developing differently into, for example, heart cells or neurons. But the last decade has shown actual evidence–and possible mechanisms–for how the environment and the organism's behavior in it might cause heritable changes in gene expression (with no change in the DNA sequence) that are passed on to offspring. In recent years, we have seen evidence of epigenetic inheritance across a wide range of morphological, metabolic, and even behavioral traits.

The intergenerational transmission of acquired traits is making a comeback as a potential mechanism of evolution. It also opens up the interesting possibility that better diet, exercise, and education, which we thought couldn't affect the next generation–except, with luck, through good example–actually could.
 

tor_norretranders's picture
Writer; Speaker; Thinker, Copenhagen, Denmark

The concept of altruism is ready for retirement.

Not that the phenomenon of helping others and doing good to other people is about to go away, not at all. On the contrary, the appreciation of the importance of bonds between individuals is on the rise in the modern understanding of animal and human societies.

What needs to go away is the basic idea behind the concept of altruism: the idea that there is a conflict of interest between helping yourself and helping others.

The word altruism was coined in the 1850s by the great French sociologist Auguste Comte. What it means is that you do something for other people (the Old French altrui from the Latin alter), not just for yourself. Thus, it opposes egoism or selfishness.

But this concept is rooted in the notion that human beings (and animals) are really dominated by selfishness and egoism, so that you need a concept to explain why they sometimes behave unselfishly and kindly toward others.

But the reality is different: Humans are deeply bound to other humans and most actions are really reciprocal and in the interest of both parties (or, in the case of hatred, in the disinterest of both). The starting point is neither selfishness nor altruism, but the state of being bound together. It is an illusion to believe that you can be happy when no one else is. Or that other people will not be affected by your unhappiness.

Behavioral science and neurobiology have shown how intimately we are bound: Phenomena like mimicry, emotional contagion, empathy, sympathy, compassion and prosocial behavior are evident in humans and animals. We are influenced by the well-being of others in more ways than we normally care to think of. Therefore a simple rule applies: Everyone feels better when you are well. You feel better when everyone is well.

This correlated state is the real one. The ideas of egoism and hence its opposite concept altruism are second-order concepts, shadows or even illusions.

This applies also to the immediate psychological level: If helping others fills you with a warm and rewarding glow, as it is called in experimental economics, is it not also in your own interest to help others? Are you not, then, helping yourself in helping others? Is it not in your own interest to help? Being kind to others means that you are being kind to yourself.

Likewise, if you feel better and make more money when you are generous and contribute to the wellbeing and resources of other people—as in welfare societies like my own Denmark, which became very rich through sharing and equality—then the person who wants to keep everything for himself, with no gift-giving, no tax-paying and no openness, is just an amateur egoist. Real egoists share.

Therefore, it is not altruistic to be an altruist. Just wise.

Helping others is in your own interest, so we do not need a concept to explain that behavior. Auguste Comte's concept is therefore ready for retirement.

And we can all just help each other without wondering why.

june_gruber's picture
Assistant Professor of Psychology, University of Colorado, Boulder

One idea in the study of emotion and its impact on psychological health is overdue for retirement: that negative emotions (like sadness or fear) are inherently bad or maladaptive for our psychological well-being, and positive emotions (like happiness or joy) are inherently good or adaptive. Such value judgments are to be understood, within the framework of affective science, as depending on whether an emotion impedes or fosters a person's ability to pursue goals, attain resources, and function effectively within society. Claims of the sort "sadness is inherently bad" or "happiness is inherently good" must be abandoned in light of burgeoning advances in the scientific study of human emotion.

Let's start with negative emotions. Early hedonic theories defined well-being, in part, as the relative absence of negative emotion. Empirically based treatments like cognitive-behavioral therapy also focus heavily on the reduction of negative feelings and moods as part of enhancing well-being. Yet a strong body of scientific work suggests that negative emotions are essential to our psychological well-being. Here are 3 examples. First, from an evolutionary perspective, negative emotions aid in our survival—they provide important clues to threats or problems that need our attention (such as an unhealthy relationship or dangerous situation). Second, negative emotions help us focus: they facilitate more detailed and analytic thinking, reduce stereotypic thinking, enhance eyewitness memory, and promote persistence on challenging cognitive tasks. Third, attempting to thwart or suppress negative emotions—rather than accept and appreciate them—paradoxically backfires and increases feelings of distress and intensifies clinical symptoms of substance abuse, overeating, and even suicidal ideation. Counter to these hedonic theories of well-being, negative emotions are hence not inherently bad for us. Moreover, the relative absence of them predicts poorer psychological adjustment.

Positive emotions have been conceptualized as pleasant or positively valenced states that motivate us to pursue goal-directed behavior. A longstanding scientific tradition has focused on the benefits of positive emotions, ranging from cognitive benefits such as enhanced creativity, to social benefits like fostering relationship satisfaction and prosocial behavior, to physical health benefits such as enhanced cardiovascular health. From this work has emerged the assumption—both implicitly and explicitly—that positive emotional states should always be maximized. This has fueled the birth of entire subdisciplines and garnered momentous popular attention. But there's a mounting body of work against the claim that positive emotions are inherently good. First, positive emotions foster more self-focused behavior, including increased selfishness, greater stereotyping of out-group members, increased cheating and dishonesty, and decreased empathic accuracy in some contexts. Second, positive emotions are associated with greater distractibility and impaired performance on detail-oriented cognitive tasks. Third, because positive emotion may promote decreased inhibition, it has been associated with greater risk-taking behaviors and mortality rates. Indeed, the presence of positive emotions is not always adaptive and sometimes can impede our well-being and even survival.

We are left to conclude that valence is not value: we cannot infer value judgments about emotions on the basis of their positive or negative valence. There is no intrinsic goodness or badness of an emotion merely because of its positivity or negativity, respectively. Instead, we must refine specific value-based determinants of an emotion's functionality. Towards this end, emerging research highlights critical variables to focus on. Importantly, the context in which an emotion unfolds can determine whether it helps or hinders an individual's goal, or which types of emotion regulatory strategies (reappraising or distracting) will best match the situation. Relatedly, the degree of psychological flexibility someone possesses—including how quickly one can shift emotions or rebound from a stressful situation—promotes critical clinical health outcomes. Likewise, we find that psychological well-being is not entirely determined by the presence of one type or kind of emotion but rather by an ability to experience a rich diversity of both positive and negative emotions. Whether or not an emotion is "good" or "bad" seems to have surprisingly little to do with the emotion itself, and much more to do with how mindfully we ride the ebbs and tides of our rich emotional life.
 

dean_ornish's picture
Founder and President of the non-profit Preventive Medicine Research Institute

It is a commonly held but erroneous belief that a larger study is always more rigorous or definitive than a smaller one, and that a randomized controlled trial is always the gold standard. However, there is a growing awareness that size does not always matter and that a randomized controlled trial may introduce its own biases. We need more creative experimental designs.

In any scientific study, the question is: "What is the likelihood that observed differences between the experimental group and the control group are due to the intervention rather than to chance?" By convention, if the probability of seeing differences this large by chance alone is less than 5%, the result is considered statistically significant, i.e., a real finding.

A randomized controlled trial (RCT) is based on the idea that if you randomly-assign subjects to an experimental group that receives an intervention or to a control group that does not, then any known or unknown differences between the groups that might bias the study are as likely to affect one group as another.

While that sounds good in theory, in practice an RCT can often introduce its own set of biases and thus undermine the validity of the findings.

For example, an RCT may be designed to determine whether dietary changes can prevent heart disease and cancer. Investigators identify patients who meet certain selection criteria, e.g., that they have heart disease. When they meet with prospective study participants, investigators describe the study in great detail and ask, "If you are randomly-assigned to the experimental group, would you be willing to change your lifestyle?" In order to be eligible for the study, the patient needs to answer, "Yes."

However, if that patient is subsequently randomly-assigned to the control group, it is likely that this patient will begin to make lifestyle changes on their own, since they have already been told in detail what these lifestyle changes are. If they're studying a new drug that is only available to the experimental group, then it is less of an issue. But in the case of behavioral interventions, those who are randomly-assigned to the control group are likely to make at least some of these changes because they believe that the investigators must think that these lifestyle changes are worth doing or they wouldn't be studying them.

Or, they may be disappointed that they were randomly-assigned to the control group, and so they are more likely to drop out of the study, creating selection bias. 

Also, in a large-scale RCT, it is often hard to provide the experimental group enough support and resources to be able to make lifestyle changes. As a result, adherence to these lifestyle changes is often less than the investigators may have predicted based on earlier pilot studies with smaller groups of patients who were given more support. 

The net effect of the above is to (a) reduce the likelihood that the experimental group will make the desired lifestyle changes, and (b) increase the likelihood that the control group will make similar lifestyle changes. This reduces the differences between the groups and makes it less likely to show statistically significant differences between them. 

As a result, the conclusion that the intervention had no significant effect may be misleading. This is known as a "type 2 error," meaning that there was a real difference but these design issues obscured the ability to detect it.
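A small simulation makes the dilution concrete. The sketch below, in Python with NumPy and SciPy, invents an effect size, a sample size, and adherence figures purely for illustration; it is not modeled on any particular trial. It compares the chance of detecting a real benefit when the experimental group fully adheres and the control group does not change, against the chance when adherence is partial and the control group adopts some of the changes on its own:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_per_arm = 500        # hypothetical number of participants per group
    true_benefit = 0.2     # hypothetical benefit of fully adopting the change

    def simulate_trial(adherence_tx, adoption_ctrl):
        """One simulated trial: each group's outcome improves in proportion
        to how much of the lifestyle change it actually makes."""
        tx = rng.normal(true_benefit * adherence_tx, 1.0, n_per_arm)
        ctrl = rng.normal(true_benefit * adoption_ctrl, 1.0, n_per_arm)
        return stats.ttest_ind(tx, ctrl).pvalue

    def power(adherence_tx, adoption_ctrl, trials=2000):
        """Fraction of simulated trials that reach p < 0.05."""
        pvals = [simulate_trial(adherence_tx, adoption_ctrl) for _ in range(trials)]
        return float(np.mean(np.array(pvals) < 0.05))

    # Ideal trial: full adherence in the experimental arm, none in the control arm.
    print("power, no dilution:   ", power(1.0, 0.0))
    # Diluted trial: partial adherence, and the control group changes on its own.
    print("power, diluted groups:", power(0.6, 0.3))

With the same underlying benefit, the diluted trial reaches statistical significance far less often; each missed detection is the kind of type 2 error described above.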

That's just what happened in the Women's Health Initiative study, which followed nearly 49,000 middle-aged women for more than eight years. The women in the experimental group were asked to eat less fat and more fruits, vegetables, and whole grains each day to see if it could help prevent heart disease and cancer. The women in the control group were not asked to change their diets. 

However, the experimental group participants did not reduce their dietary fat as recommended—fat still made up over 29 percent of their diet, not the study's goal of less than 20 percent. Also, they did not increase their consumption of fruits and vegetables very much. In contrast, the control group reduced its consumption of fat almost as much and increased its consumption of fruits and vegetables, diluting the between-group differences to the point that they were not statistically significant. The investigators reported that these dietary changes did not protect against heart disease or cancer, when in fact the hypothesis was never really tested.

Paradoxically, a small study may be more likely to show significant differences between groups than a large one. The Women's Health Initiative study cost almost a billion dollars yet did not adequately test the hypotheses. A smaller study provides more resources per patient to enhance adherence at lower cost. 

Also, the idea in RCTs that you're changing only one independent variable (the intervention) and measuring one dependent variable (the result) is often a myth. For example, let's say you're investigating the effects of exercise on preventing cancer. You devise a study whereby you randomly assign one group to exercise and the other group to no exercise. On paper, it appears that you're only working with one independent variable.

In actual practice, however, when you place people on an exercise program, you're not just getting them to exercise; you're actually affecting other factors that may confound the interpretation of your results even if you're not aware of them. 

For example, people often exercise with other people, and there's increasing evidence that enhanced social support significantly reduces the risk of most chronic diseases. You're also enhancing a sense of meaning and purpose by participating in a study, and these also have therapeutic benefits. And when people exercise, they often begin to eat healthier foods. 

We need new, more thoughtful experimental designs and systems approaches that take into account these issues. Also, new genomic insights will make it possible to better understand individual variations to treatment rather than hoping that this variability will be "averaged out" by randomly-assigning patients.

kate_jeffery's picture
Professor of Behavioural Neuroscience, Dept. of Experimental Psychology, University College London

We humans have had a tough time coping with our unremarkable place in the grand scheme of things. First Copernicus trashed our belief that we live at the centre of the universe, followed shortly thereafter by Herschel and co. who suggested that our sun was not at the centre of it either; then Darwin came along and showed that according to our biological heritage, we are just another animal. But we have clung on for dear life to one remaining belief about our specialness; that we, and we alone, have conscious minds. It is time to retire, or indeed euthanize and cremate, this anthropocentric pomposity.

Descartes thought of animals as mindless automata, and vivisection without anesthetic was common among early medical researchers. Throughout much of the 20th century, psychologists believed that animals—while clearly resembling humans in their neuroanatomy—perform their activities essentially unthinkingly, a viewpoint that reached its zenith (or perhaps, my preferred word, nadir) in Behaviorism, the psychological doctrine that rejects inner mental states like plans and purposes as unable to be studied, or—in the radical version—as not even existing. The undeniable fact that humans have inner mental states and purposes was attributed to our special psychological status: we have language, and therefore we are different. Animals remain essentially Cartesian automata though.

Many of our scientific experiments have validated this view. Rats in a Skinner box (named after the most radical Behaviorist of all, B. F. Skinner) do indeed appear to act mindlessly—they press the levers over and over again, they seem slow to learn, slow to adapt to new contingencies, they don't really seem to think about what they are doing. Moreover, in further testament to this mindlessness, quite large regions of the brain can be damaged without affecting performance. Rats in a maze seem similarly clueless—they take a long time to learn (weeks to months sometimes) and a long time to adapt to change. Clearly, rats and other animals are stupid—and more than that, they are mindless.

Fond though I am of rats, I would not wish to defend their intelligence. But the assumption that they do not have inner mental states needs examining. Behaviorism arose from the argument of parsimony (Occam's razor)—why postulate mental states in animals when their behavior can be explained in simpler ways? The success of Behaviorism arose in part from the fact that the kinds of behaviors studied back then could, indeed, be explained by the operation of mindless, automatic processes. It does not take deep reflection to press a lever in a Skinner box any more than it does to key in your PIN. But in the mid-20th century, a development occurred that began to overturn the view that all behavior is mindless. This development was single neuron recording, the ability to follow the activity of individual brain cells—the little cogs and sprockets that make up the workings of the actual brain. Using this technique, behavioral electrophysiologists have been able to actually see, for themselves, the operation of inner mental processes in animals.

The most striking discovery along these lines has been the place cells, neurons in the hippocampus, a small but vitally important structure located deep in the temporal lobes. Place cells are (we now know) key components of an internal representation of the environment—often called the cognitive map—which forms when an animal explores a new place, and which reactivates when the animal re-enters that place. Single neuron recording shows us that this map forms spontaneously, in the absence of reward and independently of the animal's behavior. When an animal is choosing between alternative routes to a goal, place cells representing the alternative possibilities become spontaneously active even though the animal has not gone there yet—as if the animal is thinking about the choices. Place cells certainly seem to be an internal representation: furthermore, we humans have them too, and human place cells reactivate when people think about places.

Place cells may well be an internal representation of the kind eschewed by behaviorists, but does this mean, though, that rats and other animals have minds? Not necessarily… place cells could still be part of an automatic and unconscious representation system. Our own ability to conjure up remembered or imagined images "in our mind's eye" to use for recollection or planning might still be special. This seems unlikely though, doesn't it? Mindlessness would only be a parsimonious conjecture if we didn't know about our own minds. But we do… and we know that we are extraordinarily like animals in every respect, right down to the place cells. To suppose that the ability to mentally represent the outside world sprang into existence, fully formed, in the evolutionary transition (if the concept of "transition" even makes sense) between animals and humans seems improbable at best, deeply arrogant at worst. When we look into the animal brain we see the same things we see in our own brains. Of course we do, because we are just animals after all. It is time to admit yet again that we are not all that special. If we have minds, creatures with brains very like ours probably do too. Unravelling the mechanisms of these minds will be the great challenge for the coming decades.
 

eduardo_salcedo_albaran's picture
Philosopher; Director, Scientific Vortex, Inc.

It sounds logical to say that in order to understand crime, you must focus on criminals and felonies. But advances in social science give us reason to reconsider this idea.

Heroin is trafficked from Turkey across the Kapitan Andreevo security checkpoint in Bulgaria, to be sold in the richest countries of the European Union. More illegal drugs arrive in Europe from South America, via the countries of Eastern Africa. In South Africa, racketeers, private security firms, and arms dealers conduct businesses together, erasing the boundaries of legal and illegal financial procedures.

In Mexico, ferrous material, hydrocarbon condensate, and illegal drugs are trafficked and sold to both legal and illegal firms and individuals inside the United States. "Los Zetas" and other criminal networks operating in Central America also engage in human trafficking, kidnapping, and murdering of migrants before they cross the American border. Between 2006 and 2010, some of those criminal networks laundered $881 million through a single legal bank inside the United States. In fact, in 2012, the Criminal Division of the Department of Justice pointed out that the same bank "failed to monitor" $9.4 billion during that same period.

Those who have ever sent or received wire transfers, both inside and outside the United States, will find it difficult to understand how one of the most important banks worldwide could "fail to monitor" $9.4 billion.

In all of these cases, the participation of legitimate public servants and "legal" individuals and corporations is essential. In all of these cases, bankers, attorneys, police and border officials, flight controllers, mayors, governors, presidents, and politicians co-opt and are co-opted by crime. Sometimes they are the instruments, and sometimes they are the structural bridges connecting legality and illegality. They provide information, money, protection, knowledge, and social capital to criminal networks; a reason to define them as "unlawful" actors. However, they operate within legal agencies, which is a reason to define them as "lawful" actors. They seem to be both lawful and unlawful at the same time. They are what we call "gray" actors, located and operating on the boundaries of legality and illegality. They don't appear in the charts of the criminal organizations, although they provide relevant inputs for successful criminal operation.

Despite the significant role of these "gray" actors, social scientists interested in analyzing crime usually focus their attention only on criminal individuals and criminal actions. Those scientists usually study crime through qualitative and quantitative data that capture only those "dark" elements, while omitting the fact that transnational and domestic crime is carried out by various types of actors who don't interact solely through criminal actions. This is a hyper-simplified approach—a caricature—because those "dark" elements are only the tip of the iceberg of global crime.

This simplified approach also assumes that society is a digital and binary system in which the "good" and the "bad" guys—the "us" and "them"—are perfectly distinguishable. This distinction is useful in penal terms when simple algorithms—"if individual X executes the action Y, then X is criminal"—orient the decision of judges delivering final sentences. However, in sociological, anthropological, and psychological terms, this line is more difficult to define. If society is a digital system, it is certainly not a binary one.

This does not mean that crime is completely relative, or that we are all criminals because we are indirectly related to someone who committed a crime. This only means that defining and analyzing crime should not consist of a simple binary criterion such as belonging to a group or executing a single action. This criterion is useful when studying the boss of a criminal group, or the specific action of shooting a gun and committing murder. However, most of the time, affiliations and actions are complex and fuzzy. This vagueness of reality explains why we, as a society, rely on and trust the intuition of a judge, a person who, despite simple algorithms, considers various elements such as intentions, context, and effects when deciding a sentence. This is why we are not designing software for convicting criminals and assigning sentences—it's complicated.

Current tools for organizing, associating and visualizing large amounts of data are useful for understanding the complexity of crime. Explicative models built through social network analysis, or predictive models built through machine learning that integrate several variables, are examples of useful procedures. However, those procedures usually escape classic distinctions between "right" and "wrong" or the fragmentation of scientific bodies. Good and evil, right and wrong, legal and illegal—these are all context-driven.

Economists, psychologists, anthropologists and sociologists are often uncomfortable with the mixture of concepts required for analyzing complexity of behavior. Facing this complexity requires integrating categories from multiple scientific domains, moving quickly between macro and micro characteristics, and even adopting new models of causality. This sounds like an impossible enterprise inside traditional scientific spaces.

Social scientists have the moral commitment to use the most accurate tools of observation when analyzing data and phenomena, because their observations inform the design and enforcement of policies. If inaccurate tools are used, bad decisions are made, like a doctor diagnosing a tumor just by measuring body temperature. When the science we are studying is about understanding human trafficking, mass murders or terrorism, using the best tools and providing the best inputs mean preserving lives.

It is therefore time to retire the idea that understanding crime means understanding the minds and actions of criminals. We must also retire other naïve ideas, such as "organized crime," or the notion that any current State or government evolves without any criminal influence. These are nicely simplified concepts that work well in theoretical models, contained within the walls of classrooms and the pages of journals that manage to evade the complexity and vagueness of society. However, if we do not deal with the true complexity and vagueness of society using the diverse tools provided by science, we'll have to deal with it in the streets, the courtrooms, or when facing threats, like it or not.

ross_anderson's picture
Professor of Security Engineering at Cambridge University

Max Planck famously described the progress of quantum physics as being "one funeral at a time" as the old-school physicists died off and their jobs were taken by young men who followed the new quantum religion.

This brutal style of scientific revolution has left some rather rigid scar tissue. For many years it has been almost taboo to suggest that the questions at the foundations of quantum mechanics might actually have an answer. Yet new results in different areas of physics, chemistry and engineering are beginning to suggest that there might possibly be an answer after all.

At the Solvay Conference in 1927, Niels Bohr and Werner Heisenberg out-debated Albert Einstein and Louis de Broglie; they persuaded the world that we should just take the tools of the new quantum mechanics on trust rather than trying to derive them from underlying classical principles. This Copenhagen school of quantum mechanics, the "shut up and calculate" school, rapidly became the orthodoxy. It was reinforced when calculations by John Bell were experimentally verified by Alain Aspect in 1982 and appeared to show that reality at the quantum level could not be both local and causal.

While some philosophers of physics toyed with exotic interpretations of quantum mechanics, most physicists shrugged; they accepted that quantum foundations were a "certified insoluble" problem, and told their graduate students not to even think about wasting their lives on that. Others just loved the idea that physics proves the world is too complex to understand, and that the proof is beyond the comprehension of outsiders. Physicists could be the new high priests as the quantum became the core magic. Recently we've got quantum with everything, from cryptography to biology; the word has become a magic spell for fundraising. So long as no-one dared challenge this for fear of being thought a crank or dismissed as an outsider, we were stuck.

Things are starting to change. In physics, Yves Couder and Emmanuel Fort found that bouncing droplets on a bath of vibrating oil mimic many phenomena previously thought unique to the quantum world, including single-slit and double-slit refraction, tunnelling and quantised energy levels. In chemistry, Masanao Ozawa and Werner Hofer have shown that the uncertainty principle is only approximately true: modern scanning probe microscopes can often measure the position and momentum of atoms slightly more accurately than Heisenberg predicted – which should worry people who claim that quantum cryptography is "provably" secure! In computing, the promised quantum computers are still stuck at factoring 15, despite hundreds of millions in research funding over almost twenty years. And the physicist Theo van Nieuwenhuizen has pointed out a contextuality loophole in Bell's theorem that looks rather hard to fix.

There's a striking parallel with another big problem in science—consciousness. For years, the few first-division academics who dared tackle such problems tended to be near retirement and famous enough to shrug off disapproval; just as Dan Dennett and Nick Humphrey wrote on consciousness, Tony Leggett and Gerard 't Hooft wrote on quantum foundations. So the flame was kept alight. But it's time to bring some tinder. Viennese physicists have now organised two symposia on emergent quantum mechanics, as people finally dare to wrestle with what might be going on down there.

So the idea I'd like to retire is the idea that some questions are just too big for normal working scientists to tackle. Old-timers should not try to erect taboos around the problems that eluded us. We must cheerfully challenge the young: "prove us wrong!" As for young scientists, they should dare to dream, and to aim high.

ian_bogost's picture
Ivan Allen College Distinguished Chair in Media Studies and Professor of Interactive Computing, Georgia Institute of Technology; Founding Partner, Persuasive Games LLC; Contributing Editor, The Atlantic


“No topic is left unexplored,” reads the jacket blurb of The Science of Orgasm, a 2006 book by an endocrinologist, a neuroscientist, and a “sexologist.” A list of topics covered includes the genital-brain connection and how the brain produces orgasms. The result, promises the jacket blurb, “illuminates the hows, whats, and wherefores of orgasm.”

Its virtues or faults notwithstanding, The Science of Orgasm exemplifies a trend that has become nearly ubiquitous in popular discourse: that a topic can be best and most thoroughly understood from the vantage point of “science.” How common is this approach? Google Books produces nearly 150 million search results for the phrase “the science of”—including dozens of books with the quip in their titles. The science of smarter spending; the science of composting; the science of champagne; the science of fear; the science of acting; the list goes on.

“The science of X” is one example of the rhetoric of science—the idea that anything called “science” is science—but not the only one. There’s also “scientists have shown” or its commoner shorthand “studies show,” phrases that make appeals to the authority of science whether or not the conclusions they summarize bear any resemblance to the purported studies from which those conclusions were derived.

Both of these tendencies could rightly be accused of scientism, the view that empirical science entails the most complete, authoritative, and valid approach to answering questions about the world. Scientism isn't a new error, but it's an increasingly popular one. Recently, Stephen Hawking pronounced philosophy “dead” because it hasn’t kept up with advances in physics. Scientism assumes that the only productive way to understand the universe is through the pursuit of science, and that all other activities are lesser at best, pointless at worst.

And to be sure, the rhetoric of science has arisen partly thanks to scientism. “Science of X” books and research findings traceable to an origin in apparently scientific experimentation increasingly take the place of philosophical, interpretive, and reflective accounts of the meaning and importance of activities of all kinds. Instead of pondering the social practices of sparkling wine and its pleasures, we ponder what the size of its bubbles indicates about its quality, or why that effervescence lasts longer in a modern, fluted glass as opposed to a wider champagne coupe.

But the rhetoric of science doesn’t just risk the descent into scientism. It also gives science sole credit for something that it doesn’t deserve: an attention to the construction and operation of things. Most of the “science of X” books look at the material form of their subject, be it neurochemical, computational, or economic. But the practice of attending to the material realities of a subject has no necessary relationship to science at all. Literary scholars study the history of the book, including its material evolution from clay tablet to papyrus to codex. Artists rely on a deep understanding of the physical mediums of pigment, marble, or optics when they fashion creations. Chefs require a sophisticated grasp of the chemistry and biology of food in order to thrive in their craft. To think that science has a special relationship to observations about the material world isn’t just wrong, it’s insulting.

Beyond encouraging people to see science as the only direction for human knowledge and absconding with the subject of materiality, the rhetoric of science also does a disservice to science itself. It makes science look simple, easy, and fun, when science is mostly complex, difficult, and monotonous.

A case in point: the popular Facebook page “I f*cking love science” posts quick-take variations on the “science of x” theme, mostly images and short descriptions of unfamiliar creatures like the pink fairy armadillo, or illustrated birthday wishes to famous scientists like Stephen Hawking. But as the science fiction writer John Skylar rightly insisted in a fiery takedown of the practice last year, most people don’t f*cking love science, they f*cking love photography—pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done—slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.

The rhetoric of science has consequences. Fields that have no particular relation to scientific practice must increasingly frame their work in scientific terms to earn any attention or support. The sociology of Internet use suddenly transformed into “web science.” Long-accepted practices of statistical analysis have become “data science.” Thanks to shifting educational and research funding priorities, anything that can’t claim membership in a STEM (science, technology, engineering, and math) field will be left out in the cold. Unfortunately, the rhetoric of science offers the most tactical response to such new challenges. Unless humanists reframe their work as “literary science,” they risk getting marginalized, defunded, and forgotten.

When you’re selling ideas, you have to sell the ideas that will sell. But in a secular age in which the abstraction of “science” risks replacing all other abstractions, a watered-down, bland, homogeneous version of science is all that will remain if the rhetoric of science is allowed to prosper.

We need not choose between God and man, science and philosophy, interpretation and evidence. But ironically, in its quest to prove itself as the supreme form of secular knowledge, science has inadvertently elevated itself into a theology. Science is not a practice so much as it is an ideology. We don’t need to destroy science. But we do need to bring it down to earth again, and the first step in doing so is to abandon the rhetoric of science that has become its most popular devotional practice.

bart_kosko's picture
Information Scientist and Professor of Electrical Engineering and Law, University of Southern California; Author, Noise, Fuzzy Thinking

It is time for science to retire the fiction of statistical independence. 

The world is massively interconnected through causal chains. Gravity alone causally connects all objects with mass. The world is even more massively correlated with itself. It is a truism that statistical correlation does not imply causality. But it is a mathematical fact that statistical independence implies no correlation at all. None. Yet events routinely correlate with one another. The whole focus of most big-data algorithms is to uncover just such correlations in ever larger data sets. 

Statistical independence also underlies most modern statistical sampling techniques. It is often part of the very definition of a random sample. It underlies the old-school confidence intervals used in political polls and in some medical studies. It even underlies the distribution-free bootstraps or simulated data sets that increasingly replace those old-school techniques. 

White noise is what statistical independence should sound like. 

The hisses and pops and crackles of true white-noise samples are all statistically independent of one another. This holds no matter how close the noise samples are in time. That means the spectrum of white noise is flat across all frequencies. Such a process does not exist because it would require infinite energy. That has not stopped generations of scientists and engineers from assuming that white noise contaminates measured signals and communications. 

Real noise samples are not independent. They correlate to some degree. Even the thermal noise that bedevils electronic circuits and radar devices has an only approximately flat frequency spectrum, and then only over part of the spectrum. Real noise does not have a flat spectrum. Nor does it have infinite energy. So real noise is colored pink or brown or some other strained color metaphor that depends on how far the correlation reaches among the noise samples. Real noise is not and cannot be white.

A revealing problem is that there are few tests for statistical independence. Most tests tell at most whether two variables (not the data themselves) are independent. And most scientists would be hard pressed to name even those. 

So the overwhelmingly common practice is simply to assume that sampled events are independent. Just assume that the data are white. Just assume that the data are not only from the same probability distribution but that the data are statistically independent. An easy justification for this is that almost everyone else does it and it's in the textbooks. This assumption has to be one of the most widespread instances of groupthink in all of science.

The reason we so often assume statistical independence is not its real-world accuracy. We assume statistical independence because of its armchair appeal: It makes the math easy. It often makes the intractable tractable.  

Statistical independence splits compound probabilities into products of individual probabilities. (Then often a logarithm converts the probability product into a sum because it is easier still to work with sums than products). And it is far easier to lecture would-be gamblers that successive coin flips are independent than to conduct the fairly extensive experiments with conditional probabilities required to factually establish such a remarkable property. That holds because in general a compound or joint probability always splits into a product of conditional probabilities. The so-called multiplication rule guarantees this factorization. Independence further reduces the conditional probabilities to unconditional ones. Removing the conditioning removes the statistical dependency. 
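
To make the contrast concrete, here is a minimal numeric sketch in Python (my own illustration, with invented numbers, not Kosko's): the multiplication rule always factors a joint probability into a marginal times a conditional, and independence is the further claim that the conditional equals the marginal.

    # Multiplication rule vs. the independence shortcut (illustrative numbers only).
    p_a = 0.30              # P(A): say, rain today
    p_b_given_a = 0.80      # P(B | A): rain tomorrow given rain today
    p_b_given_not_a = 0.20  # P(B | not A)

    # The multiplication rule holds with no assumptions: P(A and B) = P(A) * P(B | A).
    p_a_and_b = p_a * p_b_given_a

    # Marginal P(B) by total probability.
    p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

    # Assuming independence replaces P(B | A) with P(B).
    p_a_and_b_if_independent = p_a * p_b

    print(p_a_and_b)                 # 0.24: the joint probability from the multiplication rule
    print(p_a_and_b_if_independent)  # about 0.11: what assuming independence would give instead

The two answers differ because these events were built to be correlated; only measuring the conditional probabilities can tell you which factorization is legitimate.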

Andrei Markov made the first great advance over independence or whiteness when he studied events that statistically depend on only the immediate past. That was over a century ago. 

We still wrestle with the math of such Markov chains and find surprises. The Google search algorithm rests in large part on finding the equilibrium eigenvector of a finite Markov chain. The search model assumes that Internet surfers jump at random from web page to web page much as a frog hops from lily pad to lily pad. The jumps and hops are not statistically independent. But they are probabilistic. The next web page you choose depends on the page you are now looking at. Real web surfing may well involve probabilistic dependencies that reach back to several visited web sites. It is a good bet that the human mind is not a Markov process. Yet relaxing independence to even one-step or two-step Markov dependency has proven a powerful way to model diverse streams of data from molecular diffusion to speech translation.
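
As a toy illustration of that equilibrium idea (my sketch, not Google's actual algorithm; the three pages and their hop probabilities are invented), repeated random hops on a small Markov chain settle into a stationary distribution, the eigenvector in question:

    # Power iteration on a tiny, made-up web of three pages.
    import numpy as np

    # Row-stochastic transition matrix: entry [i, j] is the probability
    # that a surfer on page i hops next to page j.
    P = np.array([
        [0.1, 0.6, 0.3],
        [0.4, 0.2, 0.4],
        [0.5, 0.3, 0.2],
    ])

    pi = np.array([1.0, 0.0, 0.0])  # start the surfer on page 0
    for _ in range(200):            # repeated hops converge to the equilibrium
        pi = pi @ P

    print(pi)           # long-run share of time spent on each page
    print(pi @ P - pi)  # essentially zero: pi is a fixed point of the chain

The hops here are probabilistic but not independent: where the surfer goes next depends on the page it is on now, which is exactly the one-step Markov dependency described above.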

It takes work to go beyond the simple Markov property where the future depends only on the present and not on the past. But we have ever more powerful computers that do just such work. And many more insights will surely come from the brains of motivated theoreticians. Giving up the crutch of statistical independence can only spur more such results.

Science needs to take seriously its favorite answer: It depends.

tom_griffiths's picture
Henry R. Luce Professor of Information Technology, Consciousness and Culture, Director of the Computational Cognitive Science Lab, Princeton University; Co-author (with Brian Christian), Algorithms to Live By

Being biased seems like a bad thing. Intuitively, rationality and objectivity are equated—when faced with a difficult question, it seems like a rational agent shouldn't have a predisposition to favor one answer over another. If a new algorithm designed to find objects in images or interpret natural language is described as being biased, it sounds like a poor algorithm. And when psychology experiments show that people are systematically biased in the judgments they form and the decisions they make, we begin to question human rationality.

But bias isn't always bad. In fact, for certain kinds of questions, the only way to produce better answers is to be biased.

Many of the most challenging problems that humans solve are known as inductive problems—problems where the right answer cannot be definitively identified based on the available evidence. Finding objects in images and interpreting natural language are two classic examples. An image is just a two-dimensional array of pixels—a set of numbers indicating whether locations are light or dark, green or blue. An object is a three-dimensional form, and many different combinations of three-dimensional forms can result in the same pattern of numbers in a set of pixels. Seeing a particular pattern of numbers doesn't tell us which of these possible three-dimensional forms are present: we have to weigh the available evidence and make a guess. Likewise, extracting the words from the raw sound pattern of human speech requires making an informed guess about the particular sentence a person might have uttered.

The only way to solve inductive problems well is to be biased. Because the available evidence isn't enough to determine the right answer, you need to have predispositions that are independent of that evidence. And how well you solve the problem—how often your guesses are correct—depends on having biases that reflect how likely different answers are.
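
One way to see this, framed in my own (Bayesian-flavored) terms rather than the author's, is with a toy simulation: when the evidence fits two hypotheses equally well, an unbiased guesser can do no better than chance, while a guesser whose predisposition matches how often each answer actually occurs is right far more often.

    # Toy inductive problem: the evidence cannot distinguish hypotheses A and B,
    # but in this invented world A is truly far more common than B.
    import random

    random.seed(0)
    P_COMMON = 0.9  # assumed base rate of hypothesis A in this toy world
    trials = 10_000

    unbiased_correct = 0
    biased_correct = 0
    for _ in range(trials):
        truth = "A" if random.random() < P_COMMON else "B"
        unbiased_guess = random.choice(["A", "B"])  # no predisposition at all
        biased_guess = "A"                          # predisposed toward the likelier answer
        unbiased_correct += (unbiased_guess == truth)
        biased_correct += (biased_guess == truth)

    print(unbiased_correct / trials)  # about 0.5
    print(biased_correct / trials)    # about 0.9

The biased guesser wins not because it has better evidence but because its bias encodes how likely the different answers are.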

Human beings are very good at solving inductive problems. Finding objects in images and interpreting natural language are two problems that people still solve better than computers. And the reason is that human minds have biases that are finely tuned for solving these problems.

The biases of the human visual system are apparent in many visual illusions—images that result in a surprising discrepancy between our biased guesses and what's actually in the world. The rarity of visual illusions in real life is testimony to the utility of those biases. By studying the kinds of illusions the human visual system is susceptible to, we can identify the biases that guide perception and instantiate those biases in algorithms used by computers.

Human biases in interpreting language are demonstrated in the game of Telephone, or when we misinterpret the lyrics of a song. It's also easy to discover the biases that have been built into speech recognition software. I once left my office for a meeting, locking the door behind me, and came back to find a stranger had broken in and typed a series of poetic sentences into my computer. Who was this person, and what did the message mean? After a few spooky, puzzling minutes, I realized that I had left my speech recognition software running, and the sentences were the guesses it had produced about what the rustling of the trees outside my window meant. But the fact that they were fairly intelligible English sentences reflected the biases of the software, which didn't even consider the possibility that it was listening to the wind rather than a person.

Things that people do well—vision and language—depend heavily on being biased towards particular answers. Algorithms that solve those problems well have similar biases. So we shouldn't be surprised to discover that people are systematically biased in other domains. These biases don't necessarily reflect a deviation from rationality—they reflect the difficulty of the problems that humans need to solve. And one way to make computers better at solving these problems is understanding exactly what human biases are like for different problems.

In arguing that bias isn't always bad, I'm not claiming that it is always good. Objectivity can be an ideal that we strive for on moral grounds—say, when assessing other people. The more information and time we have available, the closer we can get to this ideal. But this kind of objectivity is a luxury, at odds with reaching the right answers in limited time from small amounts of evidence. When solving inductive problems, it can be rational to be biased.

sarah_demers's picture
Horace D. Taft Associate Professor of Physics, Yale University

The standard model of particle physics has aesthetic shortcomings that leave us with questions: Why so many free parameters? Why not an elegant, single fundamental force to account for all forces? Why three generations of quarks and leptons? Now that we have a mechanism for how fundamental particles acquire mass, why do they have those particular couplings to the Higgs field, covering such a huge range of masses? Why the even more extreme range of strengths of the fundamental forces? The common potential danger with each of these questions is the answer, "That's just the way it is."

In addition to these aesthetic concerns we have contradictions between observation and prediction in the explored universe: We have not found a source of energy to fuel our accelerating expansion. There is insufficient baryonic matter to explain astronomical observations. Sticking with the topic of matter, we thankfully live in a large pocket of it that shouldn't have survived annihilation. In fact, we see matter dominance everywhere we look and have no sufficient source of matter vs. anti-matter asymmetry to account for this. We may never access solutions to this set of problems, but it is clear that accounting for each of them requires at least a tweak and at best a fundamental re-write of existing models. Their issues go beyond inelegance.

Experimentalists, myself included, have been chasing aesthetically motivated, or partially aesthetically motivated, theories through the data. With a few years of running the Large Hadron Collider at the energy frontier and a host of careful measurements in particle, nuclear and atomic physics carried out all over the globe, large regions of "new physics" parameter space have been recently excluded. Theorists have answered with pivots and extensions, adapting their proposed models in ways that push us to more challenging experimental conditions.

This exchange has felt healthy and has definitely been fun. The close interactions have allowed for fast progress testing new ideas. Even though these searches for non-standard model physics have resulted in new limits rather than discovery, it has been thrilling to make measurements that might provide evidence toward a grand unified theory. However, our current era of scarce resources requires tighter thinking. I think it's time to more carefully scrutinize our theoretical foundations.

Of course, including aesthetic considerations in the scientific toolbox has resulted in huge leaps forward. The drive for elegance has repeatedly enabled scientists to uncover underlying structure. The permission to consider aesthetics is part of what drew many of us to becoming scientists in the first place. I'm not arguing to abandon it forever. But we are currently in a data-rich period in particle physics after years (at least at the energy frontier) of being data-poor. Ensuring that data get the final say is more fundamental than anything else in the practice of science and the data we have in-hand have the potential to say a lot about the standard model. There is even more on the line when we consider which experiments to pursue next.

At this stage, with 96% of the universe's content in the dark, it is a mistake for us to put aesthetic concerns in the same realm as contradictions when it comes to theoretical motivation. With no explanation for dark energy, no confirmed detection of dark matter and no sufficient mechanism for matter/anti-matter asymmetry, we have too many gaps to worry about elegance. Theorists will keep pushing on grand unified theories, including developing the mathematics that will enable further progress. Experimentalists have an opportunity and responsibility to provide direction through agnostic hunts for discrepancies between our data and standard model predictions. This includes, of course, measuring the hell out of the newly discovered Higgs Boson.

It is time for us to admit that some of the models we have been chasing from our brilliant theory colleagues might actually be (gorgeous) Hail Mary passes to the universe. Our next significant jump of understanding will likely come because we are forced there by painstakingly determined constraints from the data rather than by a lucky good catch.

sarah_jayne_blakemore's picture
Psychology Professor, University of Cambridge; Author, Inventing Ourselves

Most people will have heard about the left-brain/right-brain idea. Maybe they have been told they're too 'left-brained' or want to be more 'right-brained'. The idea has made it into everyday parlance, has infiltrated schools everywhere, sells a lot of self-help books, and has even been used as the basis of scientific theories, for example with regards to gender differences in the brain. Yet it is an idea that makes no physiological sense.

Scientific lingo about how the two sides of the brain—the hemispheres—function has permeated mainstream culture, but the research is often wildly over-interpreted. The notion that the two hemispheres of the brain are involved in different 'modes of thinking' and that one hemisphere dominates over the other has become widespread, in particular in schools and the workplace. There are numerous websites where you can find out whether you are left-brained or right-brained and that offer to teach you how to change this.

This is pseudo-science and is not based on knowledge of how the brain works. While it is true that the brain is made up of two hemispheres and one hemisphere is often initially active before the other during actions, speech and perception, both sides of the brain work together in almost all situations, tasks and processes. The hemispheres are in constant communication with each other and it simply is not possible for one hemisphere to function without the other hemisphere 'joining in', except in certain rare patient populations. In other words, you are not right or left-brained. You use both sides of the brain.

Some people have proposed that education currently favours left-brain modes of thinking, which are supposed to be logical, analytical and accurate, while not putting enough emphasis on right-brain modes of thinking, which are supposed to be creative, intuitive, emotional and subjective. Certainly education should involve a wide variety of tasks, skills, learning and modes of thinking. However, it is just a metaphor to refer to these as right-brain or left-brain modes. Patients who have had a lesion in their right hemisphere are not devoid of creativity. Patients with a damaged left hemisphere might be unable to produce language (which relies on the left hemisphere in over 90% of the population) but can still be analytical.

Whether left-brain/right-brain notions should influence the way people are educated is highly questionable. There is no validity in categorizing people in terms of their abilities as either a left-brain or a right-brain person. In terms of education, such categorization might even act as an impediment to learning, not least because it might be interpreted as being innate or fixed to a large degree. Yes, there are large individual differences in cognitive strengths. But the idea that people are left-brained or right-brained needs to be retired.

victoria_wyatt's picture
Associate Professor of History in Art, University of Victoria

It's time for "The Rocket Scientist" to retire.

This is "The Rocket Scientist" of cliche fame: "It doesn't take a Rocket Scientist to know...." 

"The Rocket Scientist" is a personage rather than a principle, and a fictitious personage at that. He (or she) was constructed by popular usage, not by scientists. Still, the cliche perpetuates outdated public perceptions of scientific principles, and that's critical. "The Rocket Scientist" needs a good retirement party.

I'll start with a disclaimer. My dreams of that retirement gala may appear tinged with professional envy. I have never heard anyone say, "It doesn't take an Ethnohistorian to know...."  I never will. So yes, the cliche does slight the humanities—but that's not my concern. Rather, "The Rocket Scientist," as popularly conceived, dangerously slights the sciences. Our earth cannot afford that.

"The Rocket Scientist" stands outside society, frozen on a higher plane. Widely embraced and often repeated, the cliche reflects a general public's comfort with divorcing science from personal experience. The cliche imposes a boundary (of brilliance) between the scientist and everyone else.

This makes for popular movies and television shows, but it's insidious. Artificially constructed boundaries isolate. They focus attention on differences and distinctions. In contrast, it's the exploration of relationships and process that feeds rapid scientific developments today: systems biology, epigenetics, neurology and brain research, astronomy, medicine, quantum physics. Complex relationships also shape the urgent challenges identified in this year's Edge question. Global epidemics, climate change, species extinction, finite resources—these all comprise integral interconnections.

Approaching such problems demands an appreciation of diversity, complexity, relationships and process. Popular understanding of contemporary science demands the same. We can only address urgent global issues when policy-makers see science clearly—when they view diversity, complexity, relationships and process as essential to understanding, rather than as obstacles to it.

At present, though, constructed boundaries pervade our institutions and policy structures, not only our cliches. Examples abound. Universities segment researchers and students into disciplinary compartments with discrete budgetary line items, competing for scarce resources. ("Interdisciplinary" makes a good buzzword, but the paradigm on which our institutions rest militates against it.) The model of nations negotiating as autonomous entities has failed abysmally to address climate change. In my provincial government’s bureaucracy, separate divisions oversee oceans and forests, as if a fatal barrier slices the ecosystem at the tideline.

Time suffers, too. Past gets alienated from present, and present from future, as our society zooms in on short term fiscal and political deadlines. Fragmented time informs all other challenges, and makes them all the more dire.

So much of our society still operates on a paradigm of simplification, compartmentalization and boundaries, when we need a paradigm of diversity, complexity, relationships and process. Our societal structures fundamentally conflict with the messages of contemporary science. How can policy-makers address crucial global issues while ignoring contemporary scientific principles?

The real world plays out as a video. The relationships between frames make the story comprehensible. In contrast, "The Rocket Scientist" stands like a snapshot, fictitiously yet firmly alone on a lofty pinnacle: apart from society, not a part of it. Yes, it's just a cliche, but language matters, and jokes instruct. It's time for "The Rocket Scientist" to retire.

I'll close with another disclaimer. I mean no offense to real rocket scientists. (Some of my best friends have been rocket scientists!) Real rocket scientists exist. They inhabit the real world, with all the attending interconnections, relationships and complexities. "The Rocket Scientist" embodies the opposite. We'll all be well served by that retirement.
 

ernst_pöppel's picture
Head of Research Group Systems, Neuroscience and Cognitive Research, Ludwig-Maximilians-University Munich, Germany; Guest Professor, Peking University, China

Philosophiae Naturalis Principia Mathematica by Isaac Newton is one of the fundamental works of modern science, and this is true not only for physics but also for philosophy and the foundations of reasoning. In the "Scholium," Newton gives the following definition: "Absolute, true, and mathematical time, of itself, and from its own nature, flows equably without relation to anything external." The underlying concept of the continuity of time is expressed in the mathematical formulae describing, for instance, physical processes. This concept of continuity is almost never questioned.

The Newtonian concept of continuity of time is also implicitly assumed by Immanuel Kant, when he refers to time as an "a priori form of perception" in his Critique of Pure Reason. We read in the translation: "Time is not an empirical conception. For neither coexistence nor succession would be perceived by us, if the representation of time did not exist as a foundation à priori... Time is a necessary representation, lying at the foundation of all our intuitions."  

The concept of continuity of time is also hidden in another famous quotation in psychology; William James writes in Principles of Psychology when he refers to the present: "In short, the practically cognized present is no knife-edge, but a saddle-back, with a certain breadth of its own on which we sit perched, and from which we look in two directions into time. The unit of composition of our perception of time is a duration, with a bow and a stern, as it were—a rearward- and a forward-looking end...We seem to feel the interval of time as a whole, with its two ends embedded in it." Here we are confronted with the idea of a traveling moment, i.e., a temporal interval of finite duration moving gradually through physical time (and not jumping), again assuming continuity of time. But is this really true, and can it be used to understand neural and cognitive processes?

This theoretical concept of continuity of time in biological and psychological processes—usually appearing as an implicit assumption or as an "unasked question"—is wrong. The answer is very simple if one takes a look at the way organisms process information to overcome the complexity and temporal uncertainty of stimuli in the physical world. One source of complexity comes from stimulus transduction, which is principally different in the sensory modalities like audition or vision, taking less than one millisecond in the auditory system and more than twenty milliseconds in the visual system. Thus, auditory and visual signals arrive at different times in central structures of the brain.

Matters become more complicated by the fact that the transduction time in the visual modality is flux-dependent, since surfaces with less flux require more transduction time at the receptor surface. Thus, to see an object with areas of different brightness, or to see somebody talking, the brain must overcome the different temporal availability of local activities within the visual modality, and similarly of activities across the two modalities engaged in stimulus processing. For intersensory integration, aside from these biophysical problems, physical problems also have to be considered. The distance of objects to be perceived is obviously never pre-determined. Thus, the speed of sound (not of light) becomes a critical factor.

At a distance of approximately ten to twelve meters, transduction time in the retina under optimal optical conditions corresponds to the time the sound takes to arrive at the recipient. Up to this "horizon of simultaneity," auditory information is earlier than visual information; beyond this horizon, visual information arrives earlier in the brain. Again, there must be some kind of mechanism that overcomes the temporal uncertainty of information represented in the two sensory modalities. How can this problem be solved? The brain's best solution is to step out of the mode of continuous information processing.
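
A back-of-the-envelope check of that horizon, using round numbers I am assuming rather than figures given in the essay (roughly 340 meters per second for the speed of sound and a visual transduction latency of about 30 milliseconds):

    # Distance at which sound's travel time equals the visual transduction delay.
    speed_of_sound_m_per_s = 340.0   # assumed value for air at room temperature
    visual_latency_s = 0.030         # assumed retinal transduction delay (~30 ms)

    horizon_m = speed_of_sound_m_per_s * visual_latency_s
    print(horizon_m)  # about 10 m: nearer than this, hearing leads; farther, vision leads

Under these assumed values, the crossover falls right around the ten to twelve meters cited above.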

The brain has indeed developed specific mechanisms to reduce complexity and temporal uncertainty by creating system states (possibly using neuronal oscillations) within which "Newtonian time" does not exist. Within such system states, temporally and spatially distributed information can be integrated, as experimental evidence shows. These states are "atemporal" because the before-and-after relationship of stimuli processed within such states is not defined or definable. This biological trick implies that time does not flow continuously but jumps from one atemporal system state to the next.
 

eldar_shafir's picture
William Stewart Tod Professor of Psychology and Public Affairs Ph.D., Princeton University; Co-author, Scarcity

British chef Heston Blumenthal's imaginative "Hot and Iced Tea" is a syrupy concoction prepared by putting a divider down the middle of a glass, then filling one side with hot tea and the other with an iced version. Because of the viscous consistency of the liquid, when the divider is removed the two halves stay separate long enough for a lucky diner to sample a perfectly, and simultaneously, hot and iced tea. When you sip Blumenthal's tea, it makes no sense to argue about whether it's really cold or really hot. You could, of course, take care to sip only from the cold side, or only from the hot. But the cup of tea is really both.

I think much of the world, the sciences, certainly the social and behavioral sciences, look more like that cup of tea than we often let on.

We typically assume, for example, that happiness and sadness are polar opposites and, thus, mutually exclusive. But recent research on emotion suggests that positive and negative affects should not be thought of as existing on opposite sides of a continuum, and that, in fact, feelings of happiness and sadness can co-occur. When participants are surveyed immediately after watching certain films, or graduating from college, they are found to feel both profoundly happy and sad. Our emotional experience, it turns out, is a lot like a viscous cup of tea: It can run hot and cold at the same time.

The same can be true of good and evil. Like sipping from the hot or the cold side of the cup, we now know that minor contextual nuance can make all the difference. In one classic study, psychologists Darley and Batson recruited seminary students to deliver a sermon on the parable of the Good Samaritan. While half the seminarians were told they were comfortably ahead of schedule, the others were led to believe they were running late. On their way to give the talk, all participants encountered an ostensibly injured man slumped in a doorway, groaning and needing help. Whereas the majority of those with time to spare stopped to help, a mere 10% of those who were running late stopped, the rest stepping over the victim and rushing along. Notwithstanding their ethical training and biblical scholarship, the minor nuance of a time constraint proved critical to the seminarians' decision to ignore the pleas of a suffering man. Like a high-concept cup of tea, both hot and iced, each of these men was both caring and indifferent, displaying one trait or the other depending on arbitrary twists of fate.

Or consider John Rabe, the bald and bespectacled German engineer, known as "the living Buddha of Nanking." Rabe was the legendary head of the International Safety Zone, who was credited with having saved hundreds of thousands of Chinese lives during a savage Japanese occupation. On the other side of the cup, Rabe was simultaneously the leader of the Nazi party in the same city. In 1938 he assured audiences that he supported the German political system "100 percent."

In its essence, this sort of anti-Manichaean perspective posits that it is rarely the case that only one alternative obtains. If you believe people are only ever good, or only ever evil, if you think the cup is only ever hot, or only ever cold, well then you're just wrong: you haven't felt the cup, and you have a terribly naïve understanding of nature. But as long as your views are not that extreme, as long as you recognize the possibility of both cold and hot, then in many cases you needn't choose; it turns out they're both there.

From the little I understand, physicists question the classical distinction between wave and matter, and biologists refuse to choose between nature and nurture. But let me stay close to what I know best. In the social sciences, there is ongoing, and often quite heated, debate about whether or not people are rational, and about whether they're selfish. And there are compelling studies in support of either camp, the hot and the iced. People can be cold, precise, selfish and calculating. Or they can be hot-headed, confused, altruistic, emotional and biased. In fact, they can be a little of both; they can exhibit these conflicting traits at the very same time. People can be perfectly calibrated weather forecasters but hopelessly overconfident investors; ruthless rulers and cuddly pet owners; compassionate friends and apathetic parents. Research on decisions made in demanding contexts has found that people can be thoughtful and calculating as they focus on issues of immediate concern, but negligent and misguided when it comes to issues—sometimes very closely related and equally or more important—just at the periphery of their attention.

As we all know, history is filled with very smart people who did really stupid things, and with good people who acted horribly. Are we altruistic or selfish? Smart or stupid? Good or evil? Like that hot and iced tea, there is always a little of both—it just depends on which side you drink from.

gavin_schmidt's picture
Climatologist; Director, NASA's Goddard Institute for Space Studies

More precisely, the notion that there are simple answers to complex problems. The universe is complicated. Whether you are interested in the functioning of a cell, the ecosystem in Amazonia, the climate of the Earth or the solar dynamo, almost all of the systems and their impacts on our lives are complex and multi-faceted. It is natural for us to ask simple questions about these systems, and many of our greatest insights have come from the profound examination of such simple questions. However, the answers that have come back are never as simple. The answer in the real world is never "42".

Yet collectively we keep acting as though there are simple answers. We continually read about the search for the one method that will allow us to cut through the confusion, the one piece of data that tells us the 'truth', or the final experiment that will 'prove' the hypothesis. But almost all scientists will agree that these are fool's errands—that science is a method for producing incrementally more useful approximations to reality, not a path to absolute truth.

In contrast, our public discourse is dominated by voices who equate clarity with seeing things as either good or bad, day or night, black or white. They are not simply ignoring the shades of gray, but are missing out on the whole wonderful multi-hued spectrum. By demanding simple answers to complex questions we rob the questions of the qualities that make them interesting, reducing them to cliched props for other agendas.

Scientists sometimes play into this limiting frame when we craft our press releases or pitch our popular science books, and in truth it is hard to avoid. But we should be more vigilant. The world is complex, and we need to embrace that complexity to have any hope of finding any kind of robust answers to the simple questions that we, inevitably, will continue to ask.

bruce_hood's picture
Chair of Developmental Psychology in Society, University of Bristol; Author, The Self-Illusion, Founder of Speakezee

It seems almost redundant to call for the retirement of the free willing self, as the idea is not scientific, nor is this the first time that the concept has been dismissed for lacking empirical support. The self did not have to be discovered as it is the default assumption that most of us experience, so it was not really revealed by methods of scientific enquiry. Challenging the notion of a self is also not new. Freud's unconscious ego has been dismissed for lacking empirical support since the cognitive revolution of the 1950s.

Yet, the self, like a conceptual zombie, refuses to die. It crops up again and again in recent theories of decision-making as an entity with free will that can be depleted. It re-appears as an interpreter in cognitive neuroscience as capable of integrating parallel streams of information arising from separable neural substrates. Even if these appearances of the self are understood to be convenient ways of discussing the emergent output of multiple parallel processes, students of the mind continue to implicitly endorse that there is a decision-maker, an experiencer, a point of origin.

We know that the self is constructed because it can be so easily deconstructed through damage, disease and drugs. It must be an emergent property of a parallel system processing input, output and internal representations. It is an illusion because it feels so real, but that experience is not what it seems. The same is true for free will. Although we can experience the mental anguish of making a decision, our free will cannot be some kind of King Solomon in our mind weighing up the pros and cons, as this would present the problem of logical infinite regress (who is inside that head, and so on?). The choices and decisions we make are based on situations that impose themselves on us. We do not have the free will to choose the experiences that have shaped our decisions.

Should we really care about the self? After all, trying to live without the self is challenging and not how we think. By experiencing, evoking and talking about the self, we are conveniently addressing a phenomenology that we can all relate to. Defaulting to the self in explanations of human behavior enables us to draw an abrupt stop in the chain of causality when trying to understand thoughts and actions. How notable that we do this all so easily when talking about humans, but as soon as we apply the same approach to animals, we get accused of anthropomorphism!

By abandoning the free willing self, we are forced to re-examine the factors that are really behind our thoughts and behavior and the way they interact, balance, over-ride and cancel out. Only then will we begin to make progress in understanding how we really operate.

stephen_j_stich's picture
Board of Governors Professor, Department of Philosophy, Rutgers University

There is a strategy for defending philosophical views that has been around since antiquity. It's used to support rules for reasoning (in science and elsewhere) and moral principles, and to defend accounts of phenomena, like knowledge, causation and meaning. Recent findings have made it increasingly clear that, after 2500 years, it's a strategy ready for retirement.

Here's how it works. A case, sometimes real, often imaginary, is described, and the philosopher asks: What would we say about that case? Does the protagonist in the story really have knowledge? Is the behavior of the protagonist morally permissible? Did the first event cause the second? When things go well, the philosopher and his audience will make the same spontaneous judgment about the case.

Contemporary philosophers call those judgments "intuitions." And in philosophical theorizing, our intuitions are an important source of evidence. If a philosopher's theory comports with our intuition, the theory is supported; if the theory entails the opposite judgment, the theory is challenged. If you have ever taken a philosophy course, you'll likely find this method very familiar. But it's not just a method that philosophers use in the classroom. At a recent colloquium in my department, I sat in the back and counted the appeals to our intuition made by a rising star in the philosophical profession during a 55-minute talk. There were 26—roughly one every two minutes.

That's a lot of intuition mongering, though it is hardly unusual in contemporary philosophy. Another thing about the talk that was not at all unusual was that the speaker never once told us who "we" are. When a philosopher makes claims about "our" intuitions about knowledge or causation or moral permissibility, whose intuitions is he talking about? Until very recently, philosophers have almost never confronted that question. But if they had, their answer would likely have been very inclusive. The intuitions we use as evidence in philosophy are the intuitions that all rational people would have, provided they are paying attention and have a clear understanding of the case that evokes the intuition. According to contemporary defenders of this methodology, intuitions are rather like perceptions. They are shared by just about everyone.

Some of us have long thought that there was room for a fair amount of skepticism here. How could philosophers, seated comfortably in their armchairs, be so confident that all rational people share their intuitions? This skepticism was reinforced with the emergence of cultural psychology over the last three decades. Culture, it turns out, runs deep, and it affects a wide array of psychological processes, ranging from reasoning to memory to perception.

Moreover, in an important article, Henrich, Heine and Norenzayan have made a persuasive case that WEIRD people (people in cultures that are Western, Educated, Industrialized, Rich and Democratic) are outliers on a wide range of psychological tasks. WEIRD people, they argue, are "the weirdest people in the world." And philosophers are overwhelmingly WEIRD. They are also overwhelmingly white, predominantly male, and have all survived years of undergraduate and graduate training in settings where people who don't share the professionally favored intuitions are sometimes at a considerable disadvantage. Could it be that these factors, singly or in combination, explain the fact that professional philosophers, and their successful students, share lots of intuitions?

About a decade ago, this question led a group of philosophers, along with sympathetic colleagues in psychology and anthropology, to stop assuming that their intuitions were widely shared and design studies to see if they really are. In study after study, it turned out that philosophical intuitions do indeed vary with culture and other demographic variables. A great deal more work will be needed before we have definitive answers about which philosophical intuitions vary, and which, if any, are universal.

There are lots of important intuitions to look at, lots of cultural and demographic groups to consider, and lots of methodological pitfalls to discover and avoid. But, not surprisingly, the early efforts of these "experimental philosophers" have not been warmly welcomed by philosophers deeply invested in the traditional intuition-based method. One leading philosopher proclaimed that experimental philosophers "hate philosophy." He and others have also staked out a fallback position which insists that it doesn't much matter what we discover about the intuitions of ordinary people, or of people in other cultures, because professional philosophers are the experts in making judgments about knowledge, morality, causation and the rest, so only their intuitions are to be taken seriously. 

It will be a long time before the dust settles in this dispute. But one conclusion on which perhaps most of those involved can agree is that it's time to stop talking about "our" intuitions without bothering to say who "we" are.   
 

david_m_buss's picture
Professor of Psychology, University of Texas, Austin; Author, When Men Behave Badly

For most of the past century, mainstream social scientists have assumed that attractiveness is superficial, arbitrary, and infinitely variable across cultures. Many still cling to these views. Their appeal has many motivations. First, beauty is undemocratically distributed, a violation of the belief that we are all created equal. Second, if physical desirability is superficial ("you can't judge a book by its cover"), its importance can be denigrated and dismissed, taking a back seat to deeper and more meaningful qualities. Third, if standards of beauty are arbitrary and infinitely variable, they can be easily changed.

Two movements in the 20th century seemed to lend scientific support to these views. The first was behaviorism. If the content of human character was built through experienced contingencies of reinforcement during development, those contingencies must have created standards of attractiveness. The second was a series of seemingly astonishing ethnographic discoveries of cross-cultural variability in attractiveness. If the Maori in New Zealand found particular types of lip tattoos attractive and the Yanomamo of the Amazon rain forest prized nose or cheek piercings, then surely all other beauty standards must be similarly arbitrary.

The resurgence of sexual selection theory in evolutionary biology, and specifically the importance of preferential mate choice, created powerful reasons to question the theoretical position long held by social scientists. We now know that in species with preferential mate choice, from scorpionflies to peacocks to elephant seals, physical appearance typically matters greatly. It conveys critical reproductively valuable qualities such as health, fertility, dominance, and 'good genes.' Are humans a bizarre exception to all other sexually reproducing species?

Evolutionary theorizing, long antedating the hundreds of empirical studies on the topic, suggested that we were not. In mate selection, Job One, as someone in business might say, is the successful selection of a fertile partner. Those who failed to find fertile mates left no descendants. Everyone alive today is the product of a long and literally unbroken line of ancestors who succeeded. If any had failed at the critical task, we would not be here today. As evolutionary success stories, each modern human has inherited the mate preferences of their successful ancestors.

Cues recurrently observable to our ancestors that were reliably, statistically, probabilistically correlated with fertility, according to this theory, should become part of our evolved standards of beauty. In both genders, these include cues to health—symmetrical features and absence of sores and lesions, for example. Since fertility is sharply age-graded in women, more so than in men, cues to youth should figure prominently in gender-specific standards of attractiveness. Clear skin, full lips, an unclouded sclera, feminine estrogen-dependent features, a low waist-to-hip ratio, and many other cues to female fertility are now known to be pieces of the puzzle of universal standards of female beauty.

Women's evolved standards of male attractiveness are more complex. Masculine features, hypothesized to signal healthy immune functioning in men, are viewed as attractive more by women seeking short-term than long-term mates, more when women are ovulating than when in the luteal phase of their menstrual cycle, and more by women higher in mate value, perhaps because of their ability to attract and control such men. Women's judgments of men's attractiveness are more dependent on multiple contexts—cues to social status, the attention structure, positive interactions with babies, being seen with attractive women, and many others. The greater complexity and variability of what women find attractive in men is reflected in another key empirical finding—there is far less consensus among women about which men are attractive than among men about which women are attractive.

The theory that 'beauty is in the eyes of the beholder' in the sense of being superficial, arbitrary, and infinitely culturally variable can safely be discarded. I regard it as one of the 'great myths' perpetrated by social scientists in the 20th century. Its scientific replacement—that beauty is 'in the adaptations of the beholder' as anthropologist Donald Symons phrases it—continues to be disturbing to some. It violates some of our most cherished beliefs and values. But then so did the notion that the earth was not flat or the center of the universe.

laurence_c_smith's picture
Professor of Environmental Studies, Brown University; Author, Rivers of Power

Stationarity—the assumption that natural-world phenomena fluctuate within a fixed envelope of statistical uncertainty that doesn't change over time—is a widely applied scientific concept that is ready to be retired.

It had a good run. For more than a century, stationarity has been used to inform countless decisions aimed at the public good. It guides the planning and building codes for places susceptible to wildfires, floods, earthquakes, and hurricanes. It is used to determine how and where homes may be built, the structural strength of bridges, and how much premium people should pay for their homeowner's insurance policies. Crop yields are forecasted and, in the developed world, insured against catastrophic failures. And as more weather stations and river level gages are built and accumulate ever-longer data records, our abilities to make such calculations get better. This saves lives and a great deal of money.

But a growing body of research shows that stationarity is often the exception, not the norm. As new satellite technologies scan the earth, more geological records are drilled, and the instrument records lengthen, they commonly reveal patterns and structures quite inconsistent with a fixed envelope of random noise. Instead, there are transitions to different quasi-stable states, each characterized by a different set of physical conditions and associated statistical properties. In climate science, for example, we have discovered multi-decadal patterns like the Pacific Decadal Oscillation (PDO), an El Niño-like phenomenon in the north Pacific that triggers shifts in climate averages that persist for decades (for example, during the 20th century the PDO experienced a "warm" phase from 1922-1946 and 1977-1998 and a "cool" phase from 1947-1976), with far-reaching impacts on water resources and fisheries. And anthropogenic climate change, induced by our steady ramping up of greenhouse gas concentrations in the atmosphere, is by its very definition the opposite of a fixed, stationary process. This imperils the basis of many societal risk calculations because as the statistical probabilities of the past break down, we enter a world that operates outside of expected and understood norms.
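
A hedged illustration of the point with synthetic numbers (my own toy example, not real climate or streamflow records): a record that shifts between two quasi-stable regimes defeats a risk threshold calibrated on the earlier regime alone.

    # Synthetic two-regime "record": statistics fit to the first regime mislead in the second.
    import numpy as np

    rng = np.random.default_rng(0)
    cool = rng.normal(loc=0.0, scale=1.0, size=30)  # 30 "years" in a cool regime
    warm = rng.normal(loc=1.5, scale=1.0, size=30)  # 30 "years" in a warmer regime

    # An "extreme event" threshold estimated as if the cool years were the whole story.
    threshold = cool.mean() + 2 * cool.std()

    print(np.mean(cool > threshold))  # rare, as designed, in the cool regime
    print(np.mean(warm > threshold))  # far more frequent once the regime shifts

Nothing in the cool-regime sample warns that the envelope itself is about to move, which is exactly why stationarity-based planning breaks down.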

This recognition is not new among scientists, but has been surprisingly slow to penetrate into the practical world. For example, even as awareness and acceptance of climate change has grown, stationarity continues to serve as a central, default assumption in water-resource risk assessment and planning. Floodplain zoning continues to be designed around stationary concepts like the 100- and 500-year flood, despite known impacts of land use conversion and urbanization on water runoff and the anticipated impacts of anthropogenic climate change. The civil engineering profession and most regulatory agencies around the world have been slow to acknowledge these changes and seek new approaches to address them. But viable alternatives exist, for example using the precautionary, no-regrets "probable maximum flood" (PMF) method to design dams and bridges, and incorporation of more flexible "subjectivist Bayesian" probabilities in societal risk calculations.

We can do better. Stationarity is dead, especially for our understanding of the world's water, food security, and climate. 

nigel_goldenfeld's picture
Physicist, University of Illinois at Urbana-Champaign

In physics, we use the convention that the suffix "-on" denotes a quantized unit of something. For example, in classical physics, there are electromagnetic waves. But in the quantum version of the theory, originating with Einstein's Nobel Prize-winning work of 1905, we know that under certain circumstances, it is more precise to regard electromagnetic radiant energy as being distributed in particles called photons. This "wave-particle duality" is the underpinning of modern physics: not just photons, but a zoo of what were once called elementary particles that include protons, neutrons, pions, mesons, and of course the Higgs boson. (Neutrino? It's a long story …).

And what about you? You are a person. Are you a quantum of something too? Well, clearly there are no fractional humans, and we are trivially quantized. But elementary particles, or units, are useful conceptually because they can be considered in isolation, devoid of interactions, like point particles in an ideal gas. You would certainly not fulfil that description, networked, online, and cultured as you undoubtedly are. Your strong interactions with other humans mean that your individuality is complicated by the fact that you are part of a society, and can only function properly in such a milieu. We could go further and say that you are a quantum of a spatially-distributed field, but one that describes the density of humans around each point in space, rather than the electromagnetic field intensity. This description turns out to be technically very powerful for describing the behavior of ecosystems in space and time, particularly to describe extinction, where discontinuous change is important. It seems apt to invoke here the strangely oxymoronic term "Indivi-duality", a counterpart to wave-particle duality.

The notion of individual has several other connotations. It can mean discrete or single, but its etymology is also reminiscent of "indivisible". Clearly we are not indivisible, but are constituted from cells, themselves constituted of cytoplasm, nucleic acids, proteins etc., themselves constituted of atoms, which contain neutrons, protons, electrons, all the way down to the elementary particles which themselves are now believed to be products of string theory, itself known now not to be a final description of matter. In other words, it's "turtles all the way down", and there are no indivisible units of matter, no meaning to the notion of elementary particle, no place to stop. Everything is made of something, and so on ad infinitum.

However, this does not mean that everything is simply the sum of its parts. Take the proton for example, made up of three quarks. It has a type of intrinsic angular momentum called spin, which was initially expected to be the sum of that of its constituent quarks. Yet experiments carried out over the last 20-30 years have shown clearly that this is not the case: the spin arises out of some shared collective aspect of the quarks and short-lived fluctuating particles called gluons. The notion of individual quarks is not useful when the collective behavior is so strong. The proton is made of something, but its properties are not found by adding up the properties of its parts. When we try to identify the something, we discover that, as with Los Angeles, there is no "there" there.

You probably already knew that naïve reductionism is often too simplistic. However, there is another point. It's not just that you are composite, something you already knew, but you are in some senses not even human. You have perhaps a hundred trillion bacterial cells in your body, numbering ten times more than your human cells, and containing a hundred times as many genes as your human cells. These bacteria are not just passive occupants of the zoo that is you. They self-organize into communities within your mouth, guts and elsewhere; and these communities—microbiomes—are maintained by varied, dynamic patterns of competition and cooperation between the different bacteria, which allow us to live.

In the last few years, genomics has given us a tool to explore the microbiome by identifying microbes by their DNA sequences. The story that is emerging from these studies is not yet complete but already has led to fascinating insights. Thanks to its microbes, a baby can better digest its mother's milk. And your ability to digest carbohydrates relies to a significant extent on enzymes that can only be made from genes not present in you, but in your microbiome. Your microbiome can be disrupted, for example due to treatment by antibiotics, and in extreme cases can be invaded by dangerous monocultures, such as Clostridium difficile, leading to your death. Perhaps the most remarkable finding is the gut-brain axis: your gastrointestinal microbiome can generate small molecules that may be able to pass through the blood-brain barrier and affect the state of your brain: although the precise mechanism is not yet clear, there is growing evidence that your microbiome may be a significant factor in mental states such as depression and autism spectrum conditions. In short, you may be a collective property arising from the close interactions of your constituents.

Now, maybe it is true then that you are not an individual in one sense of the word, but how about your microbes? Well, it turns out that your microbes are a strongly interacting system too: they form dense colonies within you, and exchange not only chemicals for metabolism, but communicate by emitting molecules. They can even transfer genes between themselves, and in some cases do that in response to signals emitted by a hopeful recipient: a bacterial cry for help! A single microbe in isolation does not do these things; thus these complex behaviors are a property of the collective, and not the individual microbes. Even microbes that would seem to be from the same nominal species can have genomes which differ in content by as much as 60% of their genes! So much for the intuitive notion of species! That’s another too-anthropomorphic scientific idea that does not apply to most of life.

Up to now I have talked about connections in space. But there are also connections in time. If the stuff that makes the universe is strongly connected in space, and not usefully thought of as the aggregate sum of its parts, then attributing a cause of an event to a specific component may also not be meaningful. Just as you can't attribute the spin of a proton to any one of its constituents, you can't attribute an event in time to a single earlier cause. Complex systems have neither a useful notion of individuality nor a proper notion of causality.

frank_tipler's picture
Professor of Mathematical Physics, Tulane University; Coauthor (with John Barrow), The Anthropic Cosmological Principle

In his Scientific Autobiography, Max Planck recalls that he was unable to persuade the chemist Wilhelm Ostwald that the Second Law of Thermodynamics could not be deduced from the First Law of Thermodynamics. "This experience gave me also an opportunity to learn a fact—a remarkable one, in my opinion: A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Planck also wrote of his conflict with Ostwald: "It is one of the most painful experiences of my entire scientific life that I have but seldom—in fact, I might say, never—succeeded in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof. This is what happened this time, too. All my sound arguments fell on deaf ears. It was simply impossible to be heard against the authority of men like Ostwald, Helm, and Mach."

Fortunately, Planck was able to obtain universal recognition for his Radiation Law, again not by his theoretical proof, but by experimental confirmation.

In recent years there has been a tendency among theoretical physicists, particularly string theorists, to downplay the importance of experimental confirmation. Many have even claimed that Copernicus was not superior in predictive power to Ptolemy. I decided to check this claim by looking at Tycho's notebooks. I discovered that between 1564 and 1601, Tycho compared Copernicus's predictions and Ptolemy's predictions with his own observations 294 times. As I expected, Copernicus was superior. So Copernicus's theory was confirmed as experimentally superior to Ptolemy's long before Galileo. I have thus put the Copernicus-was-no-better-than-Ptolemy idea to the (historical) experimental test, and found that it is false: Copernicus Trumps Ptolemy.

As it was in the beginning of modern science, so it should be now. We should keep the fundamental requirement that experimental confirmation is the hallmark of true science. Since string theorists have failed to propose any way to confirm string theory experimentally, string theory should be retired, today, now.

steve_giddings's picture
Theoretical Physicist; Professor, Department of Physics, University of California, Santa Barbara

Physics has always been regarded as playing out on an underlying stage of space and time. Special relativity joined these into spacetime, and general relativity taught us that this spacetime itself bends and ripples—but it has remained part of the foundations of physics. However, the need to give a quantum-mechanical description of reality challenges the very notion that space and time are fundamental.

We specifically face the problem of reconciling the principles of quantum mechanics with the physics of gravity. At first, physicists believed this meant that spacetime could violently fluctuate to the point of losing meaning—though only at extremely short distances. But attempts to reconcile quantum principles with gravitational phenomena indicate a more profound challenge to the foundational role of spacetime. This comes to the fore when studying both black holes and the evolution of the Universe. Spacetime structure appears to be problematic at very long distances as well.

Quantum mechanics appears to be an inevitable aspect of physics and is remarkably resistant to modification. If quantum principles govern nature, it seems likely that spacetime arises from more fundamentally quantum structures. This is the theme that spacetime is emergent, perhaps roughly similar to the emergence of fluid behavior from the interactions of atoms. 

The problem with fundamental spacetime is even more strongly hinted at from multiple developing perspectives. Notable among these hints is the physics of black holes, where it appears that evolution that respects quantum principles must violate the classical spacetime dictum that information does not propagate faster than the speed of light. Something is apparently very wrong with the standard spacetime picture. Additional evidence mounts when one considers the large-scale structure of the Universe, given quantum principles and the presence of dark energy. Here ultimately spacetime undergoes strong quantum fluctuations at very long scales, and seems to lose meaning. More hints have come from candidate mathematical approaches to fluctuating spacetime.

The apparent need to retire classical spacetime as a fundamental concept is profound, and confronts the reality that a clear successor is not yet in sight. Different approaches to the underlying quantum framework exist; some show promise but none yet clearly resolve our decades-old conundrums in black holes and cosmology. The emergence of such a successor is likely to be a key element in the next major revolution in physics.

simon_baron_cohen's picture
Professor of Developmental Psychopathology, University of Cambridge; Fellow, Trinity College, Cambridge; Director, Autism Research Centre, Cambridge; Author, The Pattern Seekers

Every student of psychology is taught that Radical Behaviorism was displaced by the cognitive revolution, because it was deeply flawed scientifically. Yet it is still practiced in animal behavior modification, and even in some areas of contemporary human clinical psychology. Here I argue that the continued application of Radical Behaviorism should be retired not just on scientific but also on ethical grounds.

The central idea of Radical Behaviorism—that all behavior can be explained as the result of learned associations between a stimulus and a response, reinforced or extinguished through reward and/or punishment—stems from the early 20th century psychologists B.F. Skinner (at Harvard) and John B. Watson (at Johns Hopkins). Radical Behaviorism came under public attack when Skinner's book Verbal Behavior (published in 1957) received a critical review by cognitivist-linguist Noam Chomsky in 1959 in the journal Language. One of Chomsky's scientific arguments was that no amount of exposure to language, and no amount of reward and reinforcement, was going to lead a dog to talk or understand language; whereas for a human infant, despite all the noise in different environments, language learning universally unfolds. This implies there is more to behavior than just learned associations. There are evolved neurocognitive mechanisms.

At times, this debate was portrayed as if it was between nativism (Chomsky clearly stated that just as an embryo grows, so language unfolds, under a universal genetic program) vs. empiricist proponents of tabula rasa (Skinner was painted as if he believed the newborn human mind was no more than a blank slate, although this was something of a straw man, since in at least one interview Skinner clearly acknowledged the role of genetics).

My scientific reason for arguing that Radical Behaviorism should be retired is not to revisit the now stale nature-nurture debate (all reasonable scientists recognize an organism's behavior is the result of an interaction of these), but rather that Radical Behaviorism is scientifically uninformative. Behavior by definition is the surface level, so it follows that the same piece of behavior could be the result of different underlying cognitive strategies, different underlying neural systems, and even different underlying causal pathways. Two individuals can show the same behavior but can have arrived at it through very different underlying causal routes. Think of a native speaker of English vs. someone who has acquired total fluency in English as a second language; or think of a person who is charmingly polite because they are genuinely considerate to others, vs. a psychopath who has learnt how to flawlessly perform being charmingly polite. Identical behavior, produced via different routes. Without reference to underlying cognition, neural activity, and causal mechanisms, behavior is scientifically uninformative.

Given these scientific arguments, you'd have thought Radical Behaviorism would have been retired long ago, and yet it continues to be the basis of 'behavior modification' programs, in which a trainer aims to shape another person's or an animal's behavior, rewarding them for producing surface behavior whilst ignoring their underlying evolved neurocognitive make-up. Over and above the scientific reasons for retiring Radical Behaviorism, I have an ethical reason too.

Lori Marino at Emory University has conducted research at the interface of neuroscience and ethics and examined the life of an orca (a "killer whale") captured in 1983 in Iceland and brought to Sealand of the Pacific, a theme park in British Columbia, and later moved to SeaWorld Orlando in Florida. The orca was trained to do tricks, such as nodding his head in imitation of the trainer nodding her head, or waving his fin in imitation of the trainer waving her hand. The orca dutifully produced the behaviors to get the rewards (food) but, over the years in captivity, he was involved in the deaths of three people. It has never been documented that orcas have killed a human in the wild, so this may have been a reaction to the Radical Behaviorists who were training this orca to show new behaviors, whilst ignoring millions of years of evolved social and emotional neurocognitive circuitry in the animal's brain, circuitry that does not just vanish in captivity.

Orcas are highly social. They live in family groups and complex societies composed of 'clans', each with their own unique vocalization dialect which likely functions to strengthen group identity. They hunt in groups, a sign of their remarkable capacity for social coordination, and both males and females contribute to childcare. Kidnapping one individual orca and placing him or her in captivity not only isolates the animal from their social community, but it reduces their life expectancy, and causes signs of ill-health, such as the frequent collapse of the dorsal fin. The use of Radical Behaviorism towards such animals in captivity is doubly unethical, because of its lack of respect for the animal's real nature. The focus on shaping surface behavior ignores who or what the animal really is.

There may be ethical lessons here when we think about the still widespread use of behavior modification of humans in contemporary clinical settings: the need to respect how a person thinks and feels, respecting their real nature, rather than simply focusing on whether they can be trained to change their surface behavior.

michael_i_norton's picture
Harold M. Brierley Professor of Business Administration, Director of Research, Harvard Business School; Co-author (with Elizabeth Dunn), Happy Money

Markets can have terrible consequences. Take just one example. In an ingenious experiment, researchers showed that people who entered a market where the lives of animals were priced as commodities were more likely to devalue the lives of those animals—treating those lives as nothing more than opportunities for profit.

Markets can have uplifting consequences. Take just one example. In a series of investigations, researchers show that efficient markets have contributed to the development of countless life-saving drugs (albeit sometimes with a little governmental help), bettering the lives of billions.

Yet in popular and scientific discourse, it is uncommon to see markets described as anything except truly evil and fundamentally flawed (left-leaning pundits and scholars), or truly perfect and self-correcting (right-leaning pundits and scholars).

It is time to retire both theories: that markets are good, and that markets are bad.

Taking a step back and remembering what markets are—an aggregation of many individuals—makes it obvious that markets are very unlikely to be good or bad. Replace the word "markets" with another shorthand term for an aggregation of individuals: "groups." We certainly don't view groups as good or bad. Groups are capable of amazing selflessness, generosity, and heroism; they are also capable of selfishness, greed, and cruelty. They are capable of amazing performance (think of Bell Labs); they are also capable of terrible performance (think of the many dysfunctional groups of which you have been a member).

When we think of groups, we think of the conditions under which groups are likely to behave well or behave poorly. We don't often think of them as self-correcting, as always performing well over time, or most importantly, as either inherently good or inherently bad.

Applying the same logic to markets—think of them in this context as "groups writ large"—will assist with the development of a richer and more accurate theory of when and why markets are likely to have terrible or uplifting consequences.

stephen_m_kosslyn's picture
Founding Dean, Minerva Schools at the Keck Graduate Institute

Solid science sometimes devolves into pseudoscience, but the imprimatur of being science nevertheless may remain. There is no better example of this than the popular "left brain/right brain" narrative about the specializations of the cerebral hemispheres. According to this narrative, the left hemisphere is logical, analytic, and linguistic whereas the right is intuitive, creative, and perceptual. Moreover, each of us purportedly relies primarily on one half-brain, making us "left-brain thinkers" or "right-brain thinkers."

This characterization is misguided, and it's time to put it to rest.

Two major problems can be identified at the outset:

First, the idea that each of us relies primarily on one or the other hemisphere is not empirically justifiable. The evidence indicates that each of us uses all of our brain, not primarily one side or the other. The brain is a single, interactive system, with the parts working in concert to accomplish a given task.

Second, the functions of the two hemispheres have been mischaracterized. Without question, the two hemispheres engage in some different kinds of information processing. For example, the left preferentially processes details of objects we see whereas the right preferentially processes the overall shape of objects we see; the left preferentially processes syntax (the literal meaning), the right pragmatics (the indirect or implied meaning) and so forth. Our two hemispheres are not like our two lungs: One is not a "spare" for the other, redundant in function. But none of these well-documented hemispheric differences come close to what's described in the popular narrative.

It is time to move past the popular but incorrect left brain/right brain narrative.
 

mary_catherine_bateson's picture
Professor Emerita, George Mason University; Visiting Scholar, Sloan Center on Aging & Work, Boston College; Author, Composing a Further Life

 

Scientists sometimes resist new ideas and hang on to old ones longer than they should, but the real problem is the failure of the public to understand that the possibility of correction or disproof is a strength and not a weakness. We live in an era when it is increasingly important that the voting public be able to evaluate scientific claims and be able to make analogies between different kinds of phenomena, but this can be a major source of error. The process by which scientific knowledge is refined is largely invisible to the public. The truth-value of scientific knowledge is dependent upon its openness to correction, yet we all carry around ideas that science has long since revised—and are disconcerted when asked to abandon them. Surprise: you will not necessarily drown if you go swimming after lunch.

 

A blatant example is the role of competition in evolution, which is treated by many as a scientifically established law of nature, and often taken for granted by economists and psychologists...at the same time that others argue that evolution, being a "theory", is no more than a "guess." Biology has been steadily giving increasing recognition to the importance of symbiosis in evolution, alongside competition, as well as diversification that bypasses competition, but "the survival of the fittest," a metaphor drawn by Darwin from the description of early industrial society by Herbert Spencer, survives as a binding metaphor for human behavior.

Most people are not comfortable with the notion that knowledge can be authoritative, can call for decision and action, and yet be subject to constant revision, because they tend to think of knowledge as additive, not recognizing the necessity of reconfiguring in response to new information. It is precisely this characteristic of scientific knowledge that encourages the denial of climate change and makes it so difficult to respond to what we do know in a context where much is still unknown. 

What kind of evidence will convince the doubters of the reality of what might best be called climate disruption? Perhaps the exploration of scientific ideas in need of retirement should be an annual event, with a clear emphasis on the fact that each new synthesis of complex data is potentially more inclusive. Retiring concepts that no longer fit is not primarily a matter of eliminating error but of integrating new information and newly recognized connections into our understanding.     
 

roger_highfield's picture
Director, External Affairs, Science Museum Group; Co-author (with Martin Nowak), SuperCooperators

Politicians, poets, philosophers and the religious often like to talk about the truth. In contrast, most scientists would think it overblown to describe a field of research as being 'true', though they do all seek truths of a mathematical kind: quantum theory, for example, is true in the sense that experiment after experiment supports its predictions about how the world works, no matter how odd, unsettling or counterintuitive.

In the same way, when I studied chemistry at university, I was never told about the truth of the Periodic Table, though I did marvel at how Mendeleev had glimpsed the electronic structure of atoms. But why do some biologists talk about the truth so much when it comes to evolution? After all, one can hardly say that everything that is written about evolution is "true". But it is a mistake to counter irrational beliefs with rhetoric about the Truth.

Intelligent design and other Creationist critiques have been easily shrugged off and the facts of evolution well established in the laboratory, fossil record, DNA record and computer simulations. If evolutionary biologists are really Seekers of the Truth, they need to focus more on finding the mathematical regularities of biology, following in the giant footsteps of Sewall Wright, JBS Haldane, Ronald  Fisher and so on.

The messiness of biology has made it relatively hard to discern the mathematical fundamentals of evolution. Perhaps the laws of biology are deductive consequences of the laws of physics and chemistry. Perhaps natural selection is not a statistical consequence of physics, but a new and fundamental physical law. Whatever the case, those universal truths—'laws'—that physicists and chemists all rely upon appear relatively absent from biology.

Little seems to have changed from a decade ago when the late and great John Maynard Smith wrote a chapter on evolutionary game theory for a book on the most powerful equations of science: his contribution did not include a single equation. 

Yet there are already many mathematical formulations of biological processes and evolutionary biology will truly have arrived the day that high school students learn the Equations of Life in addition to Newton's Laws of Motion.

Moreover, if physics is an example of what a mature scientific discipline should look like, one that does not waste time and energy combating the agenda of science-rejecting creationists, we also need to abandon the blind adherence to the idea that the mechanisms of evolution are Truths that lie beyond discussion.

Gravity, like evolution, exists but Newton’s view of gravitation was absorbed into another view that Einstein devised a century ago. Even today, however, there is debate about whether our understanding of gravity will have to be modified again, when we are finally enlightened about the nature of the dark universe.


robert_provine's picture
Professor Emeritus, University of Maryland, Baltimore County; Author, Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond

We fancy ourselves intelligent, conscious and alert, and thinking our way through life. This is an illusion. We are deluded by our brain's generation of a sketchy, rational narrative of subconscious, sometimes irrational or fictitious events that we accept as reality. These narratives are so compelling that they become common sense and we use them to guide our lives. In cases of brain damage, neurologists use the term confabulation to describe a patient's game but flawed attempt to produce an accurate narrative of life events. I suggest we be equally wary of everyday, non-pathological confabulation and retire the common sense hypothesis that we are rational beings in full conscious control of our lives. Indeed, we may be passengers in our body, just going along for the ride, and privy only to second-hand knowledge of our status, course and destination.

Behavioral and brain science detects chinks in our synthetic, neurologically generated edifice of reality. Research on sensory illusions indicates that percepts are simply our best estimate of the nature of physical stimuli, not a precise rendering of things and events. The image of our own body is an oddly shaped product of brain function. Memory of things past is also fraught with uncertainty; it is not the reading-out of information from the brain's neurological data bank, but an ongoing construct subject to error and bias. The brain also makes decisions and initiates action before the observer is consciously aware of detecting and responding to stimuli. My own research found that people confabulate narratives to rationalize their laughter, such as "It was funny," or "I was embarrassed," neglecting laughter's involuntary nature and frequent contagiousness.

Our lives are guided by a series of these guesstimates about the behavior and mental state of ourselves and others that, although imperfect, are adaptive and sufficiently accurate to enable us to muddle along. However, as scientists, we demand more than default explanations based on common sense. Behavioral and brain science provides a path to understanding that challenges the myths of mental life and everyday behavior. One of its delights is that it often turns reality on its head, revealing hidden processes and providing revelations about who we are, what we are doing, and where we are going.

brian_knutson's picture
Professor of Psychology and Neuroscience, Stanford University

Some still assume that emotion is peripheral, but the time has come to recognize that emotion is central.

The claim that emotion is peripheral can be taken both literally and figuratively. From a literal standpoint, experts have argued about which physiology is necessary for emotional experience since the birth of experimental psychology in the Gilded Age. On the one hand, in his seminal essay "What is an emotion?" William James counterintuitively argued that when we encounter a bear, peripheral (i.e., below the neck) physiological changes occur (e.g., the stomach clenches, heart pounds, and skin sweats) which then generate an experience of emotion (e.g., fear). By implication, peripheral responses must occur before the feeling of fear.

On the other hand, his Harvard colleague Walter Cannon countered that brain activity causes both emotional experience and peripheral responses. Cannon based his argument on research (e.g., in which emotional responses could be evoked by stimulating the brains of cats, who continued to show those emotional responses after spinal cord lesions), as well as on physiological logic (i.e., peripheral responses were too slow, insensitive, and undifferentiated to drive emotional experience). Thus, although James was a creative thinker and persuasive writer, he reasoned from the armchair, whereas the stolid and understated Cannon (who also innovated influential concepts such as "homeostasis" and "fight or flight") brought data to bear on the debate.

I seem to keep revisiting this century-old academic scuffle. That's because peripheralist assumptions still form the backbone of many modern emotion theories (e.g., in the form of peripheral somatic signals, or embodiment, or indeed any sensory process purported to mediate emotion). Of course, peripheral responses can modulate emotion—but they are simply not fast or specific enough to mediate the kinds of rapid emotional responses that ensured our ancestors' survival. Emotion also undoubtedly generates peripheral responses, but without information about which came first, correlated action does not imply causal direction. To be fair to the peripheral view, scientists presently lack a quantitative computational model of exactly how the brain generates emotion, and the neural mechanisms are still being worked out. But as the next few years of brain stimulation, lesion, and imaging evidence accumulates, I am betting that the central account of emotion will prevail.

From a figurative standpoint, the problematic assumptions of emotional peripheralism run deeper. An even older debate focuses on emotion's function rather than structure. Specifically, is emotion peripheral or central to mental function? A peripheralist viewpoint might posit that emotion does not influence or even disrupts mental function. While the historical roots of such an assumption may reach back as far as Zoroastrian dualism, Rene Descartes typically gets the blame for importing dualism from the church to science. Descartes split the mind and passions by placing the mind with the spirit but the passions with the body (where they took the form of "animal spirits" purported to move the pineal gland). According to Cartesian mind-body dualism, the mind could thus operate independently from disruptions of excessive passions.

In contrast to this peripheralist vision, a distinct depiction of the centrality of emotion to mental function comes not from the West but rather from the East. The Tibetan Buddhist "Wheel of Life" represents passionate attachments as animals that occupy the hub of a spinning wheel, driving thought and behavior. In both schemes, excessive passions can divert thought and action, but in Descartes' scheme, emotion disrupts the mind from the periphery, whereas in the Buddhist scheme emotion drives the mind from the center. If emotion is central to mental function, then our inherited scientific map of the mind is inside-out.

Indeed, the absence of emotion pervades modern scientific models of the mind. In the most popular mental metaphors of social science, mind as reflex (from behaviorism) explicitly omits emotion, and mind as computer (from cognitivism) all but ignores it. Even when emotion appears in later theories, it is usually as an afterthought—an epiphenomenal reaction to some event that has already passed. But over the past decade, the rising field of affective science has revealed that emotions can precede and motivate thought and behavior.

Emerging physiological, behavioral, and neuroimaging evidence suggests that emotions are proactive as well as reactive. Emotional signals from the brain now yield predictions about choice and mental health symptoms, and may soon guide scientists to specific circuits that confer more precise control over thought and behavior. Thus, the price of continuing to ignore emotion's centrality to mental function could be substantial. By assuming the mind is like a bundle of reflexes, a computer program, or even a self-interested rational actor, we may miss out on significant opportunities to predict and control behavior—both in individuals and groups.

Literally and figuratively, we should stop relegating emotion to the periphery, and move emotion to the center—where it belongs.
 

buddhini_samarasinghe's picture
Molecular Biologist

It is a statistical fact that you are more likely to die while horseback riding (1 serious adverse event every ~350 exposures) than from taking Ecstasy (1 serious adverse event every ~10,000 exposures). Yet, in 2009, the scientist who said this was fired from his position as the chairman of the UK's Advisory Council on the Misuse of Drugs. Professor David Nutt's remit was to make scientific recommendations to government ministers on the classification of illegal drugs based on the harm they can cause. He was dismissed because his statement highlighted how the UK Government's policies on narcotics are at odds with scientific evidence. Today, the medical use of drugs such as cannabis remains technically illegal.

Such incidents of silencing are sadly commonplace when it comes to politically controversial scientific topics. The US Government muzzled climate scientists in a similar manner in 2007, when it was reported that 46% of 1600 surveyed scientists were warned against using terms like "global warming" and 43% said their published work had been revised in ways that altered their conclusions. US preparations for oncoming climate change were checked as a result, a failing that persists today. Going back further, the story of Nikolai Vavilov is chilling. Vavilov was a plant geneticist in the Soviet Union under Joseph Stalin. He was jailed in 1940 for criticizing the pseudo-scientific views of Trofim Lysenko, a protégé of Stalin. Vavilov died of starvation in prison a few years later; scientific dissent from Lysenko's "theories" of Lamarckian inheritance was outlawed in 1948. Soviet agriculture languished for decades because of Lysenkoism; meanwhile famine decimated the population.

The scientific method is defined by the Oxford English Dictionary as "a method or procedure...consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses". It is our finest instrument for unearthing the truth. Applied correctly it is blind to and corrects for our inherent biases. Scientists are trained to wield this formidable tool in their quest to understand the universe around us. The truths they uncover can be at odds with our current beliefs; but when the facts (based on evidence and arrived at through rigorous testing) change, minds also need to change.

I use the examples above of sidelined scientists to illustrate the consequences of excluding science from the policy making process. But sometimes the sidelining is self-imposed: scientists can be genuinely reluctant to get involved in such activity and instead prefer to focus on gathering data and publishing results.

There is a tacit understanding, a custom in the culture of science, that scientists practice the scientific method in the confines of the Ivory Tower. Scientists are seen as impartial, aloof individuals with a single-minded focus on their work and out of touch with the realities of the world around them. Scientists are expected to only do science, to find the truth and then leave it up to everyone else to decide what to do with it.

This is untenable. Scientists have a moral obligation to engage with the public about their findings; to advise and speak out on policy, and to critique its consequent implementation. Science impacts on the life of every single species on our planet. It is ludicrous that the very people who discover the facts are not part of any subsequent policy-making dialogue. Science needs to be an essential component of the public discourse; currently it is not. The consequences of that disconnect can be dire, as evinced by the criminalization of drugs that can provide relief to sufferers of chronic pain, troubling delays in programs of vital national importance, and the famine that slaughtered millions of Soviet citizens under Stalin's regime.

Scientists should not simply stick to doing science. Perhaps we need to extend the scientific method to include a requirement for communication. Young scientists should be taught the value and necessity of communicating their findings to the general public. Scientists should not shy away from controversy, because some topics should not be controversial to begin with. The scientific evidence for the efficacy of vaccines, the process of evolution, the existence of anthropogenic climate change is accepted in the scientific community. Yet, within the public sphere, goaded by a sensationalizing mainstream media and politicians seeking re-election, these settled facts are made to appear tentative. Science is based on evidence, and if that evidence tells us something new we need to incorporate that into our policies. We cannot ignore it simply because it is unpopular or inconvenient.

By passionately advocating for evidence-based policy, scientists will expand scientific research, reversing the trend of recent years; and by thus visibly working for the common weal scientists will earn the public's trust, protecting long-term investigations from short-sighted cuts. Scientific advancement is utterly dependent on public funding and public backing. The Space Race, the Human Genome Project, the search for the Higgs boson and the Mars Curiosity Rover Mission were all enthusiastically embraced by the public. The progress of science demands that scientists engage the public. But for that to happen the notion that a scientist should stay hidden away in a laboratory needs to be retired.
 

laura_betzig's picture
Anthropologist; Historian

Years ago, when I sat at the feet of the master, the King of the Amazon Jungle liked to talk about culture. He quoted his own teachers, who considered it sui generis: culture was a thing in and of itself. It made us more than the sum of our biological parts; it emancipated us from the Promethean bonds of our evolutionary past. It set us apart from other animals, and made us special.

Napoleon Chagnon wasn't so sure about that, and neither was I.

What if the 100,000-odd year-old evidence of human social life—from the arrowheads in South Africa, to the Venus figurines at Dordogne—is the effect of nothing, more or less, but our efforts to become parents?  What if the 10,000-odd year-old record of civilization—from the tax accounts at temples in the Near East, to the inscription on a bronze statue in New York Harbor—is the product of nothing, more or less, but our struggle for genetic representation in future generations?

Either case can be made. For 100,000 years or more, prehistoric foragers probably lived like contemporary foragers in Africa, or Amazonia. They probably did their best to live in peace, but occasionally fought over the means of production and reproduction—so that the winners cohabited with more women, and supported more children. And they probably were more likely to fight where it was harder to flee—on territories where resources were easy to come by, and food and shelter on nearby territories were relatively scarce.

Then, within just the last 10,000 years, the first civilizations were built. From Mesopotamia to Egypt, from India to China, then in Greece and Rome, eusocial emperors—like eusocial insects—turned some of their subordinates into sterile castes, but were extraordinarily fertile themselves. A praepositus sacri cubiculi, or eunuch set over the sacred bedchamber, eventually ran the empire on the Tiber; and other eunuchs collected revenues, led armies, and kept track of the hundreds of "homeborn" children in the Familia Caesaris—the imperial family in Rome. Then the barbarians invaded, and the emperor took his slave harem off to a secure spot on the Bosporus.

And the Republic of St Peter took over in the depopulated west. From Clovis' kingdom in Paris, to Charlemagne's empire at Aachen, to the Holy Roman conglomerate east of the Rhine, cooperatively breeding aristocrats—like cooperatively breeding birds—turned some of their sons and daughters into celibates, but raised others to become husbands and wives. Abbesses, abbots and bishops administered estates and conscripted troops, or instructed their nieces and nephews in monastery schools; and their older brothers begot heirs to their enormous castles, or covered the countryside with bastards. Then the Crusaders took ships to the Near East, and Columbus led the first waves of immigrants across the Atlantic.

Over the next few centuries, hordes of poor, huddled masses from across the Old World found places to breathe free on the American Continents. Millions of solitary slaves and serfs, and thousands of unmarried priests and monks—like helper birds, or social insect workers, whose habitats had opened up—walked away from their lords and masters, and out of their cathedrals and abbeys. They were hoping to secure liberty for themselves and their posterity; they were looking for places to raise their own families. In the Common Sense words of a common man, Tom Paine: "Freedom hath been hunted round the globe. Asia, and Africa, have long expelled her. —Europe regards her like a stranger, and England hath given her warning to depart. O! receive the fugitive, and prepare in time an asylum for mankind."

Since those early days, when I learned from Napoleon Chagnon, it's seemed to me that CULTURE is a 7-letter word for GOD. Good people—some of the best, and intelligent people—some of the smartest, have found meaning in religion: they have faith that something supernatural guides what we do. Other good, intelligent people have found meaning in culture: they believe that something superzoological shapes the course of human events. Their voices are often beautiful; and it's wonderful to be part of a chorus. But in the end, I don't get it. For me, the laws that apply to animals apply to us.

And in that view of life, there is grandeur enough.

gerd_gigerenzer's picture
Psychologist; Director, Harding Center for Risk Literacy, Max Planck Institute for Human Development; Author, How to Stay Smart in a Smart World

As a young man, Gottfried Wilhelm Leibniz had a beautiful dream: to discover the calculus that could map every single idea in the world into symbols. Such a universal calculus would put an end to all scholarly bickering—every passionate Edge discussion, for one, could be swiftly resolved by dispassionate calculation. Leibniz optimistically estimated that a few skilled persons should be able to work the whole thing out in five years. But nobody, Leibniz included, has yet found that holy grail.

Nonetheless, Leibniz's dream is alive and thriving in the social and neurosciences. Because the object of the dream has not been found, "ersatz objects" serve in its place. In some fields, it's multiple regression; in others, Bayesian statistics. But the champ is the null ritual:
 
1. Set up a null hypothesis of "no mean difference" or "zero correlation." Don't specify the predictions of your own research hypothesis.
 
2. Use 5 percent as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p<.05, p<.01, or p<.001, whichever comes next to the obtained p-value.
 
3. Always perform this procedure, as sketched in the code below.
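To see just how mechanical the ritual is, here is a minimal sketch of the three steps in Python. This is an illustration only, not anyone's published procedure; the t-test merely stands in for whichever test a field happens to favor, and the function is hypothetical.

    # A deliberately mechanical rendering of the null ritual (illustration only).
    from scipy.stats import ttest_ind

    def null_ritual(sample_a, sample_b):
        # Step 1: null hypothesis of "no mean difference"; no prediction
        # is ever stated for one's own research hypothesis.
        t_stat, p = ttest_ind(sample_a, sample_b)
        # Step 2: the sacred 5 percent convention, reported at the nearest
        # conventional threshold.
        if p < .001:
            return "accept research hypothesis, p < .001"
        if p < .01:
            return "accept research hypothesis, p < .01"
        if p < .05:
            return "accept research hypothesis, p < .05"
        return "not significant (file drawer)"
    # Step 3: always perform this procedure, whatever the question.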
 
Not for a minute should anyone think that this procedure has much to do with statistics proper. Sir Ronald Fisher, to whom it has been wrongly attributed, in fact wrote that no researcher should use the same level of significance from experiment to experiment, while the eminent statisticians Jerzy Neyman & Egon Pearson would roll over in their graves if they knew about its current use. Bayesians too have always detested p-values. Yet open any journal in psychology, business, or neuroscience and you are likely to encounter page after page with p-values. To give just a few illustrations: In 2012, the average number of p-values in the Academy of Management Journal, the flagship empirical journal in its field, was 116 per article, ranging between 19 and 536! Typical of management, you might think. But if you take a look at all behavioral, neuropsychological and medical studies with humans published in 2011 in Nature, 89% of them reported p-values only—without even considering effect size, confidence interval, power, or model estimation.
 
A ritual is a collective or solemn ceremony consisting of actions performed in a prescribed order. It typically includes (i) sacred numbers or colors, (ii) delusions to avoid thinking about why one is performing the actions, and (iii) fear of being punished if one stops performing them. The null ritual contains all these features.
 
The number "5 percent" is held sacred, allegedly telling us the difference between a real effect and random noise. In fMRI studies, the numbers are replaced by colors, and the brain is said to light up.
 
The delusions are striking; if psychiatrists had any appreciation of statistics, they would have entered these aberrations into the DSM. Studies in the US, UK, and Germany showed that most researchers do not (or do not want to) understand what a p-value means. They confuse the p-value with the probability of a hypothesis, that is, p(Data|Ho) with p(Ho|Data), or with something else that wishful thinking desires, such as the probability that the data can be replicated. Startling errors are published in top journals. For instance, a most elementary point is that in order to investigate whether two means differ, one should test their difference. What should not be done is to test each mean against a common baseline, such as: "Neural activity increased with training (p < .05) but not in the control group (p > .05)." A 2011 paper in Nature Neuroscience analyzed neuroscience articles in Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience and showed that although 78 did as they should, 79 used the incorrect procedure.
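The point can be made concrete with a small simulation—a hedged sketch with invented numbers and group sizes, not a reanalysis of any published study. Testing each group separately against zero can straddle the 5 percent line even when the direct comparison between groups shows no reliable difference; the direct comparison is the test that actually answers the question.

    # Illustration only: simulated "change scores" for a training group and a control group.
    import numpy as np
    from scipy.stats import ttest_1samp, ttest_ind

    rng = np.random.default_rng(0)
    training = rng.normal(loc=0.5, scale=1.0, size=20)  # hypothetical change after training
    control = rng.normal(loc=0.3, scale=1.0, size=20)   # hypothetical change in controls

    # Incorrect procedure: test each mean against a common zero baseline.
    p_training = ttest_1samp(training, 0).pvalue  # may fall below .05
    p_control = ttest_1samp(control, 0).pvalue    # may fall above .05
    # "Significant in one group but not the other" does not establish a group difference.

    # Correct procedure: test the difference between the groups directly.
    p_difference = ttest_ind(training, control).pvalue
    print(p_training, p_control, p_difference)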
 
Not performing the ritual can provoke great anxiety, even when it makes absolutely no sense. In one study (the authors' names are irrelevant), Internet participants were asked whether there is a difference between heroism and altruism. The vast majority thought so: 2,347 respondents (97.5%) said yes, and 58 said no. What did the authors do with that information? They computed a chi-square test, calculated that χ²(1) = 2178.60, p < .0001, and came to the astounding conclusion that there were indeed more people saying yes than no.
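The arithmetic is easy to reproduce. A minimal sketch (not the authors' code) shows that the enormous statistic adds nothing to what the raw counts already say:

    # Reproducing the reported test from the counts alone (illustration only).
    from scipy.stats import chisquare

    observed = [2347, 58]          # "yes" vs. "no" responses quoted in the text
    chi2, p = chisquare(observed)  # null hypothesis: an even 50/50 split
    print(round(chi2, 1))          # -> 2178.6, matching the published value to rounding
    # p is so small it underflows to 0.0; the conclusion was obvious from the counts.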
 
One manifestation of obsessive-compulsive disorder is the ritual of compulsive hand washing, even if there is no reason to do so. Likewise, researchers adhering to the null ritual perform statistical inferences all the time, even in situations where there is no point: that is, when no random sample was taken from a population, or no population was defined in the first place. In those cases, the statistical model of repeated random sampling from a population does not even apply, and good descriptive statistics is called for. So even if a significant p-value has been happily calculated, it's not clear what population is meant. The problem is not statistics, but its mistaken use as an automatic inference machine.
 
Finally, just as compulsive worrying and hand washing can interfere with the quality of life, the craving for significant p-values can undermine the quality of research. Which it has: Finding significant theories has been largely replaced by finding significant p-values. This surrogate goal encourages questionable research practices such as selectively reporting studies and conditions that "worked", or excluding data after looking at their impact on the results. According to a 2012 survey in Psychological Science of some 2,000 psychologists, over 90% admitted to having engaged in at least one of these or other questionable research practices. This massive borderline cheating in order to produce significant p-values is likely more harmful to progress than the rare cases of outright fraud. One harmful outcome is a flood of published but irreproducible results. Genetic and medical research using big data has encountered similar surprises when trying in vain to replicate published findings.
 
I do not mean to throw out the baby with the bathwater and get rid of statistics, which offers a highly useful toolbox for researchers. But it is time to get rid of statistical rituals that nurture automatic and mindless inferences.
 
Scientists should study rituals, not perform rituals themselves.

paul_bloom's picture
Brooks and Suzanne Ragen Professor of Psychology and Cognitive Science, Yale University; Author, Against Empathy

Psychologists have made striking discoveries about what makes people happy. Some of these findings clash with common sense. It turns out, for instance, that we are much better than we think we are at rebounding from negative experiences—we are usually blind to the workings of what Daniel Gilbert calls our "psychological immune system". Other discoveries mesh with what our grandmothers could have told us, such as the happiness boost from being with friends and the misery that often comes from solitude. Better to live as Donald Duck than as Scrooge McDuck.

Some leading researchers believe that as this work proceeds, we will converge on a complete scientific solution as to how to maximize our happiness. I think this is mistaken. Even assuming a perfectly objective definition of happiness—and putting aside the distinction between a happy life and a good life—the issue of how to construct a maximally happy life falls, at least in part, outside the domain of science.

To see why, consider a related question: How can we determine the happiest society? As Derek Parfit and others have pointed out, even if you can precisely measure the happiness of each individual, this remains a vexingly hard question. Should we choose the society with the highest total happiness? If so, then a trillion people living miserable lives (but not so miserable that they would rather be dead) will be "happier" than a billion immensely happy people.

This seems wrong. Do we calculate averages? If so, then a society with a majority of extremely happy individuals and a small minority who are suffering terrible torment might be "happier" than a society where everyone is merely very happy. This seems wrong too. Or consider the contrast between (a) a society in which people are equally happy and (b) a society with gross inequality, but one with both a larger total happiness and a larger average happiness than (a). Which is happier? This is a hard problem, with real-world relevance, and it isn't the sort of problem that will be solved through the methods of science, because science provides no empirical recipe for how overall happiness should be calculated.
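A toy calculation with invented numbers makes the impasse concrete: the two most natural aggregation rules rank the same pair of societies in opposite orders, and no measurement can arbitrate between the rules themselves.

    # Invented happiness scores for two hypothetical societies (illustration only).
    huge_but_drab = [1] * 1_000_000    # very many lives, each barely worth living
    small_but_joyful = [90] * 10_000   # far fewer lives, each immensely happy

    for name, scores in (("huge but drab", huge_but_drab),
                         ("small but joyful", small_but_joyful)):
        print(name, "total:", sum(scores), "average:", sum(scores) / len(scores))
    # Total happiness favors the huge drab society (1,000,000 vs. 900,000);
    # average happiness favors the small joyful one (1 vs. 90).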

Importantly, as Parfit notes, the same problems arise with regard to an individual life. How should one balance one's happiness across a lifetime? Which life is happier—one that is somewhat happy throughout or one that is a balance between joy and misery? Again, this isn't the sort of question that can be solved experimentally.

Then there are moral concerns. We are often faced with situations in which we have to choose whether to sacrifice our own happiness for the benefit of others. Most of us make such sacrifices for friends and families; some of us do so for strangers. Framed this way, it's a moral problem, not a hedonic one: a perfect hedonist would help others only to the extent that she believed it would increase her own happiness. But now consider that the same trade-offs apply for a single individual, within a single lifespan. Think of your happiness now and ask yourself how much you will give up, not for another person, but for yourself in the future. 

Life is full of such choices. When we indulge in certain immediate pleasures—fatty foods, unsafe sex, living like there's no tomorrow—we are greedily maxing out on our happiness now, at the expense of the happiness of our future selves. When we sacrifice for the future—unpleasant exercise, healthy and tasteless foods, saving for a rainy day—we are altruists, sacrificing now for the happiness of our future selves. Surprisingly, then, even the most selfish hedonist has to wrestle with moral questions, and seeming scientific questions about happiness quickly turn into manifestly non-scientific questions about the right thing to do.

 

kurt_gray's picture
Associate Professor of Psychology, University of North Carolina, Chapel Hill; Co-author (with Daniel Wegner), The Mind Club

"I was much struck how entirely vague and arbitrary is the distinction between species and varieties." Darwin (1859)

For centuries, there was one way to bring order to the vastness of biological diversity—Linnaean classification. In the 18th century, Carl Linnaeus devised a method for dividing species based upon their description—do they look the same, do they behave the same? With Linnaean classification, you could divide the natural world into discrete kinds—you could count them, saying confidently "there are two species of elephants" or "there are four kinds of bears." Some psychologists seek to bring the same order to the mind, claiming that "there are six emotions," "there are five types of personality," or "there are three moral concerns." These psychologists are inspired by the precision, order and neatness of Linnaeus' ideas—the only problem is that Linnaeus was wrong.

Linnaeus lived about a hundred years before Darwin introduced the theory of evolution, and long believed that species were fixed and unchangeable. His religious roots led him to see species as a product of divine providence and his job was simply to catalog these distinct kinds, once writing "God created, Linnaeus ordered." If God created a certain number of distinct species, cataloging and counting them made sense. It was meaningful to ask "how many salamanders did God create?"

Evolution, however, destroyed the sanctity of species. Species were not created whole from The Beginning, but instead emerged over time through the repetition of a simple algorithm: heredity, mutation and selection. Evolution showed that a dizzying diversity of life—from viruses, to cacti, to humans—is explained through a basic set of common processes expressed in different environments. This common process means that lines between species are more in the mind of humans than in nature, with many intermediate animals (e.g., lungfish) and hybrids (e.g., ligers) that defy easy categorization. Moreover, in geological time, these divisions are even more arbitrary, with species diverging and converging as continents separate and collide.

Biology has all but realized that species are not reflections of eternal Divine Order, but simply a useful way to intuitively organize the world. Unfortunately, psychology lags behind. Many psychologists believe that the mental world is fixed and countable, that the appearance of mental states reflects a deeper essence. Introductory psychology textbooks contain numbered lists of psychological species—5 kinds of human needs, 6 basic emotions, 3 moral concerns, 3 kinds of love, 3 parts of the mind—with these lists depending primarily upon the intuitions of those who are doing the counting.

As with Linnaeus in the 18th century, these intuitive taxonomies were once the best we could do, because psychology lacked an understanding of basic psychological processes. However, social cognition and neuroscience have revealed these processes, and found that diverse mental experiences—from emotion to morality to motivation—are combinations of more basic affective and cognitive processes. This research suggests that psychological states are not firmly demarcated "things" with enduring essences, but are instead fuzzy constructs that emerge from common psychological processes expressed across different environments.

Just as evolution can create infinite species by expressing a common process in specific environments, so too can the mind create infinite mental species. One can no sooner count emotions or moral concerns than snowflakes or colors. To be sure, there are descriptive similarities and differences across instances, but any groupings are arbitrary and rest heavily on the intuition of researchers. This is why scientists can never agree on the fundamental number of anything; one scientist may divide a mental experience into 3, another 4, and another 5.

It is time for psychology to abandon the enterprise of numbering nature, and recognize that psychological species are neither distinct nor real. Biology has long recognized the arbitrary and constructed nature of species; why are we more than 200 years behind? The likely answer is that people—even including psychologists and philosophers—believe that intuitions, as products of the mind, are accurate reflections of its structure. Unfortunately, decades of research demonstrate the flaws of intuitive realism, revealing that intuitions about the mind are poor guides to underlying psychological processes.

Psychologists must move from counting to combining. Counting is simply describing the world; one psychologist's intuitive ordering of mental states in one culture, at one time. Combining seeks to find basic psychological elements and discover how they interact to create the mental world. In biology, counting asks "how many salamanders are there?" whereas combining asks "what processes lead to salamander diversity?" Counting is bound to a specific environment and time, whereas combining recognizes these factors as processes themselves. Psychology must follow biology and move from numbering individual species to exploring underlying systems.

This process has already begun. Thomas Insel, the head of NIMH, has prioritized systems over species in psychopathology research. He rejects the utility of the DSM, suggesting that intuitive taxonomies obscure the underlying processes of psychopathology and impede the discovery of treatments. NIMH funds proposals that examine the underlying affective, conceptual and neurological systems, which may explain why the "distinct" disorders of depression and anxiety are so often comorbid, and why selective serotonin reuptake inhibitors (SSRIs) appear to help diverse disorders. Psychopathology does not easily fit into categories, and neither do other psychological phenomena.

Of course, we shouldn't throw out the baby with the bathwater. It is still necessary to catalog the natural world to allow meaningful discussion. Even in biology, where the power of the process of evolution is undisputed, most acknowledge the utility of Linnaeus' system and continue to use the names he provided years ago. But the key is not to confuse human constructions with natural order; what is useful to humans is not necessarily true of nature. Intuitive taxonomies are a necessary first step in psychological science, but even Linnaeus—as he learned more about the world—recognized the arbitrariness of his system and the species he labeled. It is time for psychology to recognize this fact as well, and leave behind Linnaeus and the 18th century.

daniel_goleman's picture
Psychologist; Author (with Richard Davidson), Altered Traits

Buy potato chips in London and a number on the bag will tell you its carbon footprint equals 75 grams of carbon emissions. That label serves two excellent functions: it renders transparent the ecological impact of those chips, and lowers the cognitive cost to zero of learning that impact.

 Such carbon footprint ratings, in theory, allow shoppers to favor products with better impacts, and companies to do the same with their operations. Well and good. Except the footprint concept, intended to mobilize the mass changes we need, ignores fundamentals of human motivation, tending to stifle change, not encourage it.

It's time we moved beyond talking about "carbon footprints", replacing the concept with a more precise measure of all the negative impacts of a given human activity on planetary systems for sustaining life. And while we're at it, let's go easy on the very idea of any kind of 'footprints'—the numbers are demoralizing. There's a more motivating replacement waiting in the wings: Handprints.

First, the expanded footprint. While the dialogue on global warming and its remedies focuses tightly on the carbon impact of our activities and energy systems—as measured by their carbon footprint—this very focus skews the conversation.

Technically a carbon footprint represents the total global warming impact of greenhouse gas emissions from a given activity, system or product. While carbon dioxide is the poster child for greenhouse gases, other such gases include methane, nitrous oxide and ozone (not to mention vaporized water or the condensed form, clouds). To create a standardized unit for greenhouse gas impacts all these varieties of emissions are converted into a carbon dioxide equivalent.
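
In case it helps to see the bookkeeping, the standard conversion is just a weighted sum (written here generically in LaTeX notation, not tied to any particular labeling scheme):

    \mathrm{CO_2e} \;=\; \sum_i m_i \times \mathrm{GWP}_i

where m_i is the mass of greenhouse gas i emitted and GWP_i is its global-warming potential relative to carbon dioxide (GWP = 1 for CO2 by definition; the factors for methane, nitrous oxide and the rest are taken from the IPCC tables for a chosen time horizon, usually 100 years).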

Reasonable, but this doesn't go far enough: why stop with carbon? There are several planet-wide systems that maintain life; climate change is but one of myriad ways human activity harms the planet. There's ecosystem destruction, dead lakes and oceans from acidification, loss of biodiversity, disruption of the nitrogen and phosphorus cycles, dangers from particulate load in air, water and soil, pollution from man-made chemicals, and more.

All these problems arise because virtually all human systems for energy, transportation, construction, industry and commerce are built on platforms that degrade those global systems. Calculating the overall ecological footprint of a given activity gives us a more fine-tuned metric for the rate at which we are depleting all the global systems that sustain life on the planet—not just the carbon cycle.

Such metrics emerged from the relatively new science of industrial ecology, an amalgam of hard sciences like physics, chemistry and biology, with practical applications like industrial engineering and industrial design. This eco-math helps us perceive impacts we are otherwise oblivious to. For instance, when industrial ecologists measure how much of the carbon footprint you remediate when you recycle the plastic container for a yogurt, the result is about five percent of the yogurt's carbon footprint. Most of the yogurt's carbon footprint results from the methane emitted by digesting cattle, not from the plastic container.

 Then there's the motivational problem. Evolution shaped the human brain to help our ancestors survive in an era when the salient threats were predators. Our perceptual system was not tuned to the macro and micro changes that signal threats to the planetary support system. When it comes to these threats we suffer from system blindness.

While footprints offer a cognitive workaround that can help us make decisions that favor the planet, they too often have an unfortunate psychological effect: Knowing the planetary damage we do can be depressing and demotivating. Negative messaging like this, research from fields like public health finds, leads many or most people to tune out. Better to give us something positive we can do than to shame or scare us.

 Enter the "Handprint," the sum total of all the ways we lower our footprint. To calculate a handprint, take the footprint as the baseline, and then go a step further: assess the amount ameliorated by the good things we do: recycle, reuse, bike not drive. Convince other people to do likewise. Or invent a replacement for a high-footprint technology, like the sytrofoam subsititute made from rice hulls and mycelium rather than petroleum.

The handprint calculation applies the same methodology as for footprints, but reframes the total as a positive value: Keep growing your handprint and you are steadily reducing your negative impacts on the planet. Make your handprint bigger than your footprint and you are sustaining the planet, not damaging it.
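
As a minimal sketch of that bookkeeping (with invented numbers, since real values would come from an industrial-ecology assessment), the comparison might look like this in Python:

    # Hypothetical figures, in kg of CO2-equivalent per year.
    footprint = 10_000.0  # assumed baseline negative impact

    handprint_items = {
        "recycling and reuse": 300.0,                     # assumed ameliorations
        "cycling instead of driving": 900.0,
        "persuading two friends to do the same": 1_800.0,
    }

    handprint = sum(handprint_items.values())
    net_impact = footprint - handprint

    print(f"footprint:  {footprint:8.0f} kg CO2e/yr")
    print(f"handprint:  {handprint:8.0f} kg CO2e/yr")
    print(f"net impact: {net_impact:8.0f} kg CO2e/yr")
    if handprint > footprint:
        print("handprint exceeds footprint: net positive for the planet")

The only design point is the sign convention: the footprint remains the negative ledger, the handprint the positive one, and the goal is for the second number to overtake the first.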

And such a positive spin, motivational research tells us, will be more likely to keep people moving toward the target.

susan_blackmore's picture
Psychologist; Visiting Professor, University of Plymouth; Author, Consciousness: An Introduction

Consciousness is a hot topic in neuroscience and some of the brightest researchers are hunting for the neural correlates of consciousness (NCCs)—but they will never find them. The implicit theory of consciousness underlying this quest is misguided and needs to be retired.

The idea of the NCCs is simple enough and intuitively tempting. If we believe in the 'hard problem of consciousness'—the mystery of how subjective experience arises from (or is created by or generated by) objective events in a brain—then it's easy to imagine that there must be a special place in the brain where this happens. Or if there is no special place then some kind of 'consciousness neuron', or process or pattern or series of connections. We may not have the first clue how any of these objective things could produce subjective experience but if we could identify which of them was responsible (so the thinking goes), then we would be one step closer to solving the mystery.

This sounds eminently sensible as it means taking the well-worn scientific route of starting with correlations before moving on to causal explanations. The trouble is it depends on a dualist—and ultimately unworkable—theory of consciousness. The underlying intuition is that consciousness is an added extra—something additional to and different from the physical processes on which it depends. Searching for the NCCs relies on this difference. On one side of the correlation you measure neural processes using EEG, fMRI or other kinds of brain scan; on the other you measure subjective experiences or 'consciousness itself'. But how?

A popular method is to use binocular rivalry or ambiguous figures which can be seen in either of two incompatible ways, such as a Necker cube that flips between two orientations. To find the NCCs you find out which version is being consciously perceived as the perception flips from one to the other and then correlate that with what is happening in the visual system. The problem is that the person has to tell you in words 'Now I am conscious of this', or 'Now I'm conscious of that'. They might instead press a lever or button, and other animals can do this too, but in every case you are measuring physical responses.

Is this capturing something called consciousness? Will it help us solve the mystery? No.

This method is really no different from any other correlational studies of brain function, such as correlating activity in the fusiform face area with seeing faces, or prefrontal cortex with certain kinds of decision-making. It correlates one type of physical measure with another. This is not useless research. It is very interesting to know, for example, where in the visual system neural activity changes when the reported visual experience flips. But discovering this does not tell us that this neural activity is the generator of something special called 'consciousness' or 'subjective experience' while everything else going on in the brain is 'unconscious'.

I can understand the temptation to think it is. Dualist thinking comes so naturally to us. We feel as though our conscious experiences are of a different order from the physical world. But this is the same intuition that leads to the hard problem seeming hard. It is the same intuition that produces the philosopher's zombie—a creature that is identical to me in every way except that it has no consciousness. It is the same intuition that leads people to write, apparently unproblematically, about brain processes being either conscious or unconscious.

Am I really denying this difference? Yes. Intuitively plausible as it is, this is a magic difference. Consciousness is not some weird and wonderful product of some brain processes but not others. Rather, it is an illusion constructed by a clever brain and body in a complex social world. We can speak, think, refer to ourselves as agents and so build up the false idea of a persisting self that has consciousness and free will.

We are tricked by an odd feature of consciousness. When I ask myself 'what am I conscious of now?' I can always find an answer. It's the trees outside the window, the sound of the wind, the problem I am worried about and cannot solve—or whatever seems most vivid at the time. This is what I mean by being conscious now, by having qualia. But what was happening a moment before I asked? When I look back I can use memories to claim that I was conscious of this or that and not conscious of something else, relying on the clarity, logic, consistency and other such features to decide.

This leads all too easily to the idea that while someone is awake they must always be conscious of something or other. And that leads along the slippery path to the idea that if we knew what to look for we could peer inside someone's brain and find out which processes were the conscious ones and which the unconscious ones. But this is all nonsense. All we will ever find is the neural correlates of thoughts, perceptions, memories and the verbal and attentional processes that lead us to think we are conscious.

When we finally have a better theory of consciousness to replace these popular delusions we will see that there is no hard problem, no magic difference and no NCCs. 

alun_anderson's picture
Senior Consultant (and former Editor-in-Chief and Publishing Director), New Scientist; Author, After the Ice

Back in the 1970s, the Nobel-prize winning ethologist Niko Tinbergen liked to trace out a graph; one line on it rose slowly over time, showing the rate of our genetic evolution, a second curved steeply upwards showing the rate at which he saw our culture changing. He would speculate whether the gap between the environment we had evolved in and the one in which we now found ourselves might be the root of a number of ills. Since then, such ideas have spread, in part because of the rise of evolutionary psychology.

In its strong form, evolutionary psychology holds that the human mind is like a Swiss Army knife, made up of many innate special-purpose modules, each shaped by natural selection to solve problems encountered during Homo's long pre-civilization life. With ninety-nine per cent of our evolutionary past spent as hunter-gatherers, it seems reasonable that modules which were adaptive in past circumstances still dominate our thinking. Thus women will naturally find athletic men—the kind who would be good hunters—to be especially attractive; if we had instead spent the Pleistocene delving the earth like Tolkien's dwarves then short, barrel-chested men would now appeal. In the popular imagination, evolutionary psychology has cast us as Stone Age thinkers in modern times, our brains not wired to cope with offices, schools, courts, writing and new technology.

It's a beguiling idea, suggesting that somewhere out there is a more natural world in which we would feel truly at home. But there is little evidence for that idea, or for the claim that the whole of our psychology is shaped so rigidly by our Pleistocene past. It is time for it to retire and for us to think more widely.

New ideas and data from the cognitive sciences, comparative animal behavior and evolutionary developmental biology suggest we should not compartmentalize culture and human nature so sharply. Rather, culture and social processes shape brains that in turn shape culture and are transmitted onwards.

Reading provides a nice example. The ability to pass on and accumulate information has transformed our world, but written languages appeared only in the past 5,000 years, not long enough for us to have evolved an innate "reading module". Still, if you look inside the brain of a literate person, it will light up quite differently from that of an illiterate one, not just when reading but also when listening to spoken words. During the social process of being taught to read, infant brains are remodeled and new pathways created. If we didn't know this cognitive capacity was produced by social learning we'd likely think of it as a genetically-inherited system. But it is not: our brains and minds can be transformed through the acquisition of cognitive tools which we are then able to pass on again and again.

Of course, it is reasonable to assume that those cognitive tools have to fit nicely with how our brain works, just as a physical tool has to fit well in our hands. But as a species we seem to possess remarkable powers to keep building and rebuilding our cognitive tool kit through interaction with others. It is surprising how similar humans and chimpanzees are when they are infants—in skills like numeracy and behavior reading—and yet so different when they are adult. Beyond a certain age, humans are propelled along a different developmental trajectory, in part because they are immensely socially motivated to interact with others, which chimpanzees are not. Evolutionary developmental psychology has thus become a hot research topic, as it will hold the key to the way social processes unfold minds.

Culture and the social world shape our brains and give us new cognitive capacities that we can pass along, evolving culture as we go. We shouldn't think of the cultural world as separate and estranged from our biological selves, but something that shapes us, and is in turn transmitted by us. Such a view suggests that rather than being alienated hunter-gatherers lost in the modern world, we are in flux and still may have only a narrow conception of what humans could be.

 

marcelo_gleiser's picture
Appleton Professor of Natural Philosophy, Dartmouth College; Author, The Island of Knowledge

There! I said it! The venerable notion of Unification needs to go. I don't mean the smaller unifications that we scientists search for all the time, connecting as few principles with as many natural phenomena as possible. This sort of scientific economy is a major foundational stone for what we do: we search and we simplify. Over the centuries, scientists have done wonders following this motto. Newton's law of universal gravity, the laws of thermodynamics, electromagnetism, universal behavior in phase transitions…

The trouble starts when we take this idea too far and search for the über-unification, the theory of everything, the arch-reductionist notion that all forces of Nature are merely manifestations of a single force. This is the idea that needs to go. And I say this with a heavy heart, given that my early career aspirations and formative years were very much fueled by the impulse to unify it all.

The idea of unification is quite old, as old as Western philosophy. Thales, the first pre-Socratic philosopher, already posited that "all is water," thus dreaming up a single material principle to describe all of Nature. Plato proposed elusive geometrical forms as the archetypal structures behind all there is. Math became equated with beauty and beauty with truth. From there, the highest of post-Plato aspirations was to erect a purely mathematical explanation for all there is, the all-encompassing cosmic blueprint, the masterwork of a supreme intelligence. Needless to say, the whole thing was always about our intelligence, even if often blamed on some foggy "Mind of God" metaphor.

We explain the world the way we think about it. There is no way out of our minds.

The impulse to unify it all runs deep in the souls of mathematicians and theoretical physicists, from the Langlands program to superstring theory. But here is the rub: pure mathematics is not physics. The power of mathematics comes precisely from its detachment from physical reality. A mathematician can create any universe she wants, and play all sorts of games with it. A physicist can't, for his job is to describe Nature as we perceive it. Nevertheless, the unification game has been an integral part of physics since Galileo, and has produced what it should: approximate unifications. Yes, even the most sacred of our unifications are only approximations. Take, for example, electromagnetism. The equations describing electricity and magnetism are only perfectly symmetric in the absence of any sources of charge or magnetism, that is, in empty space. Or take the famous (and beautiful) Standard Model of particle physics, based on the "unification" of electromagnetism and the weak nuclear force. Here again, we don’t have a real unification since the theory retains two forces all along. (In more technical jargon, there are two coupling constants and two gauge groups.) A real unification, such as the conjectured Grand Unification between the strong, the weak, and the electromagnetic forces, proposed 40 years ago, remains unfulfilled.
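
To make the electromagnetism example concrete (in Gaussian units, the standard textbook form): the source-free Maxwell equations read

    \nabla \cdot \mathbf{E} = 0, \qquad \nabla \cdot \mathbf{B} = 0, \qquad
    \nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{B} = +\frac{1}{c}\frac{\partial \mathbf{E}}{\partial t},

and they are unchanged under the swap E → B, B → −E. Put the sources back (∇·E = 4πρ and ∇×B = (1/c)∂E/∂t + (4π/c)J, with no magnetic counterparts) and the symmetry is broken, which is the sense in which the unification is exact only in empty space.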

So, what's going on? Why do so many insist on finding the One in Nature while Nature keeps telling us that it's really about the many?

For one thing, the scientific impulse to unify is cryptoreligious. The West has bathed in monotheism for thousands of years, and even in polytheistic cultures there is always an alpha-God in charge (Zeus, Ra, Para-Brahman…) For another, there is something deeply appealing in equating all of Nature to a single creative principle: to decipher the "mind of God" is to be special, is to answer to a higher calling. Pure mathematicians who believe in the reality of mathematical truths are monks of a secret order, open only to the initiated. In the case of high-energy physics, all unification theories rely on sophisticated mathematics related to pure geometric structures: the belief is that Nature's ultimate code exists in the ethereal world of mathematical truths and that we can decipher it.

Recent experimental data have been devastating to such beliefs. No trace of supersymmetric particles, of extra dimensions, or of dark matter particles of any sort, all long-awaited signatures of unification physics. Maybe something will come up: to find we must search. The trouble with unification in high-energy physics is that you can always push it beyond the experimental range. "The Large Hadron Collider got to 7 TeV and found nothing? No problem! Who said Nature should opt for the simplest versions of unification? Maybe it's all happening at much higher energies, well beyond its reach."

There is nothing wrong with this kind of position. You can believe it until you die and die happy. Or you can conclude that what we do best is to construct approximate models of how Nature works and that the symmetries we find are only descriptions of what really goes on.

Perfection is too hard a burden to impose on Nature.

People often see this kind of argument as defeatist, or as coming from someone who got frustrated and gave up. (As in "he lost his faith.") Big mistake. To search for simplicity is essential to what scientists do. It's what I do. There are essential organizing principles in Nature, and the laws we find are excellent ways to describe them. But the laws are many, not one. We are successful pattern-seeking rational mammals. That, alone, is cause for celebration. However, let us not confuse our descriptions and models with reality. We may hold perfection in our mind's eye as a sort of ethereal muse. Meanwhile, Nature is out there, doing its thing. That we manage to catch a glimpse of its inner workings is nothing short of wonderful. And that should be good enough.

martin_nowak's picture
Professor of Biology and Mathematics, Harvard University; Co-author, SuperCooperators

This year marks the 50th anniversary of the introduction of inclusive fitness, the highly influential idea which supposedly explains how insects evolve complex societies, and how natural selection can lead to altruism among relatives.

This mainstay of sociobiology is based on the 1964 work of the English evolutionary biologist William Hamilton, who offered the following definition:

Inclusive fitness may be imagined as the personal fitness which an individual actually expresses in its production of adult offspring as it becomes after it has been first stripped and then augmented in a certain way. It is stripped of all components which can be considered as due to the individual’s social environment, leaving the fitness which he would express if not exposed to any of the harms or benefits of that environment. This quantity is then augmented by certain fractions of the quantities of harm and benefit which the individual himself causes to the fitnesses of his neighbours. The fractions in question are simply the coefficients of relationship appropriate to the neighbours whom he affects: unity for clonal individuals, one-half for sibs, one-quarter for half-sibs, one-eighth for cousins,… and finally zero for all neighbours whose relationship can be considered negligibly small. 

Modern formulations of inclusive fitness theory use different relatedness coefficients but all other aspects of Hamilton's definition remain intact.

Leaving aside the inelegance of Hamilton's original formulation, there is a basic problem with inclusive fitness: you can prove mathematically that inclusive fitness does not apply to the vast majority of evolutionary processes. The reason is simple. Fitness effects cannot in general be written as the sum of components caused by pairwise interactions. This loss of additivity typically occurs when the outcome of a social interaction depends on the strategies of more than one individual. All mathematically meaningful approaches to inclusive fitness realize these limitations. Thus, inclusive fitness becomes a very particular way to calculate evolution: it works in some cases, but not in general. Moreover, if an inclusive fitness calculation can be performed, it gives the same answer as a standard calculation of fitness and natural selection. The latter approach is usually simple and direct.
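
A standard textbook-style illustration of the additivity point (not Hamilton's notation, and not any particular modern formulation): write s = 1 for a cooperator and s = 0 for a defector, and compare the payoffs

    w_i = w_0 - c\,s_i + b\,s_j \qquad \text{(additive)}
    w_i = w_0 - c\,s_i + b\,s_j + d\,s_i s_j \qquad \text{(synergistic, } d \neq 0\text{)}

In the additive case each interaction contributes a fixed cost c and benefit b, and the familiar inclusive-fitness bookkeeping goes through. In the synergistic case the d-term depends jointly on the strategies of both players, so fitness is no longer a sum of contributions attributable to individuals one at a time, and constant cost and benefit coefficients no longer exist.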

These mathematical facts make uncomfortable reading for overly enthusiastic proponents of inclusive fitness. In the most extreme cases, they come over as followers of a cult who believe that inclusive fitness is an important extension of the theory of evolution and "always true." In order to maintain the idea that inclusive fitness can always be calculated, a method has been devised that casts any evolutionary change in terms of virtual cost and benefit parameters, which appear as regression coefficients in a statistical analysis. The problem with adopting this statistical approach is that the resulting cost and benefit parameters are meaningless quantities in the sense that they do not explain what is going on in a theoretical model or in empirical data.

Why do we have inclusive fitness? Hamilton's original goal was to find a quantity that is maximized by evolution. This view is attractive: winners of the evolutionary process should be individuals with the highest inclusive fitness. But such an attempt is very much in the spirit of the linear thinking of the 1960s before the likes of Robert May showed us how nonlinear phenomena apply to ecology, population genetics, and evolutionary game theory. From the 1970s onwards we actually understood that evolution does not permit a single quantity that is always maximized. This fact still has to sink in with many in the inclusive fitness community.

What shall we use instead of inclusive fitness? Inclusive fitness seeks to explain social evolution on the level of the individual. For most evolutionary processes, however, the individual is the wrong unit of analysis, because the population structure is complicated and the same genes are present in different types of individuals. Therefore, we have to go to the level of genes. A straightforward approach is to calculate how natural selection changes the frequency of genetic mutations that affect social behavior. These calculations, which do not use inclusive fitness, can identify the key parameters that need to be measured to improve understanding. On the level of genes there is no inclusive fitness.

We have a strong and meaningful mathematical theory of evolution. Natural selection, mutation and population structure are concepts that can be clearly investigated with mathematical formalism. Everyone who understands the mathematical theory of evolution realizes that there is no problem that would require the calculation of inclusive fitness. Calculating inclusive fitness is an optional exercise, one that is best done when a problem is already completely understood. Then in some cases, inclusive fitness can be used to re-derive the same result.

To be fair, over the years inclusive fitness has stimulated much empirical and theoretical work, some of which has been useful. It has induced a discussion of cost, benefit and relatedness in sociobiology, which has some merit. But the dominant and unfortunate impact has been the suppression of meaningful mathematical theories in wide areas of sociobiology.

Contrary to what is often claimed, there exists no empirical test of inclusive fitness theory; nobody has ever performed an actual inclusive fitness calculation for a real population. Inclusive fitness was originally understood as a crude heuristic that can guide intuition in some cases, but not in general. It is only in recent years that inclusive fitness has been elevated—mostly by mediocre theoreticians—to a religious belief, which is universal, unconstrained and always true. Understanding the limitations of inclusive fitness now gives us the opportunity to develop mathematical descriptions of key phenomena in social evolution. It is time to abandon inclusive fitness and focus on a meaningful interaction between theory and experiment in sociobiology.

david_deutsch's picture
Physicist, University of Oxford; Author, The Beginning of Infinity; Recipient, Edge Computation Science Prize

The term "quantum jump has entered everyday language as a metaphor for a large, discontinuous change. It has also become widespread in the vast but sadly repetitive landscape of pseudo-science and mysticism.

The term comes from physics, and is indeed used by physicists (though rarely in published papers). It evokes the fact that mutually distinguishable states in quantum physical systems are always discrete. Yet there is no such phenomenon in quantum physics as a "quantum jump": under the laws of quantum theory, change is always continuous in both space and time. OK, maybe some physicists still subscribe to an exception to that, namely the so-called "collapse of the wave function" when an object is observed by a conscious observer. But that nonsense is not the nonsense I am referring to here. I'm referring to misconceptions even about the sub-microscopic world—like: "when an electron in a higher-energy state undergoes a transition to a lower energy level, emitting a photon, it quantum-jumps from one discrete orbit to another without passing through intermediate states".

Even worse: "when an electron in a tunnel diode approaches the barrier that it does not have enough energy to penetrate (so that under classical physics it would bounce off), the quantum phenomenon of tunneling allows it to appear mysteriously on the other side without ever having been in the region where it would have negative kinetic energy".

The truth is that the electron in such situations does not have a single energy, or position, but a range of energies and positions, and the allowed range itself can change with time. If the whole range of energies of a tunneling particle were below that required to surmount the barrier, it would indeed bounce off. And if an electron in an atom really were at a discrete energy level, and nothing intervened to change that, then it would never make a transition to any other energy.
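
One way to put this in standard notation (a generic textbook formulation, offered only as illustration): the electron's state is a superposition

    \psi(x,t) = \sum_n c_n(t)\,\varphi_n(x), \qquad i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi,

where |c_n(t)|^2 is the weight the state places on energy E_n and |\psi(x,t)|^2 the weight on position x. Both distributions evolve continuously under the Schrödinger equation; when an atom couples to the electromagnetic field, the weights flow smoothly from one level toward another, so nothing ever jumps.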

Quantum jumps are an instance of what used to be called "action at a distance": something at one location having an effect, not mediated by anything physical, at another location. Newton called this "so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it". And the error has analogues in fields quite distant from classical and quantum physics. For example in political philosophy the "quantum jump" is called revolution, and the absurd error is that progress can be made by violently sweeping away existing political institutions and starting from scratch. In the philosophy of science it is Thomas Kuhn's idea that science proceeds via revolutions—i.e. victories of one faction over another, both of which are unable to alter their respective "paradigms" rationally. In biology the "quantum jump" is called saltation: the appearance of a new adaptation from one generation to the next, and the absurd error is called saltationism.

Newton was wrong that there is a maximum size of error that competent people can fall into, but right that this particular one is severe. All those versions of it are mistaken for a single reason: they all require information of the requisite kind to appear from nowhere. In reality, the space on the far side of the barrier cannot "know" that an electron, and not a proton or a bison, must appear there, until some physical change, originating at the electron, reaches it. The same holds when it is not a spatial gap but a directly informational one: Political institutions, and biological adaptations, instantiate information—knowledge—about how a complex system can better meet the challenges facing it, and knowledge can be created only by processes of piecemeal variation and selection. And Kuhn's vision cannot explain how science has in fact been delivering knowledge about physical reality at an ever-accelerating rate.

Quantum jumps in all these fields represent a retreat from explanation, and therefore in effect an appeal to the supernatural. They all have the logic of the Sidney Harris cartoon "Then a Miracle Occurs" (depicting a mathematician with a gap in his proof). As Richard Dawkins puts it, "saltationism is creationism". And in all cases the reality that fills the gap, the idea that truly explains the phenomenon, is much more interesting and delightful than any faith in its mystery could be.

samuel_arbesman's picture
Complexity Scientist; Scientist in Residence at Lux Capital; Author, Overcomplicated

Centuries ago, when science was young, it was possible to make contributions to scientific knowledge through simple experiments. You could be a hobbyist or a "gentleman scientist" and discover something fundamental about the world around us.

But in the past several decades, science has gotten bigger. In this era of Big Science, we need large teams of scientists working together to make discoveries in everything from the life sciences to high-energy physics. And we need lots of money to do this. The era of the lone scientist doing small-scale science seems to be over.

And that is often the narrative we hear. When the Higgs boson was found, it wasn't discovered through an elegant experiment using an apparatus developed in a garage. It was found using a massive technological construction and thousands of scientists working together.

So is small-scale science over? While the trends clearly point to the advent of team science, small and clever science—the realm of the tiny budget or the elegant experiment, or sometimes even the hobbyist—is by no means over. To be clear, small science is not necessarily the lone underdog working against the establishment. More often it is simply one or two underfunded scientists doing their best. But it seems that they can still survive even in this modern era of big science. For example, several years ago a paleontology graduate student cleared a dinosaur of cannibalism charges with a discovery that began with a very simple observation: looking at one of the fossil casts on the wall of the American Museum of Natural History's subway station. Or take the scientists who examined the space of possible ways to tie a necktie, and whose research was published in Nature. Little science is still possible.

Though these examples might sound somewhat trivial, in fact, small-scale science can also have a big impact. Peter Mitchell was awarded a Nobel Prize for his work in biochemistry conducted at his own small private research institute with only a handful of people. Support for this small lab included funds from his family's money—making Mitchell a modern-day equivalent of the gentleman scientist. Another Nobel Prize was awarded for work on "split brain" patients—those with the connection between their two hemispheres severed—that led to novel insights into the brain's function. Part of this work consisted of experiments that are so simple—though exceedingly clever—that the Nobel Prize website actually has a game just like the original experiments online, where you can play at home.

You can even still do science on the cheap. Several decades ago, Stanley Milgram measured the well-known Six Degrees of Separation using little more than postcards. While science has become bigger since then, in some ways it has become even easier for a scientist operating at a small scale to conduct large-scale research: thanks to massive computational advances and freely available data (not to mention easier data collection online), any scientist can now do big science cheaply. Technology has allowed research scientists to leverage a tiny budget in astonishing ways. And each of us can now easily contribute to science as an amateur, through the growing prevalence of citizen science, in which the general public can help—often in a small, incremental way—to collect data or otherwise advance the scientific enterprise. From categorizing galaxies and plankton to figuring out how proteins fold, everyone can now be a part of the scientific process.

And while mathematics might still be the domain of the singular genius, even it has a place for the hobbyist or the amateur. For example, in the mid-1990s, two high school students discovered a novel additional solution to a problem that Euclid posed and solved thousands of years ago, and for which no other method had been found in the intervening millennia. And there is even an entire domain known as recreational mathematics.

Some of these examples might seem to be the rare exceptions that prove the rule of Big Science, but I think they demonstrate something far more optimistic: that small science can flourish, even with all of the trends that show science is getting bigger and bigger. Creative experiments and the right questions are just as important as ample funding and infrastructure, and technology is making this work easier than ever. Little science can still prosper.

gregory_benford's picture
Emeritus Professor of Physics and Astronomy, UC-Irvine; Novelist, The Berlin Project

Many believe this seeming axiom, that beauty leads to descriptive power. Our experience seems to show this, mostly from the successes of physics. There is some truth to it, but also some illusion.

There is a ready explanation of how a distant primate came into the beginnings of a mathematical appreciation of nature. Hunting, that primate found it easier to fling rocks or spears at fleeing prey than chase them down. Some of his fellows found the curve of a flung stone difficult to achieve, but he did not. He found the parabola beautiful and simpler to achieve, because that pleasurable sensation provided evolutionary feedback. Over eons this led to an animal that invented complex geometries, calculus and beyond.

This is a huge leap, of course, an evolutionary overshoot. We seem to be smarter than needed simply to survive in the natural world—earlier hominids did, even spreading over most of the planet. We did go through some population bottlenecks in our past, perhaps as recent as about 130,000 years ago. Perhaps those recent eras of intense selection explain why we have such vastly disproportionate mental abilities.

Still there remain, beyond evolutionary arguments, two mysteries in math: whence its amazing ability to describe nature, and why its intrinsic beauty and elegance?

Parabolas are elegant, true. They describe how hard bodies fly through the air under gravity. But the motion of a falling leaf, on the other hand, demands several differential equations taking into account wind velocity, gravity, geometry of the leaf, fluid flow and much else. A cruising airplane is even harder to describe. Neither case is elegant or simple.

So the utility of math stands separately from its intrinsic beauty. Mathematics is most elegant when we simplify the system considered. So with a baseball we account for the initial acceleration and angle, the air and gravity, and out comes a parabola as a good approximation. Not so the leaf.
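
To see what that 'good approximation' involves (a standard derivation, included only as illustration): in the drag-free idealization, a ball launched at speed v_0 and angle θ follows

    y = x\tan\theta - \frac{g\,x^{2}}{2\,v_0^{2}\cos^{2}\theta},

a parabola. For a dense baseball, air resistance and spin add modest corrections; for the leaf, the coupling to the surrounding air dominates, and no such closed form exists.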

And that parabola? We see its simple beauty far too slowly to be of any use in real time. Our appreciation comes afterward. To actually make a parabola work for us in baseball, we learn how to throw. Such learning builds on hard-wired neuronal networks in the brain, selected for over evolutionary time, since knowing how to throw a missile is adaptive. A human pitcher can more subtly affect the trajectory by throwing curves, knuckleballs and so on. Those are certainly more complex trajectories and probably less elegant, but still well within the capability of our nervous systems. But for well-learned actions, all that processing goes on at unconscious levels. In fact, too much conscious attention to the details of action can interfere. Athletes know this—it's the art of staying in the zone. Probably that zone is where the mind runs on its sense of rightness, beauty, economy of effort.

Further, elegance is hard to define, as are most aesthetic judgments. Richard Feynman once noted that it is simple to make known laws more elegant, say by starting with Newton's force law, F=ma, then defining R=F – ma. The equation R=0 is visually more elegant, but contains no more information. The Lagrangian method in dynamics is elegant—just write the expression for kinetic energy minus the potential energy—but one must know a fundamental theory to do so; the elegance of the Lagrangian comes later, as a mathematical aid.

More recently, it is hard to devise an elegant cosmological theory that yields directly the small cosmological constant we observe. Some solve this problem by invoking the Anthropic Principle, and thus multiverses of some sort. But this ventures near a violation of another form of the elegance standard, Occam's Razor. Imagining a vast sea of multiverses, with us arising in one where conditions produce intelligent beings, seems excessive to many. It invokes a plenitude we can never see. The scientific test of multiverse cosmology is whether it leads to predictable consequences.

Can multiverses converse with each other? That would be a way of verifying the basis of such theories. Most multiverse models seem to say there is no possible communication between the infinitude of multiverses. Brane theory, though, comes from models where no force law operates between branes, except gravitation. Perhaps someday an instrument like LIGO, the Laser Interferometer Gravitational-Wave Observatory, can detect such waves from branes. But is it elegant to shift confirmation onto some far future technology? Sweeping dust under a rug seems inelegant to me.

Evolution doesn't care about beauty and elegance, just utility. Beauty does play a secondary role, though. The male who best throws the spear to bring down prey is appreciated and may have a choice of many mates. It just so happens that the effective and now beautiful act of spear throwing is describable with fairly simple math. We make the short step to say the underlying math is also beautiful.

Math's utility implies that for a suitably simple model of the universe there should be a fairly simple mathematical theory of everything, something like general relativity, describable by a one-line equation. Searching for it on that intuitive basis may lead us to such a theory. I suspect a model that captures the full complexity of the universe, though, would take up a lot more than one line.

When we say a math model is elegant and beautiful, we express the limits of our own minds. It is not a deep description of the world. In the end, simple models are much easier to comprehend than complex ones. We cannot expect that the path of elegance will always guarantee we are on the right track.

donald_d_hoffman's picture
Cognitive Scientist, UC, Irvine; Author, The Case Against Reality

Those of our predecessors who perceived the world more accurately enjoyed a competitive advantage over their less-fortunate peers. They were thus more likely to raise children and to become our ancestors. We are the offspring of those who perceived more truly, and we can be confident that our perceptions are, in the normal case, reasonably accurate. There are of course endogenous limits. We can, for instance, see light only in a narrow window of wavelengths between roughly 400 and 700 nanometers, and hear sound only in a narrow window of frequencies between 20 and 20,000 Hertz. Moreover we are prone, on occasion, to have perceptual illusions. But with these provisos noted, it is fair to conclude on evolutionary grounds that our perceptions are, in general, reliable guides to reality.

This is the consensus of researchers studying perception via brain imaging, computational modeling and psychophysical experiments. It is mentioned in passing in many professional publications, and stated as fact in standard textbooks.

But it gets evolution wrong. Fitness and truth are distinct concepts in evolutionary theory. To specify a fitness function one must specify not just the state of the world but also, inter alia, a particular organism, a particular state of that organism, and a particular action. Dark chocolates can kill cats, but are a fitting gift from a suitor on Valentine's Day. 

Monte Carlo simulations using evolutionary game theory, with a wide range of fitness functions and a wide range of randomly created environments, find that truer perceptions are routinely driven to extinction by perceptions that are tuned to the relevant fitness functions. The extension of these simulations to evolutionary graphs is in progress, and the same result is expected. Simulations with genetic algorithms find that truth never gets on the stage to have a chance to go extinct.
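
To give a flavour of what such simulations look like, here is a deliberately stripped-down toy model in Python, offered as an illustration of the general idea rather than the author's actual code or fitness functions. It assumes a bell-shaped payoff over resource quantity (so that more is not always fitter) and compares a strategy that perceives true quantities with one that perceives only payoffs:

    import random

    def fitness(quantity):
        # Assumed bell-shaped payoff: intermediate quantities are best.
        return max(0.0, 1.0 - 2.0 * abs(quantity - 0.5))

    def choose_truth(options):
        # "Truth-tuned" perception: sees the true quantities and, as such toy
        # models usually assume, acts as if more were better.
        return max(options)

    def choose_fitness(options):
        # "Fitness-tuned" perception: sees only the payoffs and takes the best.
        return max(options, key=fitness)

    def simulate(rounds=100_000, seed=1):
        random.seed(seed)
        truth_total = fitness_total = 0.0
        for _ in range(rounds):
            options = [random.random() for _ in range(3)]  # e.g. three territories
            truth_total += fitness(choose_truth(options))
            fitness_total += fitness(choose_fitness(options))
        return truth_total / rounds, fitness_total / rounds

    if __name__ == "__main__":
        truth_avg, fitness_avg = simulate()
        print(f"average payoff, truth-tuned perception:   {truth_avg:.3f}")
        print(f"average payoff, fitness-tuned perception: {fitness_avg:.3f}")

Run repeatedly, the payoff-tuned strategy earns the higher average return, which is the qualitative point; the full evolutionary-game versions replace this simple averaging with explicit competition between replicating populations.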

Perceptions tuned to fitness are typically far less complex than those tuned to truth. They require less time and resources to compute, and are thus advantageous in environments where swift action is critical. But even apart from considerations of time and complexity, true perceptions go extinct simply because natural selection selects for fitness not truth. 

We must take our perceptions seriously. They have been shaped by natural selection to guide adaptive behaviors and to keep us alive long enough to reproduce. We should avoid cliffs and snakes. But we must not take our perceptions literally. They are not the truth; they are simply a species-specific guide to behavior. 

Observation is the empirical foundation of science. The predicates of this foundation, including space, time, physical objects and causality, are a species-specific adaptation, not an insight. Thus this view of perception has implications for fields beyond perceptual science, including physics, neuroscience and the philosophy of science. The old assumption that fitter perceptions are truer perceptions is deeply woven into our conception of science. The passing of this assumption will not be marked with a back-page obituary, but heralded as a regime change.

 

seth_lloyd's picture
Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

I know. The universe has been around for 13.8 billion years and is likely to survive for another hundred billion years or more. Plus, where would the universe retire to? Florida isn't big enough. But it is time to retire the twenty-five hundred year old scientific idea of the universe as the single volume of space and time that contains everything. Twenty-first century cosmology strongly suggests that what we see in the cosmos—stars, galaxies, space and time since the big bang—does not encompass all of reality. Cosmos, buy the condo.

What is the universe, anyway? To test your knowledge of the universe, please complete the following sentence. The universe

(a) consists of all things visible and invisible—what is, has been, and will be.

(b) began 13.8 billion years ago in a giant explosion called the big bang, and encompasses all planets, stars, galaxies, space and time.

(c) was licked out of the salty rim of the primordial fiery pit by the tongue of a giant cow.

(d) All of the above.

(Correct answer below.)

The idea of the universe as an observed and measured thing has persisted for thousands of years. Those observations and measurements have been so successful that today we know more about the origin of the universe than we do about the origin of life on earth. But the success of observational cosmology has brought us to a point where it is no longer possible to identify the universe—in the sense of answer (a) above—with the observed cosmos—answer (b). The same observations that establish the detailed history of the universe imply that the observed cosmos is a vanishingly small fraction of an infinite universe. The finite amount of time since the big bang means that our observations only extend a little more than ten billion light years from earth. Beyond the horizon of our observation lies more of the same, space filled with galaxies stretching on forever. No matter how long the universe exists, we will have access to only a finite part, while an infinite amount of universe remains beyond our grasp. All but an infinitesimal fraction of the universe is unknowable.

That's a blow. The scientific concept, universe = observable universe, has thrown in the towel. Perhaps that's OK. What's not to like about a universe that encompasses infinite unknowable space? But the hits keep coming. As cosmologists delve deeper into the past, they find more and more clues that, for better or worse, there is more out there than just the infinite space beyond our horizon. Extrapolating backwards before the big bang, cosmologists have identified an epoch called inflation, in which the universe doubled in size many times over a tiny fraction of a second. The vast majority of spacetime consists of this rapidly expanding stuff. Our own universe, infinite as it is, is just a 'bubble' that has nucleated in this inflationary sea.

It gets worse. The inflationary sea contains an infinity of other bubbles, each an infinite universe in its own right. In different bubbles the laws of physics can take different forms. Somewhere out there in another bubble universe, the electron has a different mass. In another bubble, electrons don't exist. Because it consists not of one cosmos but of many, the multi-bubble universe is often called a multiverse. The promiscuous nature of the multiverse may be unappealing (William James, who coined the word, called the multiverse a 'harlot'), but it is hard to eliminate. As a final insult to unity, the laws of quantum mechanics indicate that the universe is continually splitting into multiple histories or 'worlds,' out of which the world that we experience is only one. The other worlds contain the events that didn’t happen in our world.

After a two-millennium run, the universe as observable cosmos is kaput. Beyond what we can see, an infinite array of galaxies exists. Beyond that infinite array, an infinite number of bubble universes bounce and pop in the inflationary sea. Closer by, but utterly inaccessible, the many worlds of quantum mechanics branch and propagate. MIT cosmologist Max Tegmark calls these three kinds of proliferating realities the type I, type II, and type III multiverses. Where will it all end? Somehow, a single, accessible universe seemed more dignified.

There is hope, however. Multiplicity itself represents a kind of unity. We now know that the universe contains more things than we can ever see, hear or touch. Rather than regarding the multiplicity of physical realities as a problem, let's take it as an opportunity.

Suppose that everything that could exist, does exist. The multiverse is not a bug, but a feature. We have to be careful: the set of everything that could exist belongs to the realm of metaphysics rather than of physics. Tegmark and I have shown that with a minor restriction, however, we can pull back from the metaphysical edge. Suppose that the physical universe contains all things that are locally finite, in the sense that any finite piece of the thing can be described by a finite amount of information. The set of locally finite things is mathematically well-defined: it consists of things whose behavior can be simulated on a computer (more specifically, on a quantum computer). Because they are locally finite, the universe that we observe and the various multiverses are all contained within this computational universe. As is, somewhere, a giant cow.

Answer to quiz: (c)

nicholas_g_carr's picture
Author, Utopia is Creepy

We live anecdotally, proceeding from birth to death through a series of incidents, but scientists can be quick to dismiss the value of anecdotes. "Anecdotal" has become something of a curse word, at least when applied to research and other explorations of the real. A personal story, in this view, is a distraction or a distortion, something that gets in the way of a broader, statistically rigorous analysis of a large set of observations or a big pile of data. But as this year's Edge question makes clear, the line between the objective and the subjective falls short of the Euclidean ideal. It's negotiable. The empirical, if it's to provide anything like a full picture, needs to make room for both the statistical and the anecdotal.

The danger in scorning the anecdotal is that science gets too far removed from the actual experience of life, that it loses sight of the fact that mathematical averages and other such measures are always abstractions. Some prominent physicists have recently questioned the need for philosophy, implying that it has been rendered obsolete by scientific inquiry. I wonder if that opinion isn't a symptom of anti-anecdotalism. Philosophers, poets, artists: their raw material includes the anecdote, and they remain, even more so than scientists, our best guides to what it means to exist.
 

leo_m_chalupa's picture
Neurobiologist; Professor of Pharmacology and Physiology, George Washington University

Brain plasticity refers to the fact that neurons are capable of changing their structural and functional properties with experience. That seems hardly surprising since every part of the body changes with age. What is special about brain plasticity (but not unique to this organ) is that the changes are mediated by specific events that are in some sense adaptive. The field of brain plasticity primarily derives from the pioneering studies of Torsten Wiesel and David Hubel who showed that depriving one eye of normal visual input during early development resulted in a loss of functional connections of that eye with the visual cortex, while the connections of the eye not deprived of visual input expanded.

These studies convincingly demonstrated that early brain connections are not hard-wired but can be modified by early experience; hence they are plastic. For this work and related studies, done in the 1960s, Wiesel and Hubel received the Nobel Prize in 1981. Since that time there have been thousands of studies showing a wide diversity of neuronal changes in virtually every region of the brain, ranging from the molecular to the systems level, in young, adult and aged subjects. As a result, by the end of the 20th century our view of the brain had evolved from hard-wired to seemingly ever-changeable. Today plasticity is one of the most commonly used words in the neuroscience literature. Indeed, I have employed this term many times in my own research articles and used it in the titles of some of my edited books. So what's wrong with that, you may ask?

For one thing, the widespread application of "brain plasticity" to virtually every type of change in neuronal structure and function has rendered the term largely meaningless. When virtually any change in neurons is characterized as plasticity, the term encompasses so much that it no longer conveys any useful information. It is also the case that many studies invoke brain plasticity as the underlying cause of modified behavioral states without having any direct evidence for neuronal changes. Particularly egregious are the studies showing improvements in performance on some particular task with practice. That practice improves performance was noted long before anything was known about the brain. Does it really add anything to claim that such improvements demonstrate a remarkable degree of brain plasticity? The word "remarkable" is often used to describe practice effects in seniors, as if those old enough to receive social security were incapable of showing enhanced performance with training.

Studies of this type have led to the launch of a growing brain-training industry. Many of these programs are focused on the very young. Particularly popular in past years was the "Mozart effect," which led parents who had no interest in classical music themselves to play pieces by Mozart to their infants continuously. This movement seems to have abated, replaced by a plethora of games that are supposed to improve the brains of children of all ages. But the largest growth in the brain plasticity industry has focused on the aging brain. Given the concerns that most of us have about memory loss and decreasing cognitive abilities with age, this is understandable. There are large profits to be made, as evidenced by the number of companies that have proliferated in this sector in recent years.

There is of course nothing wrong with having children or seniors engage in activities that challenge their cognitive functions. In fact, there may be some genuine benefits in doing so. Certainly undergoing such training is preferable to watching television for many hours each day. It is also the case that any and all changes in performance reflect some underlying changes in the brain. How could it be otherwise, since the brain controls all behaviors? But as yet, we do not know what occurs in the brain when performance improves on a specific video game, nor do we understand how to make such changes long lasting and generalizable to diverse cognitive states.  Terming such efforts brain training or enhanced brain plasticity is often just hype intended to sell a product. This does not mean that the so-called brain exercises should be abandoned. They are unlikely to cause harm and may even do some good. But please refrain from invoking brain plasticity, remarkable or otherwise, to explain the resulting improvements.

thomas_metzinger's picture
Professor of Theoretical Philosophy, Johannes Gutenberg-Universität Mainz; Adjunct Fellow, Frankfurt Institute for Advanced Study; Author, The Ego Tunnel

Thinking is not something you do. Most of the time it is something that happens to you. Cutting-edge research on the phenomenon of Mind Wandering now clearly shows that almost all of us, for more than two thirds of our conscious lifetime, are not in control of our conscious thought processes.

Western culture, traditional philosophy of mind and even cognitive neuroscience have been deeply influenced by the Myth of Cognitive Agency. It is the myth of the Cartesian Ego, the active thinker of thoughts, the epistemic subject that acts—mentally, rationally, in a goal-directed manner—and that always has the capacity to terminate or suspend its own cognitive processing at will. It is the theory that conscious thought is a personal-level process, something that by necessity has to be ascribed to you, the person as a whole. This theory has now been empirically refuted. As it now turns out, most of our conscious thoughts are actually the product of subpersonal processes, like breathing or the peristaltic movements in our gastrointestinal tract. The Myth of Cognitive Agency says that we are mentally autonomous beings. We can now see that this is an old, but self-complacent fairy tale. It is time to put it to rest.

Recent studies in the booming research field of Mind Wandering show that we spend roughly two thirds of our conscious lifetime zoning out—daydreaming, lost in fantasies, autobiographical planning, inner narratives or depressive rumination. Depending on the study, 30-50% of our waking life is occupied by spontaneously occurring stimulus- and task-unrelated thought. Mind Wandering probably has positive aspects too, because it is associated with creativity, careful future planning, and the encoding of long-term memories. But its overall performance costs (for example, in terms of reading comprehension, memory, sustained attention tasks, or working memory) are marked and have been well documented. So have its negative effects on general, subjective well-being. A wandering mind clearly is an unhappy mind, but it may only be part of a more comprehensive process beyond the conscious self's control or understanding. The sudden loss of inner autonomy—which all of us experience many hundreds of times every day—seems to be based on a cyclically recurring process in the brain. The ebb and flow of autonomy and meta-awareness might well be a kind of attentional see-sawing between our inner and outer worlds, caused by a constant competition between the brain networks underlying spontaneous subpersonal thinking and goal-oriented cognition.

Mind Wandering is not the only way in which our attention gets decoupled from the perception of the Here and Now. There are also periods of "mind blanking," and these episodes may often not be remembered and frequently escape detection by external observers. In addition, there is clearly complex but uncontrollable cognitive phenomenology during sleep. Adults spend approximately 1.5 to 2 hours per night in REM sleep, experiencing dreams in which they are mostly unable to control their conscious thought process. NREM sleep yields similar, dream-like reports during stage 1, whereas other stages of NREM sleep are characterized by mostly cognitive/symbolic mentation—which is typically confused, non-progressive, and perseverative. A conservative estimate would therefore be that for much more than half of our lifetime, we are not cognitive agents in the true sense of the word. This still excludes periods of illness, intoxication, or insomnia, in which people suffer from dysfunctional forms of cognitive control, such as thought suppression, worry, rumination, and counterfactual imagery, and are plagued by intrusive thoughts, feelings of regret, shame, and guilt. We do not yet know when and how children actually acquire a conscious self-model that permits controlled, rational thought. But another sad, yet empirically plausible assumption is that most of us gradually lose cognitive autonomy toward the ends of our lives.

Interestingly, the neural correlate of non-autonomous conscious thought overlaps to a considerable degree with ongoing activity in what neuroscientists call the "default mode network". I think that one global function of Mind Wandering may be "autobiographical self-model maintenance". Mind Wandering creates an adaptive form of self-deception, namely, an illusion of personal identity across time. It helps to maintain a fictional "self" that then lays the foundation for important achievements like reward prediction or delay discounting. As a philosopher, my conceptual point is that only if an organism simulates itself as being one and the same across time will it be able to represent reward events or the achievement of goals as a fulfillment of its own goals, as happening to the same entity. I like to call this the "Principle of Virtual Identity Formation": Many higher forms of intelligence and adaptive behavior, including risk management, moral cognition and cooperative social behavior, functionally presuppose a self-model that portrays the organism as a single entity that endures over time. Because we are really only cognitive systems, complex processes without any precise identity criteria, the formation of an (illusory) identity across time can only be achieved on a virtual level, for example through the creation of an automatic narrative. This could be the more fundamental and overarching computational goal of mind wandering, and one it may share with dreaming. If I am right, the default mode of the autobiographical self-modeling constructs a domain-general functional platform enabling long-term motivation and future planning.

Mental autonomy (and how it can be improved) will be one of the hottest topics for the future. There is even a deep link between mental and political autonomy—you cannot sustain one without the other. Because there are not only bodily actions, but also mental actions, autonomy has to do with freedom—and in one of the deepest and most fundamental senses of the word. But the ability to act autonomously implies not only reasons, arguments and rationality. Much more fundamentally it refers to the capacity to wilfully inhibit, suspend, or terminate our own actions—bodily, socially, or mentally. The breakdown of this ability is what we call Mind Wandering. It is not an inner action at all, but a form of unintentional behavior, an involuntary form of mental activity.

nick_enfield's picture
Professor and Chair, Department of Linguistics, University of Sydney; Author, How We Talk

Suppose that a scientist wants to study a striking animal behavior; say, the courtship display of the stickleback fish, or the cooperative agriculture of leafcutter ants. She will, of course, ultimately want to know the underlying mechanisms of these behaviors: How do they work? How did they evolve? What can we learn from them? But no student of animal behavior would dream of asking these questions without first systematically discovering the facts; beginning with extensive field observation in the wild, then moving to experiments and modeling in the lab. Why, then, have linguists emphatically denied any value in directly observing linguistic behavior?

The culprit is a bad idea: that a science of language should be concerned only with competence (the mental capacity for producing sentences), and never with performance (what happens when we actually talk). Here is its decidedly dualist reasoning: When the idealized language patterns tucked away in the mind get 'externalized' in communication, they are filtered and shaped by contingencies such as motor constraints, attention and memory limitations, errors of execution, local conventions, and more. As a result, it is argued, performance bears little useful relation to the pre-defined underlying object of study: competence. Students of linguistics have been taught not to waste their time with the worldly facts of performance.

This idea belies an unaccountably narrow view of what language is. It has diverted linguists' attention from many substantial questions, each with deep implications. Just a few examples: Without looking at performance, we wouldn't see the systematic and ingenious ways in which people handle the constant speech errors, hesitations, and misfires of conversation, along with the social delicacies of navigating these bouts of turbulence. Without looking at performance, we would not be witnessing the emerging breakthroughs from statistical research on newly-available large language corpora, with results suggesting that we can infer competence from experience with performance. Nor, finally, would linguistics have a causal account of how languages evolve historically: In the cycle of language transmission going from public (someone speaks) to private (someone's mental state is affected) and back to public (that person speaks), and so on indefinitely, both the private domain of competence and the public domain of performance are equally indispensable.

Influential traditions in the discipline of linguistics have embraced an idea that makes little sense, given the fact that language is, after all, just another striking animal behavior. Instead, the science of language should begin with fieldwork observation, for performance is ultimately our only evidence for competence. Perhaps the most unfortunate outcome of this idea is that generations of linguists who have eschewed the study of performance now have nothing to say about the essentially social function of language, nor about those aspects of social agency, cooperation, and social accountability that universally define our species' unique communicative capacity.
 

stuart_pimm's picture
Doris Duke Chair of Conservation Ecology, Duke University; Author, The World According to Pimm: a Scientist Audits the Earth

Science and technology have made such spectacular improvements to our lives that it seems churlish to whinge about them. I understand the benefits better than most. My fieldwork is where "the other half" live—the majority of the world's population too poor to have access to safe drinking water, antibiotics, and much, if any, electricity. I can go home, flip a switch, turn on the tap, and carry Cipro wherever I go. Just as natural selection picks past winners but brutally trims most mutations, so the science we love does not make every scientist in a white lab coat a hero. Many proposed scientific advances are narrow in their benefits, poorly thought out in the long term, and attention-getting or venally self-serving. Worst of all, optimism creates a moral hazard. When science promises it can fix everything, why worry if we break things?

For example, discussions about fracking, and the supplies of cheap fossil fuels it may give us, pit the local, near-term threats of a new technology against obvious benefits. For the USA, the energy is here and not in some politically sketchy country which requires vast military adventures to defend. Or to invade—for, surely, we would not have invaded Iraq if its principal export had been cantaloupes.

So bravo for fracking? Hardly! Suppose this, or any fossil fuel, were cheap and environmentally entirely free of local concerns. It would further accelerate global carbon emissions and their increasingly serious consequences. Perversely, the better—cleaner, cheaper, faster—is the technology, then the worse the eventual problem of too much atmospheric carbon dioxide. Surely, decades of cheap gas give us breathing space to develop and transition to sustainable energies? That's a gamble with disastrous consequences to our planet if we fail.

Won't new technologies soak up the carbon for us, allowing fossil fuels free rein? Only in the minds of those who seek huge research funds to pursue their ideas. The best and cheapest technology is what we ecologists call trees. Burning them contributes about 15% of global carbon emissions, so reducing those—as Brazil has done so successfully in recent years—is altogether a good idea. Restoring deforested areas is also prudent and economical. Trees have been around since the Devonian.

Of the many dire effects of a much hotter planet, the irreversible losses are to the planet's biodiversity. Species extinction rates already run a thousand times higher than normal. Climate disruption will inflate them further. Optimists have the answer!

The purest hubris is to raise the dead. "De-extinction" seeks to resurrect individual extinct species, usually charismatic ones. You know the plot. In the movie Jurassic Park, a tree extinct for millions of years delights the paleobotanist. Then a sauropod eats its leaves. We then learn how to re-create the animal. The movie is curiously silent on how to grow the tree, which at that size would be perhaps a hundred or more years old, and how to do so metaphorically overnight. To sustain a single sauropod, one would need thousands of trees, of many species, as well as their pollinators and perhaps their essential symbiotic fungi.

Millions of species risk extinction. De-extinction can only be an infinitesimal part of solving the crisis that now sees species of animals (some large but most tiny), plants, fungi, and microbes going extinct at a thousand times their natural rates.

Proponents of de-extinction claim that they only want to resurrect passenger pigeons and Pyrenean ibex, not dinosaurs. They make the assumption that the plants on which these animals depend still survive, so there is no need to resurrect them as well.  Indeed, botanic gardens worldwide have living collections of an impressively large fraction of the world's plants, some extinct in the wild, others soon to be so. Their absence from the wild is more easily fixed than the absence of animals, for which optimists tout de-extinction.

Perhaps so, but other practical problems abound: A resurrected Pyrenean ibex will need a safe home, not just its food plants. For those of us who attempt to reintroduce zoo-bred species that have gone extinct in the wild, one question tops the list: Where do we put them? Hunters ate this wild goat to extinction. Reintroduce a resurrected ibex to where it belongs and it will quickly become the most expensive cabrito ever eaten.

De-extinction is much worse than a waste: it sets up the expectation that biotechnology can repair the damage we're doing to the planet's biodiversity.

Fantasies of reclaiming extinct species are always seductive. "Real" scientists—those wearing white lab coats—use fancy machines with knobs and digital readouts to save the planet from humanity's excesses. There is none of the messy interactions with people, politics, and economics that characterise my world. There is nothing involving the real-world realities of habitat destruction, of the inherent conflict between growing human populations and wildlife survival. Why worry about endangered species? We can simply keep their DNA and put them back in the wild later.

"When I testify before Congress on endangered species, I'm always asked, "Can't we safely reduce the spotted owl to small numbers, keeping some in captivity as insurance?" The meaning is clear: "Let's log out almost all of western North America's old-growth forests because, if we can save species with high-tech solutions, the forest doesn't matter." Let's tolerate a high risk of extinction.

Conservation is about the ecosystems that species define and on which they depend. It's about finding alternative, sustainable futures for peoples, for forests, and for wetlands. Molecular gimmickry does not address these core problems.

We should not limit science. I celebrate its successes, too. The idea we should retire is that new, technically clever solutions suffice to fix our world. Common sense is necessary.

hans_ulrich_obrist's picture
Curator, Serpentine Gallery, London; Editor: A Brief History of Curating; Formulas for Now; Co-author (with Rem Koolhaas), Project Japan: Metabolism Talks

Whilst studying political economy during the late 1980s in St Gallen, I was deeply inspired by the pioneer of ecology and economics Hans-Christoph Binswanger (born 1929, Zürich). The director of the University of St Gallen's Institute for Economics and Ecology from 1962 to 1994, he is now in his eighties and is being rediscovered by younger artists and activists (e.g. Tino Sehgal) who often quote him as an influence.

The wisdom of Binswanger's work is that he recognised early on that endless growth is unsustainable, both in human and planetary terms. The current focus in mainstream economics is, he argues, too much on labour and productivity and too little on natural and intellectual resources. Dependency on endless growth, as the crisis that always emerges at the end of each cyclical bull market should teach us, is unrealistic.

Binswanger's goal was to investigate the similarities and differences between aesthetic and economic values through an examination of the historical relationship between economics and alchemy, which he made as interesting as it (at first) sounds outlandish. In his 1985 book Money and Magic, he showed how the brash concept of unlimited growth was inherited from the medieval discourse of alchemy, the search for a process that could turn lead into gold.

A focus of Binswanger's research has been on Goethe, especially his role in shaping social economics while finance minister at the court of Weimar. In Goethe's Faust, the eponymous character thinks in terms of infinite progress, while Mephisto recognizes the destructive potential of such an idea. At the beginning of part two of the play, Mephistopheles urges the ruler of an empire that is facing financial ruin because of profligate government spending to issue promissory notes, thus solving its debt problems.

Binswanger had been fascinated by the Faust legend since his childhood, and during his studies, he discovered that Goethe's introduction of paper money into his play was inspired by the story of the Scottish economist John Law, who in 1716 was the first man to establish a French bank issuing paper money. Strikingly, after Law's innovation, the Duke of Orleans got rid of all his alchemists because he realized that the immediate availability of paper money was far more powerful than any attempt to turn lead into gold.

Binswanger also connects money and art in a novel way. Art, he points out, is based on imagination and is part of the economy, while a bank's process of creating money in the form of promissory notes or coins is connected to imagination, since it is based on a prospective idea of bringing into being something that has yet to exist. At the same time, a company imagines producing a certain good and needs money to realize this, so it takes out a loan from a bank. If the product is sold, the 'imaginary' money that was created in the beginning has a counter-value in real products.

In classical economic theory, this process can be continued endlessly. Binswanger recognizes in Money and Magic that this endless growth exerts a quasi-magical fascination. He produces a way of thinking about the problems of rampant capitalist growth, encouraging us to question the mainstream theory of economics, and to recognize how it differs from the real economy. But instead of rejecting the market wholesale, he suggests ways in which to moderate its demands. Thus the market does not have to disappear or be replaced, but can be understood as something to be manipulated for human purposes, rather than obeyed.

Another way of interpreting Binswanger's ideas is as follows: for most of human history, a fundamental problem has been the scarcity of material goods and resources, and so we have become ever more efficient in our methods of production and created rituals to enshrine the importance of objects in our culture. Less than a century ago, human beings made a world-changing transition through their rapacious industry. We now inhabit a world in which the overproduction of goods, rather than their scarcity, is one of our most fundamental problems. Yet our economy functions by inciting us to produce more and more with each passing year. In turn, we require cultural forms to enable us to sort through the glut, and our rituals are once again being directed towards the immaterial, towards quality and not quantity. This requires a shift in our values, from producing objects to selecting amongst those that already exist.

ed_regis's picture
Science writer; Author, Monsters

In 1993, two Nobel prizewinning physicists, Steven Weinberg and Leon Lederman, each published books suggesting that a 54-mile-long particle accelerator, the Superconducting Super Collider (SSC), should be constructed near Waxahachie, Texas, in order to discover the elusive Higgs scalar boson, which Lederman had semi-facetiously dubbed "the God particle." (The books were Dreams of a Final Theory and The God Particle, respectively.) In a tour de force of bad timing, both books came out just as the United States Congress was in the process of terminating funding for the project once and for all.

Which was just as well: As it happened, the Higgs boson was discovered in 2012 by scientists working at a much smaller accelerator, the 17-mile-long Large Hadron Collider (LHC) at CERN, near Geneva.

As often happens in science, a new discovery simultaneously raises several new questions, which of course was also the case with the Higgs. For instance, Why did the Higgs particle have precisely the mass it had? Were there yet even more basic particles that lay beneath, and explained, certain attributes of the Higgs? Was there in fact more than one Higgs boson? In fundamental particle theory, unfortunately, the answers to such questions have become increasingly, and even prohibitively, expensive. Before it was cancelled, cost estimates for the SSC rose from an initial $3.9 billion to a final $11-billion-plus in 1991.

But how much is it really worth to know the answers to further questions regarding the Higgs particle? How much, if anything, would you pay to know those answers, assuming, optimistically, that you could even understand the questions, such as: How does the Higgs boson explain (if at all) the phenomenon of electroweak symmetry breaking? Science has long since reached the point where some types of new knowledge can be discovered only by building structures so absurdly cosmic, and even comic, in size as to have equally cosmic price tags. In light of this, it makes sense to ask whether the knowledge supposedly to be provided by these dollar-bill-destroying behemoths is in fact worth acquiring.

Apparently unfazed by Congressional rejection of the 54-mile-long, super-expensive Super Collider, a 2001 study group at Fermilab (whose accelerator was a relatively puny 4 miles around) seriously entertained the prospect of building a Very Large Hadron Collider (VLHC), a stupendous monster that would be fully 233 kilometers (145 miles) in circumference. This leviathan object would enclose an area that was larger than the state of Rhode Island by more than 400 square miles.
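A rough sanity check of that comparison, assuming for simplicity that the 233-kilometer ring is laid out as a circle (an idealization, not a detail taken from the Fermilab study):

$$
A \;=\; \frac{C^{2}}{4\pi} \;=\; \frac{(233\ \text{km})^{2}}{4\pi} \;\approx\; 4{,}300\ \text{km}^{2} \;\approx\; 1{,}670\ \text{mi}^{2}.
$$

With Rhode Island's total area at roughly 1,210 square miles, the enclosed region does indeed come out more than 400 square miles larger.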

Then, in the summer of 2013, a year after the Higgs had been discovered at CERN, a group of particle physicists met in Minneapolis to propose a new, 62-mile-long collider that, they said, would allow "the study of indirect effects of new physics on the W and Z bosons, the top quark, and other systems." These proposals just keep coming, like spam, junk mail, or crabgrass. But sooner or later, enough has got to be enough, even in science, which, after all, is not sacrosanct. It's just silly to keep paying—forever, eternally, and in perpetuity—more and more money for less and less knowledge about hypothetical specks of matter that go so far beyond the infinitesimal as to border on sheer nothingness.

Fundamental particle physicists, evidently, have never heard of "limits to growth," or limits of any other kind. But they should certainly acquaint themselves with that concept, for the fundamental does not automatically trump the practical. Every dollar spent on a shiny new mega-collider is a dollar that can't be spent on other things, such as hospitals, vaccine development, epidemic prevention, disaster relief, and so on. Particle accelerators the size of small nations are arguably well over the financial horizon of what's reasonable to sacrifice for a given incremental advance in arcane, theoretical, almost cabalistic knowledge.

In a postmortem on the Superconducting Super Collider ("Good-bye to the SSC"), Daniel Kevles, a Caltech science historian, said that basic research in physics should be pursued, "But not at any price." I agree. Some scientific knowledge is simply not worth its cost. 

giulio_boccaletti's picture
Chief Strategy Officer of The Nature Conservancy; Author, Water: A Biography

When the ancient capital of the Nabataeans, Petra, was "re-discovered" by Johann Burckhardt in the early 1800s, it might have seemed unthinkable that anybody could have lived in such an arid place. Yet, at its peak in the first century BCE, Petra was the center of a powerful trading empire and home to more than 30,000 people.

Petra's very existence was a testament to how water management could support the development of civilization in the most extreme circumstances. This part of the world—today in the Hashemite Kingdom of Jordan—survives on less than 70 mm of rain a year, much of it concentrated in a few events in the rainy season. The climatology two thousand years ago was similar, yet Petra thrived thanks to a system of rock-cut underground cisterns, terraced slopes, dams, and aqueducts, which stored and delivered water from springs and run-off flows. Petra could grow food, provide drinking water, and support a bustling city because of that infrastructure.

This story is not dissimilar to many other places across the world today, from the Western United States to Northern China, from South Africa to the Punjab—all have thrived and grown thanks to human ingenuity and water engineering, allowing people to overcome the adversity of a difficult—at times, impossible—hydrology.

Whether the Nabataean engineers knew it or not, to deliver reliable water infrastructure they relied—like all water engineers since—on two commonly assumed properties of hydrological events: stationarity and, rather more esoterically, ergodicity. Both concepts have well-defined mathematical meanings. Simply put, though, stationarity implies that the probability distribution of a random event is independent of time, while a stationary process is ergodic if, given a sufficiently long time, it will realize most of the universe of options available to it.
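For readers who want the formal versions, one standard textbook way of stating the two properties for a hydrological time series $X_t$ (a general formulation, not anything specific to the Nabataean case) is:

$$
F_{X_{t_1},\dots,X_{t_n}}(x_1,\dots,x_n) \;=\; F_{X_{t_1+\tau},\dots,X_{t_n+\tau}}(x_1,\dots,x_n)\quad\text{for all } n,\ t_i,\ \tau \qquad \text{(stationarity)}
$$

$$
\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} X_t\,dt \;=\; \mathbb{E}[X_t]\quad\text{almost surely} \qquad \text{(ergodicity in the mean)}
$$

Together they license the crucial move: a single, sufficiently long record of one river stands in for the whole ensemble of histories that river might have had.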

Practically, this allows one to assume that if a process has been observed for long enough, then in all likelihood one will have witnessed enough of its behavior to represent the underlying distribution function at any given point in time. In the case of hydrology, this is what allows us to define events by using time statistics, like the "one in a hundred years flood".

The assumption that hydrology can be represented by such stationary processes makes it possible to design infrastructure whose behavior can be expected to be known well into the future. After all, water infrastructure like dams, levees and so on lasts for decades, even centuries, so it is important that it is dimensioned to withstand most predictable events. This is what has allowed Nabataean, Chinese, American, South African and Indian water engineers to design water systems they could legitimately rely on. And they have been wildly successful, so far.

Stationarity provides a convenient simplifying gambit: that plans for future water management can be based on an appropriately long historical time series of hydrology past, because the past is simply a representative sequence of realizations of a (roughly) fixed probability distribution.
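Here is a minimal sketch of that gambit in code, using purely synthetic annual-maximum flows rather than any real record: under stationarity, the 100-year flood is just the flow exceeded with probability 1/100 in any given year, read off the pooled history as if it all came from one fixed distribution. Re-estimating the same number on each half of the record hints at what goes wrong when the distribution drifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual-maximum river flows (m^3/s) for a 100-year record.
# The second half is drawn from a shifted, wider distribution to mimic a
# non-stationary climate; none of these numbers describe a real river.
early = rng.gumbel(loc=500, scale=80, size=50)
late = rng.gumbel(loc=560, scale=110, size=50)
record = np.concatenate([early, late])

def design_flood(annual_maxima, return_period=100):
    """Return level under the stationarity assumption: the flow exceeded with
    probability 1/return_period in any year, estimated here as a simple
    empirical quantile of the observed annual maxima."""
    p_exceed = 1.0 / return_period
    return float(np.quantile(annual_maxima, 1.0 - p_exceed))

# The classic gambit: treat the whole record as draws from one fixed distribution.
print("100-year flood, full record:   ", round(design_flood(record), 1))

# A crude check on stationarity: estimate the same quantity on each half.
print("100-year flood, first 50 years:", round(design_flood(record[:50]), 1))
print("100-year flood, last 50 years: ", round(design_flood(record[50:]), 1))
```

With only fifty points per half the empirical quantile is a blunt instrument (practitioners would fit an extreme-value distribution instead), but the point survives: the single number the infrastructure was sized for is not a single number at all once the climate moves.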

Simple. But of course in the real world—where there is no counterfactual and where a single experiment is running all the time—such assumptions are only true until proven wrong. We are now realizing that those assumptions are, in fact, wrong. Not just theoretically wrong, but practically flawed.

In the last few years, a growing number of observations have been substantiating the idea that probability distributions we assumed were fixed are not. They are changing, and changing fast: many of what used to be one in a hundred year events are now more likely one in twenty year events; droughts that used to be considered extreme and very unlikely are now much more common. Accelerating changes in climate, coupled with a much more sensitive global economy in which many more people and much more value are at stake, are revealing that we do not actually live in a world as stationary as we thought. And infrastructure that had been designed for that world, intended to last for decades into the future, is proving increasingly inadequate.

The implications are rather monumental for our relationship with the planet and its water resources. A broadly stationary environment can be "engineered away". Someone will take care of it, as long as we can define what we need and have enough resources to pay for it. In a non-stationary world, it is different. The problem of water management is no longer decoupled from the dynamics of climate, as the climatology is no longer constant on practical timescales. We face unforeseen variability, the past is no longer necessarily a guide to the future, and we cannot simply rely on "someone taking care of it". "It" is no longer just an engineering problem. Climatology, hydrology, ecology, and engineering all become relevant instruments in the management of a dynamic problem whose nature requires adaptability and resilience, one to which our own economy should be prepared to adapt, because no long-term piece of infrastructure can be expected to manage what it was not designed for.

By the first century CE, the Nabataeans were incorporated into the Roman Empire, and over the course of the subsequent centuries their civilization slowly withered away, the victim of changing trade routes and shifting geopolitics (and proof that while water can support the development of civilizations, it is far from sufficient to see them thrive!) Today we have hundreds of cities around the world that, just like Petra, rely on engineered water infrastructure to support their growth. From Los Angeles to Beijing, from Phoenix to Istanbul, great cities of the world depend on a reliable source of water in the face of unreliable hydrology.

If stationarity is indeed a thing of the past, water management is no longer a "white coats" business, something that can be taken care of in the background. We must consider choices, have contingency plans for events that we might not have experienced, and accept that we might get it wrong. In other words, we must go from managing water to managing risk.

nicholas_a_christakis's picture
Sterling Professor of Social and Natural Science, Yale University; Co-author, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

Ever since the landmark invention, 100 years ago, of statistical techniques that allow us to properly compare the difference between the averages of two groups, we have deluded ourselves into thinking that it is such differences that are the salient—and often the only—important differences between groups. We have spent a century observing and interpreting such differences. We've become almost obsessed, and we should stop.

Yes, we can reliably say that men are taller than women, on average; that Norwegians are richer than Swedes; that first-born children are smarter than second-born children. And we can do experiments to detect tiny differences in means—between groups exposed and unexposed to a virus, or between groups with and without a particular allele of a gene. But this is too simple and too narrow a view of the natural world.

Our focus on averages should be retired. Or, if not retired, we should give averages an extended vacation. During this vacation, we should catch up on another sort of difference between groups that has gotten short shrift: we should focus on comparing the difference in variance (which captures the spread or range of measured values) between groups.

Part of the reason we've focused so much on the average is that the statistical tools for computing and comparing averages are so much easier and better developed. It is much harder to assess whether the variance of one group differs from the variance of another. But this calls to mind the well-known anecdote of the drunk searching for his keys on his knees under a lamp-post. His friend comes out of the bar and asks, "What are you doing on the ground?" "Searching for my keys," he replies. "Where did you lose them?" the friend asks. "Over there," the drunk says, pointing some distance away, "But the light is better here."
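To make the asymmetry concrete, here is a hypothetical two-group comparison in Python (illustrative only, with made-up income figures): the groups share a mean but differ wildly in spread, and while the t-test for means is the tool everyone reaches for, the less familiar Levene test is what actually detects the interesting difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two made-up "societies" with (roughly) the same average income but very
# different inequality, i.e. very different variances.
group_a = rng.normal(loc=50_000, scale=5_000, size=1_000)   # low spread
group_b = rng.normal(loc=50_000, scale=25_000, size=1_000)  # high spread

# Comparing means: the familiar, well-developed tool (Welch's t-test).
t_stat, p_means = stats.ttest_ind(group_a, group_b, equal_var=False)

# Comparing variances: Levene's test asks whether the spreads differ.
w_stat, p_vars = stats.levene(group_a, group_b)

print(f"difference in means:     p = {p_means:.3f}")
print(f"difference in variances: p = {p_vars:.3g}")
# Typically the first p-value shows nothing noteworthy, while the second is
# vanishingly small: the difference that matters lives in the variance.
```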

Drunk with statistical power, we've persuaded ourselves that the mean of a distribution is its most important property. But often it is not.

For example, we have focused on the differences in average wealth between groups—whether the US is richer than other countries, and what might have caused this, or whether bankers make more money than consultants and how this affects the professional choices of graduating college students. But the distribution of wealth in the groups may be equally important in explaining collective and individual outcomes and choices. Even if the US and Sweden have the same average income (roughly speaking), the variance in income is much higher in the US (income inequality is greater), and this fact, rather than any difference in means between the groups, may help explain what happens to people in these societies. For example, it may be the case that it is better for the health of a group, and (on average!) for the health of the individuals within it, for the group to have a more equal distribution of income—even if the average income is somewhat lower. We might wish for more equality at the expense of wealth.

Here is a hypothetical example leading to the opposite practical conclusion about inequality: when forming a crew of sailors for a sailboat, what would be best? To have all ten of the sailors have the same level of myopia, with mean vision of 20/200, or to have a group of sailors where nine of them had even worse vision, but one of them had perfect vision? The average vision could be the same in both groups, but, for the purposes of sailing the boat effectively, and for the survival of all aboard, it might be better to have more rather than less inequality. We might wish for more inequality at the expense of vision.

Or consider a medical example of how variance is important: there may be two conditions with equal average prognoses, say advanced AIDS and advanced liver cirrhosis, but doctors may offer "Do Not Resuscitate" orders to AIDS patients at much higher rates. It's tempting to conclude that doctors are more eager to avoid resuscitation in AIDS patients, perhaps for discriminatory reasons. But the real reason may be that the variance in survival in the AIDS group is much higher, and there may be many more patients in that group who will die imminently. It may be to this fact that the doctors are oriented rather than to the average survival of the two groups; the doctors may reason that they can wait to offer DNR orders to the cirrhosis patients.

A familiarity with variance would also allow us to make sense of the famously controversial hypothesis regarding why there are more male math professors at major universities: the mean overall math aptitude among men and women might be the same, but the variance in men might be higher. If so, this would mean that there are more men at the very bottom of the distribution (and, indeed, boys are roughly three times more likely to be mentally disabled than girls), but also that there are more men at the upper end of the distribution.

When we focus mainly on the mean, we miss the chance to observe interesting and important things about the world. And a restricted view has adverse practical as well as scientific implications. Do we want a richer, less equal society? Do we want educational programs to increase the equality of test scores, or the average? Will a cancer drug that makes some patients live longer and kills others sooner still be preferred by patients even if it has no effect on average survival? To really understand the relevant tradeoffs, we must acquire not only the tools, but also the vision, to focus on variance. 

laurie_r_santos's picture
Professor of Psychology, Director, Comparative Cognition Laboratory and the Canine Cognition Center, Yale University
tamar_gendler's picture
Professor of Philosophy and Cognitive Science, and Chair, Department of Philosophy; Deputy Provost for Humanities and Initiatives, Yale University

Children of the 1980s (like the younger of these two co-authors) may fondly remember a TV cartoon called G. I. Joe, whose closing conceit—a cheesy public service announcement delivering a moral to young viewers—remains a much-parodied YouTube sensation almost thirty years later. Following each of these moralizing pronouncements came the show's famous epithet: "Now you know. And knowing is half the battle."

While there may be some domains where knowing is half the battle, there are many more where it is not. Recent work in cognitive science has demonstrated that knowing is a shockingly tiny portion of the battle for most real world decisions. You may know that $19.99 is pretty much the same price as $20.00, but the first still feels like a significantly better deal. You may know that a prisoner's guilt is independent of whether you are hungry or not, but she'll still seem like a better candidate for parole when you've recently had a snack. You may know that a job applicant of African descent is as likely to be qualified as one of European descent, but the negative aspects of the former's resume will still stand out. And you may know that a tasty piece of fudge shaped like dogshit will taste delicious, but you'll still be pretty hesitant to eat it.

The lesson of much contemporary research in judgment and decision-making is that knowledge— at least in the form of our consciously accessible representation of a situation—is rarely the central factor controlling our behavior. The real power of online behavioral control comes not from knowledge, but from things like situation selection, habit formation, and emotion regulation. This is a lesson that therapy has taken to heart, but one that "pure science" continues to neglect.

And so the idea that cognitive science needs to retire is what we'll call the G. I. Joe Fallacy: the idea that knowing is half the battle. It needs to be retired not just from our theories of how the mind works, but also from our practices of trying to shape minds to work better.

You might think that this is old news. After all, thinkers for the last 2500 years have been pointing out that much of human action isn't under rational control. Don't we know by now that the G. I. Joe Fallacy is just that—a fallacy?

Well, yeah we know, but . . .

The irony is that knowing that the G.I. Joe Fallacy is a fallacy is—as the fallacy would predict—less than half the battle. As is knowing that people tend to experience $19.99 as a significantly lower price than $20.00. Even if you know about this left-digit anchoring effect, the first item will still feel like a significantly better deal. Even if you know about ego depletion effects, the prisoner you encounter after lunch will still seem like a better candidate for parole. Even if you know that implicit bias is likely to affect your assessment of a resume's quality, you will still experience the candidate with the African-American name as being less qualified than the candidate with the European-American name. And even if you know about Paul Rozin's disgust work, you will still hesitate to drink Dom Perignon out of a sterile toilet bowl.

Knowing is not half the battle for most cognitive biases, including the G. I. Joe Fallacy. Simply recognizing that the G. I. Joe Fallacy exists is not sufficient for avoiding its grasp.

So now you know. And that's less than half the battle. 

w_daniel_hillis's picture
Physicist, Computer Scientist, Co-Founder, Applied Invention; Author, The Pattern on the Stone

We humans are fundamentally storytellers. We like to organize events into chains of causes and effects that explain the consequences of our actions. We like to assign credit and blame. This makes sense from an evolutionary standpoint. The ultimate job of our nervous system is to make actionable decisions, and predicting the consequences of those decisions is important to our survival.

Science is a rich source of powerful explanatory stories. For example, Newton explained how a force causes a mass to accelerate. This gives us a story of how an apple drops from a tree or a planet circles around the Sun. It allows us to decide how hard the rocket engine needs to push to get it to the Moon. Models of causation allow us to design complex machines like factories and computers that have fabulously long chains of causes and effects. They convert inputs into the outputs that we want.

It is tempting to believe that our stories of causes and effects are how the world works. Actually, they are just a framework that we use to manipulate the world and to construct explanations for the convenience of our own understanding. For example, Newton's equation, F = ma, does not really say that force causes acceleration any more than it says that mass causes force. We humans tend to think of force as contingent, because we often have the choice as to whether to apply it or not. On the other hand, we tend to think of mass as not being under our control. Thus, we personify nature, imagining it almost as if natural forces are deciding to push on masses. It is much harder for us to imagine accelerations deciding to cause mass, so we tell the story a certain way. We credit gravitational force for keeping the planets orbiting around the Sun, and blame it for pulling the apple down from the tree.

This convenient personification of nature helps us use our mental storytelling machinery to explain the natural world. The cause-and-effect paradigm works particularly well when science is used for engineering, to arrange the world for our convenience. In this case, we can often set things up so that the illusion of cause-and-effect is almost a reality. The computer is a perfect example. The key to what makes a computer work is that the inputs affect the outputs, but not vice versa. The components used to construct the computer are constructed to create that same one-way relationship. These components, such as logic gates, are specifically designed to convert contingent inputs into predictable outputs. In other words, the logic gates of the computer are constructed to be atomic building blocks of cause-and-effect.
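As a toy illustration of that one-way character (ordinary Python functions standing in for hardware gates, purely for the sake of the argument): the outputs are completely determined by the inputs, and nothing about the outputs reaches back to set the inputs.

```python
# Logic gates as pure functions: contingent inputs in, predictable outputs out.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    return a or b

def not_gate(a: bool) -> bool:
    return not a

def half_adder(a, b):
    """Compose gates into a half adder; information flows one way only."""
    total = and_gate(or_gate(a, b), not_gate(and_gate(a, b)))  # XOR built from AND/OR/NOT
    carry = and_gate(a, b)
    return total, carry

print(half_adder(True, False))  # (True, False): 1 + 0 = 1, no carry
print(half_adder(True, True))   # (False, True): 1 + 1 = 0, carry 1
```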

The notion of cause-and-effect breaks down when the parts that we would like to think of as outputs affect the parts that we would prefer to think of as inputs. The paradoxes of quantum mechanics are a perfect example of this, where our mere observation of a particle can "cause" a distant particle to be in a different state. Of course there is no real paradox here; there is just a problem with trying to apply our storytelling framework to a situation where it does not match.

Unfortunately, the cause-and-effect paradigm does not just fail at the quantum scale. It also falls apart when we try to use causation to explain complex dynamical systems like the biochemical pathways of a living organism, the transactions of an economy, or the operation of the human mind. These systems all have patterns of information flow that defy our tools of storytelling. A gene does not "cause" a trait like height, or a disease like cancer. The stock market did not go up "because" the bond market went down. These are just our feeble attempts to force a storytelling framework onto systems that do not work like stories. For such complex systems, science will need more powerful explanatory tools, and we will learn to accept the limits of our old methods of storytelling. We will come to appreciate that causes and effects do not exist in nature, that they are just convenient creations of our own minds.

 

michael_mccullough's picture
Professor of Psychology, Director, Evolution and Human Behavior Laboratory, University of Miami; Author, The Kindness of Strangers

Humans are biologically exceptional. We're exceptionally long-lived and exceptionally cooperative with non-kin. We have exceptionally small guts and exceptionally large brains. We have an exceptional communication system and an exceptional ability to learn from other members of our species. Scientists love to study biologically exceptional human traits such as these, and that's a perfectly reasonable research strategy. Human evolutionary exceptionalism, however—the tendency to assume that biologically exceptional human traits come into the world through exceptional processes of biological evolution—is a bad habit we need to break. Human evolutionary exceptionalism has sown misunderstanding in every area it has touched. Here are three examples.

Human niche construction. Humans have exerted biologically exceptional effects on their environments. In our evolutionary past, these so-called niche construction effects occasionally created the necessary and sufficient condition for natural selection: a generationally persistent covariance between genes and fitness. For example, earlier hominins' experimentation with cooking (which required the generationally persistent availability of culturally transmitted knowledge on how to control fire) made their food more digestible. Consequently, genetic mutations that shrank the human gut, teeth, and jaw muscles were naturally selected because they enabled resources to be re-assigned to the construction of new adaptive faculties (including cognitive ones).

For years, Niche Construction theorists have argued that standard evolutionary theory cannot account for such interactions between humans' culturally mediated environmental effects and natural selection. In response, they have promoted niche construction as a "neglected evolutionary process" that collaborates with natural selection to direct evolution. However, they obtain persuasive force for this argument by re-defining what evolution is. Humans' niche construction activities have undoubtedly exposed new covariances between genetic variation and fitness during human evolution, but those activities have neither created that variation nor filtered it, so they don't constitute an evolutionary process. Culturally mediated human niche construction is real, important, sometimes evolutionarily significant, and certainly worthy of study, but it doesn't compel a revision to our understanding of how evolution works.

Major Evolutionary Transitions. Over the past three billion years, natural selection has yielded several pivotal innovations in how genetic information gets assembled, packaged, and transmitted across generations. These so-called major evolutionary transitions have included the transition from RNA to DNA; the union of genes into chromosomes; the evolution of eukaryotic cells; the advent of sexual reproduction; the evolution of multicellular organisms; and the appearance of eusociality (notably, among ants, bees, and wasps) in which only a few individuals reproduce and the others work as servants, soldiers, or babysitters. The major evolutionary transitions concept, when properly applied, is useful and clarifying.

It is therefore regrettable that the concept's originators made category mistakes by characterizing two distinctly human traits as outcomes of major evolutionary transitions. Their first category mistake was to liken human societies (which are exceptional among the primates for their nested levels of organization, their mating systems, and a hundred other features) to those of the eusocial insects because the individuals in both kinds of societies "can survive and transmit genes . . . only as part of a social group." This is an unfortunate case of science by analogy: The fact that humans are adapted for living in social groups does not imply that they, like ants, bees, wasps, and termites, need groups to reproduce. If the chemistry, timing, and lighting are just right, any human male and any human female, plucked from their social groups at random, can manage to convey genetic information to the next generation just fine.

Their second category mistake was to hold up human language as the outcome of a major evolutionary transition. To be sure, human language, as the only communication system with unlimited expressive potential that natural selection ever devised, is biologically exceptional. However, the information that language conveys is contained in our minds, not in our chromosomes. We don't yet know precisely where or when human language evolved, but we can be reasonably confident about how it evolved: via the gene-by-gene design process called natural selection. No major evolutionary transition was involved.

Human Cooperation. Humans are exceptionally generous, particularly toward non-relatives. We cooperate with strangers when we'd be better off in the short term by competing. We donate anonymously to charities. We accomplish group projects even though all participants surely recognize that they would be better off, at least in the short run, by loafing and letting the others do the work. We share with needy strangers even when we know they will never repay us. We praise generosity, and denounce stinginess, even when the behaviors in question have not affected us directly.

In the past, all of these cooperation-related phenomena spent time on evolutionary scientists' lists of "unsolved puzzles about human cooperation." The good news is that scientists have already succeeded in nudging many of them toward the "solved puzzles" list. The bad news is that some scholars have gone in the opposite direction: They have moved these problems onto the list of "mysteries"—problems so perplexing that we should abandon hope of ever solving them within the standard inclusive-fitness-maximizing view of natural selection. Their mystification has led them, at turns, to invoke evolutionary explanations that are inappropriate for species in which all individuals reproduce, to propose new evolutionary processes that are not evolutionary processes at all (but rather, proximate behavioral patterns that require evolutionary explanations), and to presume without justification that certain quirks of modern social life were selection pressures of our deep evolutionary past. Explaining the exceptional features of human cooperation is challenging enough without muddling the problem space even further with conceptual false starts, questionable historical premises, and labyrinthine evolutionary scenarios.

Human evolutionary exceptionalism is counterproductive for science. It leads to internecine squabbles. Correcting the misconceptions that follow in its wake distracts specialists from more productive work. Finally, it confuses non-specialists who lack the time to sort through these controversies for themselves. It's good to be curious—and, sometimes, even querulous—about how our biologically exceptional traits evolved, but we should resist the idea that evolution made up new rules just for us. 

gary_klein's picture
Senior Scientist, MacroCognition LLC; Author, Seeing What Others Don't: The Remarkable Ways We Gain Insights

Any enterprise has its limits and boundary conditions, and science is no exception. When the reach of science moves beyond these boundary conditions, when it demands respect and obedience that it hasn't earned, the results can be counter-productive. One example is Evidence-Based Medicine (EBM), which is the scientific idea that I think we should retire.

The concept behind EBM is certainly admirable: a set of best practices validated by rigorous experiments. EBM seeks to provide healthcare practitioners with treatments they can trust, treatments that have been evaluated by randomized controlled trials, preferably blinded. EBM seeks to transform medicine into a scientific discipline rather than an art form. What's not to like? We don't want to return to the days of quack fads and unverified anecdotes.

But we should only trust EBM if the science behind best practices is infallible and comprehensive, and that's certainly not the case. Medical science is not infallible. Practitioners shouldn't believe a published study just because it meets the criteria of randomized controlled trial design. Too many of these studies cannot be replicated. Sometimes the researcher got lucky and the experiments that failed to replicate the finding never got published or even submitted to a journal (the so-called publication bias). In rare cases the researcher has faked the results. Even when the results can be replicated they shouldn't automatically be believed—conditions may have been set up in a way that misses the phenomenon of interest so a negative finding doesn't necessarily rule out an effect.

And medical science is not comprehensive. Best practices often take the form of simple rules to follow, but practitioners work in complex situations. EBM relies on controlled studies that vary one thing at a time, rarely more than two or three. Many patients suffer from multiple medical problems, such as Type 2 diabetes compounded with asthma. The protocol that works for one problem may be inappropriate for the others. EBM formulates best practices for general populations, but practitioners treat individuals and need to take individual differences into account. A treatment that is generally ineffective might still be useful for a sub-set of patients. Further, physicians aren't finished once they select a treatment; they often have to adapt it. They need expertise to judge whether a patient is recovering at an appropriate rate. Physicians have to monitor the effectiveness of a treatment plan and then modify or replace it if it isn't working well. A patient's condition may naturally fluctuate, and physicians have to judge the treatment effects on top of this noisy baseline.

Sure, scientific investigations have done us all a great service by weeding out ineffective remedies. For example, a recent placebo-controlled study found that arthroscopic surgery provided no greater benefit than sham surgery for patients with osteoarthritic knees. But we also are grateful for all the surgical advances of the past few decades (e.g., hip and knee replacements, cataract treatments) that were achieved without randomized controlled trials and placebo conditions. Controlled experiments are therefore not necessary for progress in new types of treatments and they are not sufficient for implementing treatments with individual patients who each have unique profiles.

Worse, reliance on EBM can impede scientific progress. If hospitals and insurance companies mandate EBM, backed up by the threat of lawsuits if adverse outcomes are accompanied by any departure from best practices, physicians will become reluctant to try alternative treatment strategies that have not yet been evaluated using randomized controlled trials. Scientific advancement can become stifled if front-line physicians, who blend medical expertise with respect for research, are prevented from exploration and are discouraged from making discoveries. 

jonathan_gottschall's picture
Distinguished Research Fellow, English Department, Washington & Jefferson College; Author, The Storytelling Animal

Fifteen thousand years ago in France, a sculptor swam and slithered almost a kilometer down into a mountain cave. Using clay, the artist shaped a big bull rearing to mount a cow, and then left his creation in the bowels of the earth. The two bison of the Tuc D'Audoubert caves sat undisturbed for many thousands of years until they were rediscovered by spelunking boys in 1912. The discovery of the clay bison was one of many shocking 20th century discoveries of sophisticated cave art stretching back tens of thousands of years. The discoveries overturned our sense of what our caveman ancestors were like. They were not furry, grunting troglodytes. They had artistic souls. They showed us that humans are—by nature, not just by culture—art-making, art-consuming, art-addicted apes.

But why? Why did the sculptor burrow into the earth, make art, and leave it there in the dark? And why does art exist in the first place? Scholars have spun a lot of stories in answer to such questions, but the truth is that we really don't know. And here's one reason why: science is lying down on the job.

A long time ago someone proclaimed that art could not be studied scientifically, and for some reason almost everyone believed it. The humanities and sciences constituted, as Stephen Jay Gould might have put it, separate, non-overlapping magisteria: the tools of the one are radically unsuited to the other.

Science has mostly bought into this. How else can we explain its neglect of the arts? People live in art. We read stories, and watch them on TV, and listen to them in song. We make paintings and gaze at them on walls. We beautify our homes like bowerbirds adorning nests. We demand beauty in the products we buy, which explains the gleam of our automobiles and the sleek modernist aesthetic of our iPhones. We make art out of our own bodies: sculpting them through diet and exercise; festooning them with jewelry and colorful garments; using our skins as living canvas for the display of tattoos. And so it is the world over. As the late Denis Dutton argued in The Art Instinct, underneath the cultural variations, "all human beings have essentially the same art."

Our curious love affair with art sets our species apart as much as our sapience or our language or our use of tools. And yet we understand so little about art. We don't know why art exists in the first place. We don't know why we crave beauty. We don't know how art produces its effects in our brains—why one arrangement of sound or color pleases while another cloys. We don't know very much about the precursors of art in other species, and we don't know when humans became creatures of art. (According to one influential theory, art arrived fifty thousand years ago with a kind of creative big bang. If that's true, how did that happen?) We don't even have a good definition, in truth, for what art is. In short, nothing so central to human life is so incompletely understood.

Recent years have seen more use of scientific tools and methods in humanities subjects. Neuroscientists can show us what's happening in the brain when we enjoy a song or study a painting. Psychologists are studying the ways novels and TV shows shape our politics and our morality. Evolutionary psychologists and literary scholars are teaming up to explore narrative's Darwinian origins. And other literary scholars are developing a "digital humanities," using algorithms to extract big data from digitized literature. But scientific work in the humanities has mainly been scattered, preliminary, and desultory. It does not constitute a research program.

If we want better answers to fundamental questions about art, science must jump into the game with both feet. Going it alone, humanities scholars can tell intriguing stories about the origins and significance of art, but they don't have the tools to patiently winnow the field of competing ideas. That's what the scientific method is for: separating the stories that are more accurate from the stories that are less accurate. But make no mistake, a strong science of art will require both the thick, granular expertise of humanities scholars and the clever hypothesis testing of scientists. I'm not calling for a scientific takeover of the arts. I'm calling for a partnership.

This partnership faces great obstacles. There's the unexamined assumption that something in art makes it science-proof. There's a widespread, if usually unspoken, belief that art is just a frill in human life—relatively unimportant compared to the weighty stuff of science. And there's the weird idea that science necessarily destroys the beauty it seeks to explain (as though a learned astronomer really could dull the star shine). But the Delphic admonition "know thyself" still rings out as the great prime directive of intellectual inquiry, and there will always be a gaping hole in human self-knowledge until we develop a science of art.

azra_raza_md's picture
Chan Soon-Shiong Professor of Medicine, Columbia University Medical Center; Author, The First Cell

An obvious truth, which is either being ignored or going unaddressed in cancer research, is that mouse models do not mimic human disease well and are essentially worthless for drug development. We cured acute leukemia in mice in 1977 with drugs that we are still using in exactly the same dosage and duration today in humans—with dreadful results. Imagine the artificiality of taking human tumor cells, growing them in lab dishes, then transferring them to mice whose immune systems have been compromised so they cannot reject the implanted tumors, and then exposing these "xenografts" to drugs whose killing efficiency and toxicity profiles will then be applied to treat human cancers. The inherent pitfalls of such an entirely synthetic, unnatural model system have also plagued other disciplines.

A recent scientific paper showed that nearly 150 drugs tested, at a cost of billions of dollars, in human trials of sepsis failed because the drugs had been developed using mice. Unfortunately, what looks like sepsis in mice turned out to be very different from what sepsis is in humans. Coverage of this study by Gina Kolata in the New York Times brought a heated response from within the biomedical research community: "There is no basis for leveraging a niche piece of research to imply that mice are useless models for all human diseases. . . . The key is to construct the appropriate mouse models and design the experimental conditions that mirror the human situation."

The problem is that there are no appropriate mouse models which can "mirror the human situation." So why is the cancer research community still dominated by the dysfunctional tradition of employing mouse models to test hypotheses for development of new drugs?

Robert Weinberg of the Whitehead Institute at MIT has provided the best answer. In an interview, he offered two reasons. First, there's no other model with which to replace that poor mouse, and second, the FDA "has created inertia because it continues to recognize these [models] as the gold standard for predicting the utility of drugs."

There is a third reason related more to the frailties of human nature. Too many eminent laboratories and illustrious researchers have devoted entire lives to studying malignant diseases in mouse models, and they are the ones reviewing one another's grants and deciding where the NIH money gets spent. They are not prepared to concede that mouse models are basically valueless for most cancer therapeutics.

One of the main reasons we continue to stick to this archaic ethos is to obtain funding. Here is one example: back in the early 1980s, I decided to study a malignant bone-marrow disease called myelodysplastic syndromes (MDS), which frequently evolves into acute leukemia. One early decision I made was to concentrate my research on freshly obtained human cells and not to rely on mice or petri dishes alone. In the last three decades, I have collected over 50,000 bone-marrow biopsies, normal control buccal smear cells, and blood, serum, and plasma samples in a well-annotated tissue repository backed by a computerized bank of clinical, pathologic, and morphologic data. Using these samples, we have identified novel genes involved in causing certain types of MDS, as well as sets of genes related to survival, natural history of the disease, and response to therapy. But when I used bone-marrow cells from treated MDS patients to develop a genomic-expression profile that was startlingly predictive of response and applied for an NIH grant to validate the signature, the main criticism was that before confirming it through a prospective trial in humans, I should first reproduce it in mice.

It's time to let go of the mouse models—at least, as surrogates for bringing drugs to the bedside. Remember what Mark Twain said: "What gets us into trouble is not what we don't know; it's what we know for sure that just ain't so."

mihaly_csikszentmihalyi's picture
Psychologist; Director, Quality of Life Research Center, Claremont Graduate University; Author, Flow

Note that in the quote in this year's Edge Question, Max Planck speaks of scientific truths "triumphing". Truths don't triumph; the people who propose them do. What needs to be retired is the faith that what scientists say amounts to objective truth, with a reality independent of scientific claims. Some claims are indeed true, but others depend on so many initial conditions that they straddle the boundary between reality and fiction.

A good chess move allows a player to triumph over his opponent. Does that mean that the move is triumphant? Maybe it is in chess. We can only wish that the triumphs of science will be as innocuous.

alex_sandy_pentland's picture
Professor of Computer Science, MIT; Director, MIT Connection Science and Human Dynamics labs; Author, Social Physics

Researchers argue about the extent to which people are rational, but the real problem with the concept of the rational individual is that our desires, preferences and decisions are not primarily the result of individual thinking. Because economics and much of cognitive science takes the unit of analysis to be an independent individual, they have difficulty accounting for social phenomena such as financial bubbles, political movements, panics, technology trends, or even the course of scientific progress.

Near the end of the 1700s, philosophers began to declare that humans were rational individuals. People were flattered by being recognized as individuals, and by being called rational, and the idea soon wormed its way into the belief systems of nearly everyone in upper-class Western society. Despite resistance from church and state, this idea of rational individuality replaced the assumption that truth only came from god and king. Over time, the ideas of rationality and individualism changed the entire belief system of Western intellectual society, and today it is doing the same to the belief systems of other cultures.

Recent research data from my lab and other labs are changing this argument, and we are now coming to realize that human behavior is determined as much by social context as by rational thinking or individual desires. Rationality, as economists use the term, means that an individual knows what he or she wants and acts to get it. But this new research shows that in this regard, social network effects often, and perhaps typically, dominate both the desires and the decisions about how individuals act.

Recently, economists have moved toward the idea of "bounded rationality," which means that we have biases and cognitive limitations that prevent us from realizing full rationality. Our dependence on social interactions, however, is not simply a bias or a cognitive limitation. Social learning is an important method of enhancing individual decision-making. Similarly, social influence is central to constructing the social norms that enable cooperative behavior. Our ability to survive and prosper is due to social learning and social influence at least as much as it is due to individual rationality.

These data tell us that what we want and value, as well as how we choose to act in order to obtain our desires, are a constantly evolving property of interactions with other people. Our desires and preferences are mostly based on what our peer community agrees is valuable, rather than on rational reflection based directly on our individual biological drives or inborn morals.

For instance, after the Great Recession of 2008, when many houses were suddenly worth less than their mortgages, researchers found that it only took a few people walking away from their houses and mortgages to convince many of their neighbors to do the same thing. A behavior that had previously been thought nearly criminal or immoral, i.e., purposely defaulting on a mortgage, now became common. Using the terminology of economics, in most things we are collectively rational, and only in some areas are we individually rational.

By mathematically modeling the social learning and social pressure between people, my colleagues and I have been able to accurately model and predict crowd phenomena such as this cascade of mortgage defaults. Importantly, we have also found that it is possible to shape real-world crowd behaviors by using social network incentives that alter the connections between people, and that these social incentives are much more effective than standard individual economic incentives. In one particularly striking example, we were able to use social network incentives to deflate a 'groupthink' bubble among foreign exchange traders and consequently double the return on investment of the individual traders.
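
Pentland doesn't spell out his models here, so the following is only a toy sketch of the kind of peer-influence dynamics the essay describes (my own invention, with made-up numbers; not his group's model): homeowners sit on a ring, each watches a few neighbors, and each defaults once enough of those neighbors have walked away. A tiny cluster of early defaulters is enough to tip the whole neighborhood.

# Toy illustration (my own sketch, not Pentland's model): threshold contagion
# on a ring of homeowners, standing in for "a few walk-aways convince the
# neighbors" dynamics described in the essay.
n = 200                       # homeowners arranged on a ring
k = 4                         # each observes k neighbors on either side
threshold = 0.25              # default once more than 25% of observed neighbors have

defaulted = [False] * n
for i in range(3):            # a small contiguous cluster of initial walk-aways
    defaulted[i] = True

changed = True
while changed:
    changed = False
    for i in range(n):
        if defaulted[i]:
            continue
        neighbors = [(i + d) % n for d in range(-k, k + 1) if d != 0]
        if sum(defaulted[j] for j in neighbors) / len(neighbors) > threshold:
            defaulted[i] = True
            changed = True

print("defaulters after cascade:", sum(defaulted), "of", n)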

So instead of individual rationality I believe that we have common sense. The collective intelligence of a community comes from the surrounding flow of ideas and examples; we learn from others in our environment, and these others learn from us. Over time, a community with members who actively engage with each other creates a group with shared, integrated habits and beliefs. When the flow of ideas incorporates a constant stream of outside ideas as well, then the individuals in the community make better decisions than they could on their own.

This idea of a collective intelligence that develops within communities is an old one; indeed, it is embedded in the English language. Consider the word "kith," familiar to modern English speakers from the phrase "kith and kin." Derived from Old English and Old German words for knowledge, "kith" refers to a more or less cohesive group with common beliefs and customs. These are also the roots for "couth," which means possessing a high degree of sophistication, as well as its more familiar counterpart, uncouth. Thus, our kith is the circle of peers (not just friends) from whom we learn the "correct" habits of action.

Our ancestors understood that our culture and the habits of our society are social contracts, and that both depend primarily upon social learning. As a result, we learn most of our public beliefs and habits by observing the attitudes, actions, and outcomes of peers, rather than through logic or argument. Learning and reinforcing this social contract is what enables a group of people to coordinate their actions effectively. It is time that we dropped the fiction of individuals as the unit of rationality, and recognized that we are embedded in the surrounding social fabric.

luca_de_biase's picture
Journalist; Editor, Nova 24, of Il Sole 24 Ore

The tragedy of the commons is at an end, thanks to the writings of the late Nobel laureate Elinor Ostrom. But the well-deserved funeral has not yet been held. Thus, some consequences of the now-disproved theory proposed by Garrett Hardin in his famous 1968 article have still to be fully digested. This is urgent, because some of the major problems we face in our age are very much related to the commons: climate change, privacy and freedom on the Internet, the choice between copyright and the public domain in scientific knowledge.

Of course, the commons can be over-exploited. But what's wrong with Hardin's theory is the notion of "tragedy": by using that term, Hardin implied that a sort of destiny condemned the commons to be depleted. In Hardin's opinion, a large enough set of rational individuals who are free to choose will inevitably act in a way that exhausts the commons, because free rational individuals will always maximize their private advantage and collectivize the costs. Ostrom demonstrated that this tragic destiny need not come true: she found, all over the world, an impressive number of cases in which communities run the commons in a sustainable way, getting the most out of them without depleting them.

Ostrom's factual approach to the commons came with very good theory, too. In her account, the preconditions for a sustainable commons are clarity of the rules, collective and democratic decision-making, local and public mechanisms of conflict resolution, and the absence of conflict with other layers of government. These preconditions do exist in many historically proven situations, and there is no tragedy there. Cultures that understand the commons are contexts that make sustainable behaviour absolutely rational.

Hardin's approach, which he developed during the Cold War, was probably biased by ideological dualism. The commons didn't fit, as Ostrom writes, in a "dichotomous world of 'the market' and 'the state'." In a context in which "private property and deregulation" versus "state owned resources and regulation" were seen as the only two possible solutions, the commons were seen as a losing system condemned to become an idea of the past.

But the Internet has grown to become the biggest commons of knowledge in history. It would be very difficult to argue that the Internet is a losing system. In the last twenty years, the commons of the Internet have changed the world. Of course, the Internet can be over-exploited, by gigantic private companies or by state-run secret services. But there is no tragic destiny that condemns the Internet to be ruined. To save it, we can restart by understanding and preserving its clear rules, such as net neutrality, its multi-stakeholder governance, and its transparent way of enforcing those rules and that governance. Wikipedia has demonstrated that this is possible.

There is no tragedy; there are conflicts, though. And they can be better understood by embracing a vision that is open to Ostrom's notion of polycentric governance of complex economic systems. The danger of a closed vision that only understands conflicts between state regulation and market freedom seems even more catastrophic when we think about climate change and other environmental issues. When we think about the environment, the idea of the commons seems a much more generative notion than many other solutions. It is not a guarantee of a solution, but it is a better place to start. The theory of "the tragedy of the commons" has now clearly become a comedy. But it can be a really sad comedy if we don't finish with it and move on.

aubrey_de_grey's picture
Gerontologist; Chief Science Officer, SENS Foundation; Author, Ending Aging

From top to bottom of the profession, scientists are forsaking their chosen vocation in greater numbers than ever before, in favour of a more dependable and less stressful source of income. What is the basis of this stress and uncertainty, which so severely depletes the ranks of that indispensable community who seek to further humanity's understanding of nature, and thereby our ability to manipulate nature for the greater good? At the sharp end, it is the members of those ranks—scientists themselves—via the convention of apportioning funding by peer review of grant applications.

Only at the sharp end, of course: I certainly do not lay blame at scientists' feet. In fact, I don't really lay blame anywhere: the issue is that the prevailing system evolved in a different time, and in circumstances to which it was well suited, but has signally failed to adapt—indeed, has shown itself intrinsically non-adaptable—to present conditions. What is needed is a replacement system, which solves the problems that everyone in science agrees exist today but which still distributes funds according to metrics that all constituencies agree are fair.

The basic obstacle to doing this is that the overall merit of the contemporary peer-review system is apparently a local maximum: numerous tweaks have been proposed, but all have resisted adoption because they do more harm than good. But is it a global maximum: is it, as Churchill described democracy, the worst option except for all the others, or could a radical departure rank more highly by all key measures? Here I sketch a possible option. I am not sure it ticks all the boxes (though I do quite like it), but I do claim it shows sufficient promise as a candidate that the scientific community should no longer acquiesce in the current system on the assumption that nothing better is possible.

First, briefly: what's so wrong with peer review of grant applications these days? Two words: pay line. Peer review evolved when the balance between supply and demand of public research funds was such that at least 30% of applications could be funded. It worked well: if you didn't really know how to design a project, or how to communicate its value to your colleagues, or how to perform it economically, these failings would emerge and you would learn how to avoid them until eventually those colleagues would recommend to the government that you be given your chance. But these days, the corresponding percentage is typically in single digits. Does that mean you just have to be really good? I wish.

What it actually means is that you have to be not only really good but also really persistent, and moreover—and this is by far the worst aspect—really, really convincing in your argument that the project will succeed. What's so bad about that? Simply that some projects are (much) easier than others, and the hard ones tend to be those that determine the long-term rate of progress of a discipline, even though they have a significant failure rate. As such, a system that overwhelmingly neglects high-risk high-gain work hugely slows scientific progress, with catastrophic consequences for humanity. Also, cross-disciplinary research—work drawing together ideas not previously combined, which historically has also been exceptionally fruitful—is almost impossible to get funded, simply because no research panel ("study section", in NIH vernacular) has the necessary range of expertise to understand the proposal's full value.

I claim that this would be largely solved by a system based on peer recognition rather than peer review. When a scientist first applies for public research funds, his or her career would be divided into five-year periods, starting with the past five years (period 0), the coming five (period 1), etc. Period 1 is funded at a low, entry-level rate on the basis of simple qualifications (possession of a doctorate, number of years of postdoctoral study, etc), and without the researcher having provided any description of what specific research is to be undertaken. Period 2's funding level is determined, as a percentage of total funds available for the scientist's discipline of choice, again without any description of what work is planned to be performed, but instead on the basis of how well cited was his or her work performed in period 0.

This decision is made at the end of period 1 year 4, based on all citations since period 0 year 2 (so a total of eight years) to papers published in period 0 year 2 through period 1 year 1 (five years, approximating the interval when work done during period 0 will have been published). Citations are weighted according to whether one is a first/senior/middle author; self-citations are not counted; only papers reporting new research that depended on research funds are counted. Consideration is given to seniority and level of funding during the relevant period, according to a formula applied across the board rather than by discretion. Funding for period 3 is determined similarly, at the end of period 2 year 4, on the basis of work performed during period 1, and so on. Flexibility is incorporated concerning front-loading of funds to year 1 of a given period, to allow for large capital expenditures.
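
As a reader's sketch of the bookkeeping this scheme implies (my own illustration of the rules as stated; the author gives no formula, so the weights and adjustments below are invented placeholders), the citation-scoring step might look roughly like this:

# Reader's sketch of the scoring rule described above; the essay gives no
# formula, so the author-role weights and the adjustments are placeholders.
AUTHOR_WEIGHT = {"first": 1.0, "senior": 1.0, "middle": 0.5}   # assumed weights

def paper_score(citations, role, self_citations=0):
    """Weighted citation count for one eligible paper; self-citations excluded."""
    return AUTHOR_WEIGHT[role] * max(citations - self_citations, 0)

def period_score(papers, seniority_factor=1.0, prior_funding=1.0):
    """Aggregate score over the five-year window of eligible papers, adjusted
    by a fixed, non-discretionary formula for seniority and prior funding."""
    raw = sum(paper_score(**p) for p in papers)
    return raw * seniority_factor / prior_funding

# Example: three papers published in the eligible window (invented numbers).
papers = [
    {"citations": 40, "role": "first",  "self_citations": 5},
    {"citations": 15, "role": "middle"},
    {"citations": 60, "role": "senior", "self_citations": 10},
]
score = period_score(papers, seniority_factor=0.9, prior_funding=1.2)

# Funding for the next period would then be the scientist's share of the
# discipline's pot, proportional to this score relative to all applicants.
print("period score:", round(score, 1))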

This improves on the current system in many ways. Zero time is spent preparing and submitting (and re-submitting…) descriptions of proposed research, and zero money on evaluating such proposals. Bias against high-risk high-gain work is greatly reduced, both by the lack of peer review and also because funding periods exceed the currently-typical three years. Significance of past work is evaluated after an appropriate period of time, not by such "first-impression" measures as the impact factor of journals where one has just published. One can also split one's application across multiple disciplines, with the funding level from each discipline apportioned accordingly, removing the bias against cross-disciplinary research. Finally, one has a year at the end of a period to plan what work one will do in the next period, in full knowledge of what resources will be at one's disposal.

Researchers are of course free to seek additional funds from elsewhere (and indeed, some public funds could still be apportioned via the traditional method). Thus, this need not even be a particularly massive dislocation: it could easily be phased in.

Worth considering?

rebecca_newberger_goldstein's picture
Philosopher, Novelist; Recipient, 2014 National Humanities Medal; Author, Plato at the Googleplex; 36 Arguments for the Existence of God: A Work of Fiction

The obsolescence of philosophy is often taken to be a consequence of science. After all, science has a history of repeatedly inheriting—and definitively answering—questions over which philosophers have futilely hemmed and hawed for unconscionable amounts of time. It's been that way from the beginning. Those irrepressible ancient Greeks, Thales & Co., in speculating about the ultimate constituents of the physical world and the laws that govern its changes, were asking questions that awaited answers from physics and cosmology. And so it has gone, science transforming philosophy's vagaries into empirically testable theories, right down to our own scientifically explosive period, when the advancement of cognitive and affective neuroscience has brought such questions as the nature of consciousness, of free will, and of morality—those perennials of the philosophy curriculum—under the gaze of fMRI-enhanced scientists. Philosophy's role in the business of knowledge—or so goes the story—is to send up a signal reading "Science desperately needed here." Or, changing the metaphor, philosophy is a cold storage room in which questions are shelved until the sciences get around to handling them. Or, to change the metaphor yet again, philosophers are premature ejaculators who descant too soon, spilling their seminal genius to no effect. Choose your metaphor, the moral of the story is that the history of scientific expansion is the history of philosophical contraction, and the natural progression ends in the elimination of philosophy.

What's wrong with this story? Well, for starters it's internally incoherent. You can't argue for science making philosophy obsolete without indulging in philosophical arguments. You're going to need to argue, for example, for a clear criterion for distinguishing between scientific and non-scientific theories of the world. When pressed for an answer to the so-called demarcation problem, scientists almost automatically reach for the notion of "falsifiability" first proposed by Karl Popper. His profession? Philosophy. But whatever criterion you offer, its defense is going to implicate you in philosophy. Likewise with the unavoidable question—especially for those who argue philosophy's obsolescence—of what it is that we're doing in doing science. Are we offering descriptions of reality and so extending our ontology in discovering the entities and forces utilized in our best scientific theories? Have we learned, as scientific realism would have it, that there are genes and neurons, fermions and bosons, perhaps a multiverse? Or are these theoretical terms not meant to be interpreted as references to things in the world at all but as mere metaphorical gears in the instruments of prediction known as theories? Presumably scientists care about the philosophical question of whether they are actually talking about anything other than observations when they do their science. Even more to the point, the view that science eliminates philosophy requires a philosophical defense of scientific realism. (And if you think not, then that's going to require a philosophical argument.)

A triumphalist scientism needs philosophy to support itself. And the lesson here should be generalized. Philosophy is joined to science in reason's project. Its mandate is to render our views and our attitudes maximally coherent. This involves it in the task of (in Wilfrid Sellars's terms) reconciling the "scientific" and the "manifest" images we have of our being in the world, which also involves philosophy in providing the reasoning that science requires in order to claim its image as descriptive.

Perhaps the old demarcation problem of distinguishing the scientific is misguided. The more important demarcation is distinguishing all that is implicated in and reconcilable with the scientific claims of knowledge. This leads me to hazard a more utopian answer to this year's Edge Question than the one I proposed in the title. What idea should science retire? The idea of "science" itself. Let's retire it in favor of the more inclusive "knowledge." 

scott_atran's picture
Anthropologist; Emeritus Research Director, Centre National de la Recherche Scientifique, Institut Jean Nicod, Paris; Co-Founder, Centre for the Resolution of Intractable Conflict, University of Oxford; Author, Talking to the Enemy
IQ

There is no reason to believe, and much reason not to believe, that the measure of a so-called "Intelligence Quotient" in any way reflects some basic cognitive capacity, or "natural kind" of the human mind. The domain-general measure of IQ is not motivated by any recent discovery of cognitive or developmental psychology. It thoroughly confounds domain-specific abilities—distinct mental capacities for, say, geometrical and spatial reasoning about shapes and positions, mechanical reasoning about mass and motion, taxonomic reasoning about biological kinds, social reasoning about other people's beliefs and desires, and so on—which are the only sorts of cognitive abilities for which an evolutionary account seems plausible in terms of natural selection for task-specific competencies.

Nowhere in the animal or plant kingdoms does there ever appear to have been natural selection for a task-general adaptation. An overall measure of intelligence or mental competence is akin to an overall measure for "the body," taking no special account of the various and specific bodily organs and functions, such as hearts, lungs, stomachs, circulation, respiration, digestion, and so on. A doctor or biologist presented with a single measure for "Body Quotient" (BQ) wouldn't be able to make much of it.

IQ is a general measure of socially acceptable categorization and reasoning skills. IQ tests were designed in behaviorism's heyday, when there was little interest in cognitive structure. The scoring system was tooled to generate a normal distribution of scores with a mean of 100 and a standard deviation of 15.
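
To make the point concrete (my illustration, not Atran's): the 100/15 scale is just a rescaling convention, so any batch of raw scores, whatever the test, can be forced onto it.

# Illustration of the scoring convention described above (my sketch, not
# Atran's): any set of raw scores can be mapped onto the mean-100, SD-15 scale.
import statistics

def to_iq_scale(raw_scores):
    """Standardize raw scores and rescale to mean 100, standard deviation 15."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.stdev(raw_scores)
    return [100 + 15 * (x - mu) / sigma for x in raw_scores]

raw = [12, 19, 23, 27, 31, 35, 42]        # arbitrary raw test scores
print([round(s) for s in to_iq_scale(raw)])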

In other societies, a normal distribution of some general measure of social intelligence might look very different: some "normal" members of our society could well score a full standard deviation away from "normal" members of another society on that other society's test. For example, in forced-choice tasks East Asian students (China, Korea, Japan) tend to favor field-dependent perception over object-salient perception, thematic reasoning over taxonomic reasoning, and exemplar-based categorization over rule-based categorization.

American students generally prefer the opposite. On tests that measure these various categorization and reasoning skills, East Asians average higher on their preferences and Americans average higher on theirs. There is nothing particularly revealing about these different distributions other than that they reflect some underlying socio-cultural differences.

There is a long history of acrimonious debate over which, if any, aspects of IQ are heritable. The most compelling studies concern twins raised apart and adoptions. Twin studies rarely have large sample populations. Moreover, they often involve twins separated at birth because a parent dies or cannot afford to support both, and one is given over to be raised by relatives, friends or neighbors. This makes it impossible to rule out the effects of social environment and upbringing in producing convergence among the twins. The chief problem with adoption studies is that the mere fact of adoption reliably increases IQ, regardless of any correlation between the IQs of the children and those of their biological parents. Nobody has the slightest causal account of how or why genes, singly or in combination, might affect IQ. I don't think it's because the problem is too hard, but because IQ is a specious rather than a natural kind.

sherry_turkle's picture
Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, The Empathy Diaries

In the early 1980s, I interviewed a young student of Marvin Minsky's, one of the founders of Artificial Intelligence. The student told me that, as he saw it, his hero, Minsky, was trying to build a machine beautiful enough that "a soul would want to live in it." More recently, we are perhaps less metaphysical, more practical. We envisage eldercare-bots, nanny-bots, teacher-bots, sex-bots. To go back to Minsky's student, these days, we're not trying to invent machines that souls would want to live in but that we would want to live with. We are trying to invent machines that a self would want to love.

The dream of the artificial confidante and then love object confuses categories that are best left unmuddled. Human beings have bodies and a life cycle; they live in families and grow up from dependence to independence. This gives them experiences of attachment, loss, pain, fear of illness, and of course the experience of death, experiences that are specific to humans and that we don't share with machines. To say this does not mean that machines can't get very smart or learn a stunning amount of things, more things certainly than people can know. But they are the wrong object for the job when we want companionship and love.

A machine companion for instrumental help (to keep one safe in one's home, to help with cleaning or with reaching high shelves) is an excellent idea. A machine companion for conversation about human relationships seems a bad one. A conversation about human relationships is species-specific. These conversations depend on having the experiences that come from having a human body, human limitations, a human life cycle.

I see us embarked on a voyage of forgetting.

We forget about the care and conversation that can only pass between people. The word conversation derives from words that mean to tend to each other, to lean toward each other. To converse, you have to listen to someone else, to put yourself in their place, to read their body, their voice, their tone, their silences. You bring your concern and experience to bear and you expect the same. A robot that shares information is an excellent project. But if the project is companionship and mutuality of attachment, you want to lean toward a human.

When we think, for example, about giving children robot babysitters, we forget that what makes children thrive is learning that people care for them in a stable and consistent way. When children are with people, they recognize how the movement and meaning of speech, voice, inflection, faces, and bodies flow together. Children learn how human emotions play in layers, seamlessly and fluidly. No robot has this to teach.

There is a general pattern in our discussions of robot companionship: I call it "from better than nothing to better than anything." I hear people begin with the idea that robot companionship is better than nothing, as in "there are no people for these jobs," for example jobs in nursing homes or as babysitters. And then they start to exalt the possibilities of what simulation can offer. In time, people start to talk as though what we will get from the artificial might be better than what life could provide. Childcare workers might be abusive. Nurses might make mistakes; nursing home attendants might not be clever or well educated.

The appeal of robotic companions carries our anxieties about people. We see artificial intelligence as a risk-free way to avoid being alone. We fear that we will not be there to care for each other. We are drawn to the robotic because it offers the illusion of companionship without the demands of friendship. Increasingly, people even suggest that it might offer the illusion of love without the demands of intimacy. We are willing to put robots in places where they have no place, not because they belong there but because of our disappointments with each other.

For a long time, putting hope in artificial intelligence or robots has expressed an enduring technological optimism, a belief that as things go wrong, science will go right. In a complicated world, robots have always seemed like calling in the cavalry. Robots save lives in war zones and operating rooms; they can function in deep space, in the desert, in the sea, wherever the human body would be in danger. But in the pursuit of artificial companionship, we are not looking for the feats of the cavalry but the benefits of simple salvations.

What are the simple salvations? These are the hopes that artificial intelligences will be our companions. That talking with us will be their vocation. That we will take comfort in their company and conversation.

In my research over the past fifteen years, I've watched these hopes for the simple salvations persist and grow stronger even though most people don't have experience with an artificial companion at all, but with something like Siri, Apple's digital assistant on the iPhone, where the conversation is most likely to be "locate a restaurant" or "locate a friend."

But what my research shows is that even telling Siri to "locate a friend" moves quickly to the fantasy of finding a friend in Siri, something like a best friend, but in some ways better: one you can always talk to, one that will never be angry, one you can never disappoint.

When people talk this way, about friendship without mutuality, about friendship on tap, the simple salvations of artificial companionship don't seem so simple to me. For the idea of artificial companionship to become our new normal, we have to change ourselves, and in the process, we are remaking human values and human connection. We change ourselves even before we make the machines. We think we are making new machines but really we are remaking people.

patricia_s_churchland's picture
Philosopher and Neuroscientist; Author, Conscience: The Origins of Moral Intuition

The concept of ‘module’ in neuroscience (meaning sufficient for a function, given gas-in-the-tank background conditions) invariably causes more confusion than clarity. The problem is that any neuronal business of any significant complexity is underpinned by spatially distributed networks, and not just incidentally but essentially—and not just cortically, but between cortical and subcortical networks. This is true, for example, of motion perception and pattern recognition, as well as motor control and reinforcement learning, not to mention feelings such as mustering courage to face a threat or deciding to hide instead of run. It is true of self-control and moral judgment. It is likely to be true of conscious experience. The output of a network can vary as the activity of the network’s individual neurons varies. What is poorly understood is how nervous systems solve the coordination problem; i.e. how does the brain orchestrate the right pattern of neuronal activation across networks to get the job done?

This is not all that is amiss with ‘module'. Traditionally, modules are supposed to be encapsulated, a.k.a. insulated. But even the degree to which an early sensory area, such as primary visual cortex (V1), is encapsulated has been challenged. Visual neurons in V1 double their firing rate if the animal is running, no matter the identity of the visual input across conditions, to take but one example. To add to the ‘module' mess, it turns out that specialization in an area such as V1 appears to depend to some nontrivial degree on the statistics of the input. Visual cortex is visual largely because it is connected to the retina and not the cochlea, for example. Notice that in blind subjects, the visual cortex is recruited in reading braille, a high-resolution spatial—and somatosensory—task. As would be expected if specialization depends on the statistics of the network's input, infant brains have much more plasticity in regional specialization than do mature brains. Doris Trauner and Elizabeth Bates discovered that human infants with a left hemispherectomy can learn language quite normally, whereas an adult who undergoes the same surgery will have severe language deficits.

I think of ‘module’ in the way I think of ‘nervous breakdown’—mildly useful in the old days when we had no clue about what was going on under the skull, but of doubtful explanatory significance these days. 

peter_woit's picture
Mathematical Physicist, Columbia University; Author, Not Even Wrong

For anyone currently thinking about fundamental physics, this latest Edge question is easy, with an obvious answer: string theory. The idea of unifying physics by positing strings moving in ten space-time dimensions as fundamental entities was born in 1974, and became the dominant paradigm for unification from 1984 on. After 40 years of research and literally tens of thousands of papers, what we've learned is that this is an empty idea. It predicts nothing about anything, since one can get pretty much any physics one wants by appropriately choosing how to make six of the ten dimensions invisible.

Despite this, proponents of the string theory unification idea refuse to admit what has happened to it, often providing excellent examples of Planck's observation about what happens as scientists grow old while staying true to ideas that should have been discarded. Instead of retiring a failed idea, lately one hears instead that what needs to be retired are conventional ideas about scientific progress. According to string theorists, we live in an obscure corner of a multiverse where anything goes, and this "anything goes" fits right in with string theory, so fundamental physics has reached its end-point.

The "string theory" answer to the 2014 Edge question is however much too simplistic. String theory unification has long been a moribund idea, but it is just part of a much larger circle of now-failed ideas dating from exactly the same time period. These include so-called "grand unification" schemes that propose new forces and particles, generally invoking a new "supersymmetry" that relates known forces and particles to unseen "superpartners". Besides finding the predicted Higgs particle, the other great discovery of the LHC has been that the superpartners predicted by many theorists aren't there.

The period around 1974 brought us not only string theory, grand unification, and supersymmetry, but also something called the "naturalness" argument. The idea here is that our best model of particle physics, the Standard Model, is just an "effective theory", an approximation valid only at observable distance scales. Ken Wilson taught us how to use the "renormalization group" not only to extrapolate the behavior of a theory to short distances we can't observe, but also to run this backwards, finding an effective theory for a fundamental theory defined at unobservably small distances. In a technical sense, "natural" theories are the ones where what we see is insensitive to the details of what happens at short distances. "Naturalness" became part of the speculative picture born around 1974: complicated new physics involving unobserved strings and superpartners could be postulated at very short distances, with a "natural" theory all that is visible to us. In this picture it is technical "naturalness" which ensures that we can't see any of the complexities introduced by unobservably small strings or superpartners.

Wilson was among the first to point out that the Standard Model is mostly "natural", but not entirely so due to the behavior of the Higgs particle. At first he argued that this meant that at LHC energies we should see not the Higgs, but something different. Fans of superpartners argued that such particles had to exist at roughly the same energy as the Higgs, since if so, they could be used to cancel the "unnaturalness". Long before the LHC turned on, Wilson had retracted this argument as a blunder, deciding there was no good reason not to see an "unnatural" Higgs. The sensitivity of its behavior to what happens at very short distances is not a good argument against it, since we simply don't know what is going on at such short distances.

The observation at the LHC of the Higgs, but no superpartners, has caused great consternation among theorists. Something has happened that should not have been possible according to the forty-year-old reasoning now well-embedded in textbooks. Arguments are being made that this is yet more evidence for the multiverse. In this "anthropic" view, anything goes at short distances for bubble-universes elsewhere in the multiverse, but we see something "unnaturally" simple in our bubble-universe because otherwise we wouldn't be here. The rise of such reasoning shows that sending the "naturalness" argument into retirement (along with the epicyclic complexity of strings and superpartners) is now something long overdue.

gerald_smallberg's picture
Practicing Neurologist, New York City; Playwright, Off-Off Broadway Productions, Charter Members; The Gold Ring

The Law of Parsimony, also known as Occam's razor, does not warrant a funeral, but it does have some problems in its description of reality. This law states that the simpler of two competing theories should be preferred, and that entities should not be multiplied needlessly. It maintains a lofty stature in philosophy and science and is often utilized as a literary device. The Law of Parsimony is the essence of good detective fiction, perhaps nowhere better achieved than by Arthur Conan Doyle, a physician who sharpened Occam's razor to perfection in the reasoning employed by his renowned creation, Sherlock Holmes. One of Holmes' most noted rules is that "when you have eliminated the impossible, whatever remains, however improbable, must be the truth."

As an absolute, the Law of Parsimony is floundering, not because it is aging poorly, but because it is being challenged more and more by the complexity of the real world, which demands a valid counterweight. From my vantage point as a physician in the practice of clinical neurology, the law, though it has always been a guiding principle for me, can easily lead to blind spots and errors in judgment when rigidly followed.

A recent case in point is a 79-year-old woman who was complaining of difficulty with her balance and several recent falls. This could be dismissed as a consequence of age. However, she has multiple other factors in her history that need to be taken into account, including a diabetic neuropathy making her feet lose sensation, as well as compression of her cervical spinal cord producing weakness in her legs. She also has a hearing problem with a long history of intermittent vertigo. In addition, she is of Scandinavian descent, making her somewhat more prone genetically to vitamin B12 deficiency secondary to poor absorption, which in her case may be exacerbated by medications used to inhibit acid reflux. This vitamin deficiency by itself can produce neuropathy and degeneration of the spinal cord. It is in this complicated clinical setting that the Law of Parsimony utterly fails, and I doubt that even the great Holmes, who has the luxury of being a fictional character, could tie all of these loose ends into a simple knot. To provide the appropriate care to this patient, I needed to invoke Hickam's Dictum, the medical profession's counterargument to Occam's razor. This maxim of Dr. John Hickam, who died in 1970, states very simply that "a patient can have as many diagnoses as [she] damn well pleases."

The crucial role that the Law of Parsimony plays in how we reason is beyond question. This law dates back to the Greek philosophers, who refined it from their antecedents; I suspect we evolved to seek simplicity over complexity. The desire for unity and singleness is satisfying and very seductive. At times, however, it needs to be challenged by Hickam's Dictum, which is a variation of the Principle of Plenitude. This view of reality also dates to ancient Greek philosophy; it postulates that if the universe is to be as perfect as possible, it must be as full as possible, in the sense that it contains as many kinds of things as it possibly could contain.

With the complexity, inconsistency, ambiguity and ultimate uncertainty that define our reality, we should not limit ourselves to using only one or the other of these valuable tools of analysis. We need to be more willing to have our own positions challenged, striving to keep an open mind to other arguments, other viewpoints and conflicting data. In order to make the best decisions for the best reasons, we must choose the appropriate heuristics coupled with intellectual honesty to guide our thinking as we grapple with the cunning machinations of the world we inhabit.

charles_seife's picture
Professor of Journalism, New York University; Former Journalist, Science Magazine; Author, Hawking Hawking

It's a boon for the mediocre and for the credulous, for the dishonest and for the merely incompetent. It turns a meaningless result into something publishable, transforms a waste of time and effort into the raw fuel of scientific careers. It was designed to help researchers distinguish a real effect from a statistical fluke, but it has become a quantitative justification for dressing nonsense up in the mantle of respectability. And it's the single biggest reason that most of the scientific and medical literature isn't worth the paper it's written on.

When used correctly, the concept of statistical significance is a measure to rule out the vagaries of chance, nothing more, nothing less. Say, for example, you are testing the effectiveness of a drug. Even if the compound is completely inert, there's a very good chance (roughly 50%, in fact) that patients will respond better to your drug than to a placebo. Randomness alone might imbue your drug with seeming efficacy. But the more marked the difference between the drug and the placebo, the less likely it is that randomness alone is responsible. A "statistically significant" result is one that has passed an arbitrary threshold. In most social science journals and the medical literature, an observation is typically considered statistically significant if there's less than a five percent chance that pure randomness can account for the effect that you're seeing. In physics, the threshold is usually lower, often 0.3% (three sigma) or even 0.00003% (five sigma). But the essential dictum is the same: if your result is striking enough so that it passes that threshold, it is given a weighty label: statistically significant.
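To make those thresholds concrete, here is a minimal sketch (in Python, not from the essay) that translates them into tail probabilities of a standard normal curve, assuming the usual conventions: a two-sided test for the three-sigma figure and a one-sided test for the five-sigma figure.

```python
# Minimal sketch: sigma thresholds as tail probabilities of a standard normal
# distribution (standard library only; conventions assumed as noted above).
from math import erfc, sqrt

def p_two_sided(z):
    # Chance that pure randomness lands at least z sigmas away in either direction.
    return erfc(z / sqrt(2))

def p_one_sided(z):
    # Chance of an excursion of at least z sigmas in one direction only.
    return 0.5 * erfc(z / sqrt(2))

print(f"3 sigma, two-sided: {p_two_sided(3):.2%}")   # ~0.27%, the essay's "0.3%"
print(f"5 sigma, one-sided: {p_one_sided(5):.7%}")   # ~0.00003%
```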

Most of the time, though, it isn't used correctly. If you look at a typical paper published in the peer-reviewed literature, you'll see that it is never just a single observation that gets tested for statistical significance, but instead handfuls, or dozens, or even a hundred or more. A researcher looking at a painkiller for arthritis sufferers will look at data to answer question after question: does the drug help a patient's pain? With knee pain? With back pain? With elbow pain? With severe pain? With moderate pain? With moderate to severe pain? Does it help a patient's range of motion? Quality of life? Each one of these questions is tested for statistical significance, and, typically, gauged against the industry-standard 5% rule. That is, there's a five percent chance—one in twenty—that randomness will make a worthless drug seem like it has an effect. But test ten questions, and there's a 40% chance that randomness will, indeed, deceive you when answering one or more of these questions. And the typical paper asks more than ten questions, often many more. It's possible to correct for this "multiple comparisons" problem mathematically (though it's not the norm to do so). It's also possible to fight this effect by committing to answer just one main question (though, in practice, such "primary outcomes" are surprisingly malleable). But even these corrections often can't take into account numerous effects that can undermine a researcher's calculations, such as how subtle changes in data classification can affect outcomes (is "severe" pain a 7 or above on a 10-point scale, or is it an 8 or above?). Sometimes these issues are overlooked; sometimes they're deliberately ignored or even manipulated.
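The arithmetic behind that 40% figure is simple compounding of the 5% false-positive rate. A short sketch (assuming independent questions, each tested at the 5% level; real studies are messier) makes the point, and includes the Bonferroni adjustment as one conventional example of a multiple-comparisons correction:

```python
# Sketch: how the family-wise chance of at least one statistical fluke grows
# with the number of independent questions, each tested at alpha = 0.05.
alpha = 0.05

for m in (1, 10, 20, 100):
    p_any_fluke = 1 - (1 - alpha) ** m   # P(randomness "succeeds" on at least one question)
    bonferroni = alpha / m               # one conventional correction: a stricter per-test threshold
    print(f"{m:3d} questions: {p_any_fluke:5.1%} chance of at least one fluke; "
          f"Bonferroni per-test threshold {bonferroni:.4f}")
```

With ten questions the family-wise chance of a fluke is about 40%, as the essay says; with a hundred it is a near certainty.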

In the best-case scenario, when statistical significance is calculated correctly, it doesn't tell you much. Sure, chance alone is (relatively) unlikely to be responsible for your observation. But it doesn't reveal anything about whether the protocol was set up correctly, whether a machine's calibration was off, whether a computer code was buggy, whether the experimenter properly blinded the data to prevent bias, whether the scientists truly understood all the possible sources of false signals, whether the glassware was properly sterilized, and so forth and so on. When an experiment fails, it's more than likely that the blame doesn't rest on randomness—on statistical flukes—but instead on a good old-fashioned screwup somewhere.

When physicists at CERN claimed to have spotted neutrinos moving faster than light, a six-sigma level of statistical significance (and an exhaustive check for errors) wasn't enough to convince smart physicists that the CERN team hadn't messed up somehow. The result clashed not only with physical law, but with observations of neutrinos coming from supernova explosions. Sure enough, a few months later, the flaw (a subtle one) finally emerged, negating the team's conclusion.

Screwups are surprisingly common in science. Consider, for example, the fact that the FDA inspects a few hundred clinical laboratories each year. Roughly 5% of inspections come back with findings that the laboratory is engaged in "significant objectionable conditions and practices" so egregious that its data are considered unreliable. Often these practices include outright fraud. Those are just the blindingly obvious problems visible to an inspector; it would be hard to imagine that the real number of lab screwups isn't double or triple or quintuple that. What value is there in calling something statistically significant at the 5% or 0.3% or even 0.00003% level if there's a 10% or 25% (or more) chance that the data is gravely undermined by a laboratory error? In this context, even the most iron-clad findings of statistical validity lose their meaning when dwarfed by the specter of error or, worse yet, fraud.

Nevertheless, even though statisticians warn against the practice, it's all too common for a one-size-fits-all finding of statistical significance to be taken as a shortcut to determine if an observation is credible—whether a finding is "publishable." As a consequence, the peer-reviewed literature is littered with statistically significant findings that are irreproducible and implausible, absurd observations with effect sizes orders of magnitude beyond what is even marginally believable.

The concept of "statistical significance" has become a quantitative crutch for the essentially qualitative process of whether or not to take a study seriously. Science would be much better off without it. 

robert_kurzban's picture
Psychologist, UPenn; Director, Penn Laboratory for Experimental Evolutionary Psychology (PLEEP); Author, Why Everyone (Else) is a Hypocrite

In the 17th century, René Descartes proposed that the nervous system worked a bit like the nifty statues in the royal gardens of Saint-Germain, whose moving parts were animated by water that ran through pipes inside of them. Descartes' idea is illustrated in the well-known line drawing that appears in many introductory psychology textbooks that shows a person puzzlingly sticking his foot in a fire, presumably to illustrate Descartes' idea about hydraulic reflexes.

Three centuries on, in the mid-1900's, the detritus of the hydraulic conception of behavior, now known to be luminously wrong, was strewn about here and there. In the scholarly literature, for instance, there were traces in Freud's corpus—catharsis will relieve all that pressure. Among the Folk, hydraulic metaphors were—and still are—used to express mental states. I'm going to blow my top. Having written an essay for Edge today, I feel drained.

There is, to be sure, still plenty of debate about how the mind works. No doubt even on the pages of this year's Question there will be spirited discussion about how well the brain-as-device-that-computes notion is doing to advance psychology. Still, while the computational theory of mind might not have won over everyone, the hydraulic model Descartes proposed is dead and buried.

Well, dead anyway. Buried… maybe not. (And, to be sure, hydraulics is, as it turned out, the right explanation for a pretty important (male) biological function; just not the one Descartes had in mind.) The metaphors that recruit the intuition that the mind is built of fluid-filled pipes, along with junctions, valves, and reservoirs, point to the possibility that Descartes was drawn to the notion of a hydraulic mind not only because of the technology of the day, but also because there is something intuitively compelling about the idea.

And, indeed, Cartesian hydraulics has been revived in at least one incarnation in the scholarly literature, though I doubt it's the only one. For the last decade or so, some researchers have been advancing the notion that there is a "reservoir" of willpower. You need something in the reservoir, the theory goes, in order to exert self-control—resisting eating marshmallows, avoiding distractions, etc.—and as the reservoir gets drained, it becomes harder and harder to exert self-control.

Given how wrong Descartes was about how the mind works, it's pretty clear that this sort of idea just can't be right. There have recently been a number of experimental results that disconfirm predictions made by the model, but that's not why the idea should be abandoned. Or, at least, the data aren't the best reason the idea should be abandoned. The reason the idea should be left to die is the same reason that Descartes' idea should be: Although the mind might not work just like a digital computer—no doubt the mind is different from your basic PC in any number of important ways—we do know that computation of some sort is much, much more likely to be a good explanation for human behavior than hydraulics.

People will disagree about whether Planck was right about the speed of scientific change. Psychology, I would argue, has a couple of handicaps that might make the discipline more susceptible to Planck's worries than some other disciplines.

First, theories in psychology are often driven by—indeed, held captive by—our intuitions. I'm fond of the way that Dan Dennett put it in 1991 when he was talking about the (also luminously wrong) idea of the Cartesian Theater, the dualist idea that there is a "special center in the brain," the epicenter of identity, the One and True Me, the wizard behind the curtain. He thought this notion was "the most tenacious bad idea bedeviling our attempts to think about consciousness." Human intuitions tell us that there's a special "me" in there somewhere, an intuition that serves to resurrect the idea of a special center over and over again.

Second, psychologists are too polite with each other's ideas. (Economists, for example, in my experience, don't frequently commit this particular sin.) In 2013, a prominent journal in psychology published a paper that reported the results of attempts to replicate a previously published finding. The title of the article was, before the colon, the phenomenon in question and then, after the colon: "Real or Elusive Phenomenon?" The pairing of real versus elusive as opposed to nonexistent highlights that it's considered so rude to suggest that a result was a false positive—as opposed to something that's simply hard to replicate—that people in the field won't even say out loud that prior work might have been pointing to something that isn't, really, there.

Of course intuitions interfere with theoretical innovation in other disciplines. No doubt the obviousness of the sun going around the Earth, bending across the sky each day, delayed acceptance of the heliocentric model. Everyone knows the mind isn't a hydraulic shovel, but it does feel like some sort of reservoir of stuff gets used up just as it does feel like the sun is moving while we stay put.

Still, it's time that Cartesian hydraulicism be put to rest in the same way that Cartesian dualism was. 

césar_hidalgo's picture
Associate Professor, MIT Media Lab; Author, Why Information Grows

Economic growth is one of those concepts that nobody wants to contradict. Even its detractors cannot avoid using it. They talk about green growth, sustainable growth, and in the most extreme cases, they talk about de-growth.

Yet economic growth, as a concept and a reality, is recent. Modern measures of economic growth are less than a century old, dating back to the invention of GDP by Simon Kuznets in the 1930's. Also, economists mostly agree that economies did not grow before the 19th century, so economic growth—as a phenomenon—is also recent.

Like many others, I believe the idea of economic growth is now ready for retirement. The question that lingers, therefore, is what will replace it, since economic growth will leave a void in public discourse, as both a staple paragraph of political campaigns and a recurrent topic in the news media.

But economic growth cannot last forever. If the GDP per capita of the U.S. grew, in real terms, at a modest rate of 1% for the next millennium, the average American would be making a whopping 1.1 billion dollars a year by the year 3014.
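That figure is just compound growth. A back-of-the-envelope sketch, assuming a starting point of roughly $53,000 (approximately the U.S. real GDP per capita in 2014; the essay does not give a base figure):

```python
# Back-of-the-envelope check of the billion-dollar figure.
# Assumption (not from the essay): starting real GDP per capita of about $53,000.
start = 53_000        # dollars per person per year, circa 2014
rate = 0.01           # 1% real growth per year
years = 1000          # from 2014 to roughly the year 3014

future = start * (1 + rate) ** years
print(f"${future:,.0f} per person per year")   # roughly $1.1 billion
```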

A more reasonable interpretation of this number is to think that the growth we have seen during the last century was part of an S-shaped curve, a phase-transition. This means that growth will either peter out during this millennium, or that we are measuring the wrong thing. Either way, we can conclude that the idea of economic growth is on its way out.

 

david_g_myers's picture
Professor of Psychology, Hope College; Co-author, Psychology, 11th Edition

In today's Freud-influenced popular psychology, repression remains big. People presume, for example, that unburying repressed traumas is therapeutic. But do we routinely exile our painful memories? "Traumatic memories are often repressed," agree 4 in 5 undergraduates and members of the American and British general publics (in recent surveys reported by a University of California, Irvine research team).

Actually, say today's memory researchers, there is little evidence of such repression, and much evidence of its opposite. Traumatic experiences (even witnessing a loved one's murder, being terrorized by a hijacker or a rapist, losing everything in a natural disaster) rarely get banished into the unconscious, like a ghost in a closet. Traumas more commonly get etched on the mind as persistent, haunting memories. Moreover, extreme stress and its associated hormones enhance memory, producing unwanted flashbacks that plague survivors. "You see the babies," said one Holocaust survivor. "You see the screaming mothers. . . . It's something you don't forget."

The scientist-therapist "memory war" lingers, but it is subsiding. Today's psychological scientists appreciate the enormous scope of unconscious, automatic information processing, even as mainstream therapists and clinical psychologists report increasing skepticism of repressed and recovered memories.

roger_schank's picture
CEO, Socratic Arts Inc.; John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University; Author, Make School Meaningful-And Fun!

It was always a terrible name, but it was also a bad idea. Bad ideas come and go, but this particular idea, that we would build machines that are just like people, has captivated popular culture for a long time. Nearly every year a new movie or work of fiction appears featuring a new kind of robot that is just like a person. But that robot will never appear in reality. It is not that Artificial Intelligence has failed; no one actually ever tried. (There, I have said it.)

David Deutsch, a physicist at Oxford, said: "No brain on Earth is yet close to knowing what brains do. The enterprise of achieving it artificially — the field of 'artificial intelligence' — has made no progress whatever during the entire six decades of its existence." He adds that he thinks machines that think like people will happen some day.

Let me put that remark in a different light. Will we eventually have machines that feel emotions like people do? When that question is asked of someone in AI, they might respond by describing how we could get a computer to laugh or to cry or to be angry. But actually feeling?

Or let's talk about learning. A computer can learn, can't it? That is Artificial Intelligence right there. No machine would be smart if it couldn't learn, but does the fact that Machine Learning has enabled the creation of a computer that can play Jeopardy or provide data about purchasing habits of consumers mean that AI is on its way?

The fact is that the name AI made outsiders to AI imagine goals for AI that AI never had. The founders of AI (with the exception of Marvin Minsky) were obsessed with chess playing and problem solving (the Tower of Hanoi problem was a big one). A machine that plays chess well does just that; it isn't thinking, nor is it smart. It certainly isn't acting like a human. The chess-playing computer won't play worse one day because it drank too much the night before or had a fight with its wife.

Why does this matter? Because a field that started out with a goal different from what its goal was perceived to be is headed for trouble. The founders of AI, and those who work on AI still (me included), want to make computers do things they cannot now do in the hope that something will be learned from this effort or that something will have been created that is of use. A computer that can hold an intelligent conversation with you would be potentially useful. I am working on a program now that will hold an intelligent conversation about medical issues with a user. Is my program intelligent? No. The program has no self knowledge. It doesn't know what it is saying and it doesn't know what it knows. The fact that we have stuck ourselves with this silly idea of intelligent machines or AI causes people to misperceive the real issues.

I declare Artificial Intelligence dead. The field should be renamed "the attempt to get computers to do really cool stuff," but of course it won't be. You will never have a friendly household robot with whom you can have deep, meaningful conversations. I happened to be a judge at this year's Turing Test (known as the Loebner Prize). The stupid stuff that was supposed to be AI was just that: stupid. It took maybe 30 seconds to figure out which was a human and which was a computer.

People do not just get fed knowledge. I have raised a couple of people myself. I fed them food, not knowledge. I answered their questions, but they were their own self-generated questions. I tried to help them get what they wanted, but it was (and is) their own deeply felt wants that I was dealing with. Humans are born with individual personalities and their own set of wants and needs, and they express them early on. No computer starts out knowing nothing and gradually improves by interacting with people. We always kick that idea around when we talk about AI, but no one ever does it because it really isn't possible. Nor should it be the goal of the field formerly known as AI. The goal should be figuring out what great stuff people do and seeing if machines can do bits and pieces of that. A chess-playing computer is nice to have, I suppose, but it won't tell you much about how people think, nor will it suddenly get interested in learning a new game to play because it is bored with chess.

There really is no need to create artificial humans anyway. We have enough real ones already.

paul_j_steinhardt's picture
Albert Einstein Professor in Science, Departments of Physics and Astrophysical Sciences, Princeton University; Coauthor, Endless Universe

A pervasive idea in fundamental physics and cosmology that should be retired: the notion that we live in a multiverse in which the laws of physics and the properties of the cosmos vary randomly from one patch of space to another. According to this view, the laws and properties within our observable universe cannot be explained or predicted because they are set by chance. Different regions of space too distant to ever be observed have different laws and properties, according to this picture. Over the entire multiverse, there are infinitely many distinct patches. Among these patches, in the words of Alan Guth, "anything that can happen will happen—and it will happen infinitely many times." Hence, I refer to this concept as a Theory of Anything.

Any observation or combination of observations is consistent with a Theory of Anything. No observation or combination of observations can disprove it. Proponents seem to revel in the fact that the Theory cannot be falsified. The rest of the scientific community should be up in arms since an unfalsifiable idea lies beyond the bounds of normal science. Yet, except for a few voices, there has been surprising complacency and, in some cases, grudging acceptance of a Theory of Anything as a logical possibility. The scientific journals are full of papers treating the Theory of Anything seriously. What is going on?

Have experiments revealed that our observable universe and the fundamental laws are too complicated to be explained by normal science? Absolutely not! Quite the opposite! On the macroscopic scale, the latest measurements show our observable universe to be remarkably simple, described by very few parameters, obeying the same physical laws throughout and exhibiting remarkably uniform structure in all directions. On the microscopic scale, the Large Hadron Collider at CERN (the European Organization for Nuclear Research) has revealed the existence of the Higgs, in accord with what theorists had predicted nearly 50 years ago based on sound scientific reasoning.

A simple outcome calls for a simple explanation for why it had to be so. Why, then, consider a Theory of Anything that allows any possibility, including complicated ones? The motivation is the failure of two favorite theoretical ideas—inflationary cosmology and string theory. Both were thought to produce a unique outcome. Inflationary cosmology was invented to transform the entire cosmos into a smooth universe populated by a scale-invariant distribution of hot spots and cold spots, just as we observe it to be. String theory was supposed to explain why elementary particles could only have the precise masses and forces that they do. After more than 30 years investment in each of these ideas, theorists have found that they are not able to achieve these ambitious goals. Inflation, once started, runs eternally and produces a multiverse of pockets whose properties vary over every conceivable possibility—flat and non-flat; smooth and non-smooth; scale-invariant and not scale-invariant; etc. Despite laudable efforts by many theorists to save the theory, there is no solid reason known today why inflation should cause our observable universe to be in a pocket with the smoothness and other very simple properties we observe. A continuum of other conditions is equally possible.

In string theory, a similar explosion of possibilities has occurred, driven by attempts to explain the 1998 discovery of the accelerated expansion of the universe. The acceleration is thought to be due to positive vacuum energy, an energy associated with empty space. Instead of predicting a unique possibility for the vacuum state of the universe and the particles and fields that inhabit it, our current understanding of string theory is that there is a complex landscape of vacuum states corresponding to exponentially many different kinds of particles and different physical laws. The set of vacuum states contains so many possibilities that, surely, it is claimed, one will include the right amount of vacuum energy and the right kinds of particles and fields. Mix inflation and string theory, and the unpredictability multiplies. Now every combination of macrophysical and microphysical possibilities can occur.

I suspect that the theories would never have gained the acceptance they have if these problems had been broadly recognized at the outset. Historically, if a theory failed to achieve its goals, it was improved or retired. In this case, though, the commitment to the theories has become so strong that some prominent proponents have seriously advocated moving the goalposts. They say that we should be prepared to abandon the old-fashioned idea that scientific theories should give definite predictions and to accept a Theory of Anything as the best that can ever be achieved.

I draw the line there. Science is useful insofar as it explains and predicts why things are the way they are and not some other way. The worth of a scientific theory is gauged by the number of do-or-die experimental tests it passes. A Theory of Anything is useless because it does not rule out any possibility and worthless because it submits to no do-or-die tests. (Many papers discuss potential observable consequences, but these are only possibilities, not certainties, so the Theory is never really put at risk.)

A priority for theorists today is to determine if inflation and string theory can be saved from devolving into a Theory of Anything and, if not, seek new ideas to replace them. Because an unfalsifiable Theory of Anything creates unfair competition for real scientific theories, leaders in the field can play an important role by speaking out—making it clear that Anything is not acceptable—to encourage talented young scientists to rise up and meet the challenge. The sooner we can retire the Theory of Anything, the sooner this important science can progress. 

peter_richerson's picture
Distinguished Professor Emeritus, University of California-Davis; Visiting Professor, Institute of Archaeology, University College London

The concept of human nature has considerable currency among evolutionists who are interested in humans. Yet when examined closely it is vacuous. Worse, it confuses the thought processes of those who attempt to use it. Useful concepts are those that cut nature at its joints. Human nature smashes bones.

Human nature implies that our species is characterized by a common core of features that define us. Evolutionary biology teaches us that this sort of essentialist concept of species is wrong. A species is an assemblage of variable individuals, albeit individuals who are sufficiently genetically similar that they can successfully interbreed. Most species share most of their genes with ancestral and related species, as we do with other apes. In most species, ample genetic variation ensures that no two individuals are genetically identical. Many species contain geographically structured genetic variation, as modern humans do. A few tens of thousands of years ago, our genus seems to have comprised a couple of African "species" and three Eurasian ones, all of which interbred enough to leave traces in living genomes. Most species, and the populations of which they are composed, are relentlessly evolving. The human populations that adopted agriculture in the Holocene have undergone a wave of genetic changes to adapt to a diet rich in starchy staples and other agricultural products, and to an environment rich in epidemic pathogens taking advantage of dense, settled human populations. Some human populations today are subject to new selective pressures owing to "diseases of abundance." The evolution of resistance to such diseases is detectable. Some geneticists argue that genes affecting our behavior have come under recent selection to adapt to life in complex societies.

The concept of human nature causes people to look for explanations under the wrong rock. Take the most famous human nature argument: are people by nature good or evil? In recent years, experimentalists have conducted tragedy of the commons games and observed how people solve the tragedy (if they do). A common finding is that roughly a third of participants act as selfless leaders, using whatever tools the experimenters make available to solve the dilemma of cooperation, roughly a tenth are selfish exploiters of any cooperation that arises, and the balance are guarded cooperators with flexible morals. This result comports with everyone's personal experience: some people are routinely honest and generous, a few are downright psychopathic, and many people fall somewhere in between. Human society would be entirely different if this were not so. The human nature debate on the topic was sterile because it did not attend to something we all know if we stop to think about it.

Darwin's great contribution to biology was to abandon essentialism and focus on variation and its transmission. He made remarkable progress even though organic inheritance was a black box in his day. He also got the main problem of human variability right. In the Descent of Man, he argued that humans were biologically a rather ordinary species with a rather ordinary amount of geographical variation. Yet, in many ways, the amount of human behavioral variation is far outside the range of other species. The Fuegians, adapted to a hunting and gathering life on the Straits of Magellan, were sharply different from a leisured gentleman naturalist from Shrewsbury. But these differences owe mainly to different customs and traditions, not to organic differences. He also realized that the evolution of traditions responded to selective processes other than natural selection. Traditions are shaped by human choices, a little like the artificial selection of domesticates, with natural selection playing a subordinate role.

In his Biographical Sketch of an Infant, Darwin described how readily children learn from their caregivers. The inheritance of traditions, customs, and language is relatively easy to observe with the tools of a 19th-century naturalist compared to the intricacies of genetic inheritance, which is still yielding fundamental secrets to the high-tech tools of molecular biology. Recent work on the mechanisms underlying imitation and teaching has begun to reveal the more deeply hidden cognitive components of these processes, and the results underpin Darwin's phenomenological account of tradition acquisition and evolution.

In no field is the deficiency of the human nature concept better illustrated than in its use to try to understand learning, culture and cultural evolution. Human nature thinking leads to the conclusion that causes of behavior can be divided into nature and nurture. Nature is conceived of as causally prior to nurture both in evolutionary and developmental time. What evolves is nature, and cultural variation, whatever it is, has to be the causal handmaiden of nature. This is simply counterfactual. If the dim window that stone tools give us does not lie, culture and cultural variation have been fundamental adaptations of our lineage, perhaps going back to the late australopiths. The elaboration of technology over the last two million years has roughly paralleled the evolution of larger brains and other anatomical changes. We have clear examples of cultural changes driving genetic evolution, such as the evolution of dairying driving the evolution of adult lactase persistence. Socially learned technology could have been doing similar things all throughout the last 2 million years. The human capacity for social learning develops so early in the first year of life that developmentalists have had to design very clever experiments to probe what infants are learning months before language and precise imitative behavior exist. At least from 12 months onward, social learning begins to transmit the discoveries of cultures to children, with every opportunity for these discoveries to interact with gene expression. In autistic children, this social learning mechanism is more or less severely compromised, leading to more or less severely "developmentally disabled" adults.

Human culture is best conceived of as a part of human biology, like our bipedal locomotion. It is a source of variation that we have used to adapt to most of the world's terrestrial and amphibious habitats. Using the human nature concept, like essentialism more generally, makes it impossible to think straight about human evolution. 

helen_fisher's picture
Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

"If an idea is not absurd, there is no hope for it," Einstein reportedly said. I would like to broaden the definition of addiction and retire the scientific idea that all addictions are pathological and harmful.

Since the beginning of formal diagnostics over 50 years ago, the compulsive pursuit of gambling, food, and sex (known as non-substance rewards) has not been regarded as addiction; only the abuse of alcohol, opioids, cocaine, amphetamines, cannabis, heroin and nicotine has been formally regarded as addiction. This categorization rests largely on the fact that substances activate basic "reward pathways" in the brain associated with craving and obsession, and produce pathological behaviors. Psychiatrists work within this world of psychopathology—that which is abnormal and makes you ill.

As an anthropologist, I feel they are limited by this view. Scientists have now shown that food, sex and gambling compulsions employ many of the same brain pathways activated by substance abuse. Indeed, the 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders (the DSM) has finally acknowledged that at least one form of non-substance abuse can be regarded as an addiction: gambling. The abuse of sex and food have not yet been included. Neither has romantic love. I shall propose that love addiction is just as real as any other addiction, in terms of its behavior patterns and brain mechanisms. Moreover, it's often a positive addiction.

Scientists and laymen have long regarded romantic love as part of the supernatural, or as a social invention of the Troubadours in 12th century France. Evidence does not support these notions. Love songs, poems, stories, operas, ballets, novels, myths and legends, love magic, love charms, love suicides and homicides: evidence of romantic love has now been found in over 200 societies ranging over thousands of years. Around the world men and women pine for love, live for love, kill for love and die for love. Human romantic love, also known as passionate love or "being in love" is regularly regarded as a human universal.

Moreover, love-besotted men and women show all of the basic symptoms of addiction. Foremost, the lover is stiletto-focused on his/her drug of choice: the love object. They think obsessively about "him" or "her" (intrusive thinking), and often compulsively call, write or appear to stay in touch. Paramount to this experience is intense motivation to win their sweetheart, not unlike the substance abuser fixated on his/her drug. Impassioned lovers also distort reality, change their priorities and daily habits to accommodate the beloved, experience personality changes (affect disturbance), and sometimes do inappropriate or risky things to impress this special other. Many are willing to sacrifice, even die for "him" or "her." The lover craves emotional and physical union with their beloved, too (dependence). And like the addict who suffers when they can't get their drug, the lover suffers when apart from the beloved (separation anxiety). Adversity and social barriers even heighten this longing (frustration attraction).

In fact, besotted lovers express all four of the basic traits of addiction: craving; tolerance; withdrawal; and relapse. They feel a "rush" of exhilaration when with their beloved (intoxication). As their tolerance builds, the lover seeks to interact with the beloved more and more (intensification). If the love object breaks off the relationship, the lover experiences signs of drug withdrawal, including protest, crying spells, lethargy, anxiety, insomnia or hypersomnia, loss of appetite or binge eating, irritability and loneliness. Lovers, like addicts, also often go to extremes, sometimes doing degrading or physically dangerous things to win back the beloved. And lovers relapse the way drug addicts do: long after the relationship is over, events, people, places, songs or other external cues associated with their abandoning sweetheart can trigger memories and renewed craving.

Of the many indications that romantic love is an addiction, however, perhaps none is more convincing than the growing data from neuroscience. Using brain scanning (functional magnetic resonance imaging, or fMRI), several scientists have now shown that feelings of intense romantic love engage regions of the brain's "reward system," specifically dopamine pathways associated with energy, focus, motivation, ecstasy, despair and craving—including primary regions associated with substance (and non-substance) addictions. In fact, our group has found activity in the nucleus accumbens—the core brain factory associated with all addictions—in our rejected lovers. Moreover, some of our newest (unpublished) results suggest correlations between activities of the nucleus accumbens and feelings of romantic passion among lovers who are wildly, happily in love.

Nobel laureate Eric Kandel recently said, "Brain studies will ultimately tell us what it is like to be human." Knowing what we now know about the brain, my brain-scanning partner, Lucy Brown, has suggested that romantic love is a natural addiction; and I have maintained that this natural addiction evolved from mammalian antecedents some 4.4 million years ago among our first hominid ancestors, in conjunction with the evolution of (serial, social) monogamy—a hallmark of humankind. Its purpose: to motivate our forebears to focus their mating time and metabolic energy on a single partner at a time, thus initiating the formation of a pair-bond to rear their young (at least through infancy) together as a team.

The sooner we embrace what brain science is telling us—and use this information to upgrade the concept of addiction—the better we will understand ourselves and all the billions of others on this planet who revel in the ecstasy and struggle with the sorrow of this profoundly powerful, natural, often positive addiction: romantic love.

 

abigail_marsh's picture
Associate Professor of Psychology, Georgetown University

The scientific studies of mental illness and antisocial behavior continue to occupy largely separate intellectual domains. Although some patterns of persistent antisocial behavior are nominally accorded diagnostic labels such as Antisocial Personality Disorder or Conduct Disorder, the default approach to individuals who engage in persistent antisocial behavior is to view their patterns of behavior through a moral lens (as "badness") rather than through a mental health lens (as "madness").

In some senses this distinction represents progress. As recently as the 19th and early 20th century, individuals affected by all manner of psychopathologies were routinely confined and in some cases punished or even executed. Along with the emergence of the understanding that symptoms of mental illness reflect disease processes, the emphasis has shifted toward a focus on prevention and treatment. However, this shift has not applied equally to all forms of psychopathology. For example, disorders primarily characterized by internalizing symptoms (persistent distress or fear, self-injuring behaviors) versus externalizing symptoms (persistent anger or hostility, antisocial and aggressive behaviors) are strikingly similar in many respects: comparable prevalence; parallel etiologies and risk factors; and similarly detrimental effects on social, educational, and vocational outcomes. But whereas immense scientific resources are aimed at identifying the causes and disease processes of internalizing symptoms and developing therapies for them, the emphasis for externalizing symptoms remains primarily on confinement and punishment, with relatively few resources devoted to identifying causes and disease processes or developing therapies. Comparisons of federal mental health funding, clinical trials, available therapeutic agents, and publications in biomedical journals directed toward internalizing versus externalizing symptoms all confirm this pattern. It is likely that this asymmetry results from multiple forces, including cognitive and cultural biases that influence decision-making processes among scientists and policymakers alike and ultimately erode support for the study of antisociality as a form of mental illness.

Cognitive biases include widespread tendencies to view actions that cause harm to others as fundamentally more intentional and blameworthy than identical actions that happen not to result in harm to others, as has been shown by Joshua Knobe and others in investigations of the "side-effect effect", and to view agents who cause harm as fundamentally more capable of intentional and goal-directed behavior than those who incur harm, as has been shown by Kurt Gray and others in investigations of distinction between moral agents and moral patients. These biases dictate that an individual who is predisposed to behavior that harms others as a result of genetic and environmental risk factors will be inherently viewed as more responsible for his or her behaviors than another individual predisposed to behavior that harms himself as a result of similar genetic and environmental risk factors. The tendency to view those who harm others as responsible for their actions, and thus blameworthy, may reflect seemingly evolved tendencies to reinforce social norms by blaming and punishing wrongdoers for their misbehavior.

Related to these cognitive biases are cultural biases that dictate self-interested behavior to be normative. Individualistic cultures view self-interest as humans' cardinal motive—the motive that supersedes all other motives and that ultimately underlies all human behavior. This norm may reflect the dominance of rational choice theories of human behavior, which are favored in economics and have many adherents among scholars in other academic domains, including psychologists, biologists, and philosophers. Belief in the norm of self-interest is widespread among the lay public as well. The norm of self-interest renders behavior that is not self-interested inherently non-normative—or "abnormal." This may explain the tendency to view behaviors and patterns of thinking that cause oneself harm or distress as clearly reflecting irrationality and mental illness, whereas otherwise similar behaviors and patterns of thinking that cause others harm or distress are viewed as reflecting rational, if immoral, choices. Indeed, if the harm to others is in the service of achieving benefit for the self, such behaviors may even be seen as hyper-rational.

The United States is an unusually individualistic country, which may help to explain its unusually strong adherence to the norm of self-interest, and also perhaps its unusually punitive (rather than treatment-focused) approach to crime and aggression. This approach can be contrasted with that of, for example, the relatively less individualistic Scandinavian nations where treatment rather than punishment of even serious criminal offenders is emphasized. Mental health-focused approaches may reduce recidivism, further supporting the possibility that externalizing behaviors, including crime and aggression, may be most effectively considered symptoms of psychopathology in need of treatment rather than simple failures of impulse control in need of punishment—that the distinction between antisociality and mental illness should be abandoned.

nassim_nicholas_taleb's picture
Distinguished Professor of Risk Engineering, New York University School of Engineering; Author, Incerto (Antifragile, The Black Swan...)

The notion of standard deviation has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of mean deviation. Standard deviation, STD, should be left to mathematicians, physicists and mathematical statisticians deriving limit theorems. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good—particularly with the growing class of people in social science mechanistically applying statistical tools to scientific problems.

Say someone just asked you to measure the "average daily variations" for the temperature of your town (or for the stock price of a company, or the blood pressure of your uncle) over the past five days. The five changes are: (-23, 7, -3, 20, -1). How do you do it?

Do you take every observation: square it, average the total, then take the square root? Or do you remove the sign and calculate the average? For there are serious differences between the two methods. The first produces an average of 15.7, the second 10.8. The first is technically called the root mean square deviation. The second is the mean absolute deviation, MAD. It corresponds to "real life" much better than the first—and to reality. In fact, whenever people make decisions after being supplied with the standard deviation number, they act as if it were the expected mean deviation.
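Here is a short sketch, using only the Python standard library, that reproduces both figures. Note that the 15.7 corresponds to the familiar sample standard deviation with its n-1 divisor; simply averaging the five squared values and taking the square root, as literally described, gives about 14.1.

```python
# Sketch: the two competing "average variation" measures for the essay's five changes.
from statistics import mean, stdev

changes = [-23, 7, -3, 20, -1]

mad = mean([abs(x) for x in changes])          # mean absolute deviation: 10.8
sample_std = stdev(changes)                    # standard deviation (n-1 divisor): ~15.7
rms = mean([x * x for x in changes]) ** 0.5    # plain root mean square of the values: ~14.1

print(f"MAD = {mad:.1f}, STD = {sample_std:.1f}, RMS = {rms:.1f}")
```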

It is all due to a historical accident: in 1893, the great Karl Pearson introduced the term "standard deviation" for what had been known as "root mean square error". The confusion started then: people thought it meant mean deviation. The idea stuck: every time a newspaper has attempted to clarify the concept of market "volatility", it defined it verbally as mean deviation yet produced the numerical measure of the (higher) standard deviation.

But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of commerce and the Federal Reserve partaking of the conflation, even regulators in statements on market volatility. What is worse, Goldstein and I found that a high number of data scientists (many with PhDs) also get confused in real life.

It all comes from bad terminology for something non-intuitive. By a psychological bias Danny Kahneman calls attribute substitution, some people mistake MAD for STD because the former comes to mind more easily.

1) MAD is more accurate in sample measurements, and less volatile than STD, since it uses a natural weight, whereas standard deviation uses the observation itself as its own weight, imparting large weights to large observations and thus overweighting tail events.

2) We often use STD in equations but really end up reconverting it within the process into MAD (say in finance, for option pricing). In the Gaussian world, STD is about 1.25 times MAD, that is, the square root of (Pi/2); see the sketch after this list. But we adjust with stochastic volatility, where STD is often as high as 1.6 times MAD.

3) Many statistical phenomena and processes have "infinite variance" (such as the popular Pareto 80/20 rule) but have finite, and very well behaved, mean deviations; this too is illustrated in the sketch below. Whenever the mean exists, MAD exists. The reverse (infinite MAD and finite STD) is never true.

4) Many economists have dismissed "infinite variance" models thinking these meant "infinite mean deviation". Sad, but true. When the great Benoit Mandelbrot proposed his infinite variance models fifty years ago, economists freaked out because of the conflation.
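Points 2 and 3 can be checked numerically. Below is a short sketch, with assumptions stated in the comments: Gaussian samples for the ratio check, and a Pareto tail exponent of 1.5 chosen purely to illustrate the infinite-variance case.

```python
# Sketch illustrating points 2 and 3 above (standard library only).
import math
import random
from statistics import mean, pstdev

random.seed(0)

# Point 2: for Gaussian data, STD/MAD should sit near sqrt(pi/2) ~ 1.2533.
xs = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]
std = pstdev(xs)
mad = mean([abs(x) for x in xs])
print(f"Gaussian STD/MAD = {std / mad:.4f}  (sqrt(pi/2) = {math.sqrt(math.pi / 2):.4f})")

# Point 3: Pareto tails with exponent alpha = 1.5 (an illustrative choice) have a
# finite mean and MAD but infinite variance, so the sample STD never settles down
# while the sample MAD does.
alpha = 1.5
for n in (10_000, 100_000, 1_000_000):
    ys = [random.paretovariate(alpha) for _ in range(n)]
    m = mean(ys)
    mad_y = mean([abs(y - m) for y in ys])
    print(f"Pareto n = {n:>9,}   MAD = {mad_y:6.2f}   STD = {pstdev(ys):12.2f}")
```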

It is sad that such a minor point can lead to so much confusion: our scientific tools are way too far ahead of our casual intuitions, which starts to be a problem with science. So I close with a statement by Sir Ronald A. Fisher: 'The statistician cannot evade the responsibility for understanding the process he applies or recommends.' 

And the probability-related problems with social and biological science do not stop there: these fields have bigger problems, with researchers using statistical notions out of a can without understanding them and babbling "n of 1" or "n large", or "this is anecdotal" (for a large Black Swan style deviation), mistaking anecdotes for information and information for anecdote. It has been shown that the majority of researchers use regression in their papers in "prestigious" journals without quite knowing what it means, and what claims can—and cannot—be made from it. Because of little check from reality and lack of skin-in-the-game, coupled with a fake layer of sophistication, social scientists can make elementary mistakes with probability yet continue to thrive professionally.

lisa_feldman_barrett's picture
University Distinguished Professor of Psychology, Northeastern University; Research Neuroscientist, Massachusetts General Hospital; Lecturer in Psychiatry, Harvard Medical School; Author, Seven and a Half Lessons About the Brain

Essentialist thinking is the belief that familiar categories—dogs and cats, space and time, emotions and thoughts—each have an underlying essence that makes them what they are. This belief is a key barrier to scientific understanding and progress. In pre-Darwinian biology, for example, scholars believed each species had an underlying essence or physical type, and variation was considered error. Darwin challenged this essentialist view, observing that a species is a conceptual category containing a population of varied individuals, not erroneous variations on one ideal individual. Even as Darwin's ideas became accepted, essentialism held fast, as biologists declared that genes are the essence of all living things, fully accounting for Darwin's variation. Nowadays we know that gene expression is regulated by the environment, a discovery that—after much debate—prompted a paradigm shift in biology.

In physics, before Einstein, scientists thought of space and time as separate physical quantities. Einstein refuted that distinction, unifying space and time and showing that they are relative to the perceiver. Even so, essentialist thinking is still seen every time an undergraduate asks, "If the universe is expanding, what is it expanding into?"

In my field of psychology, essentialist thought still runs rampant. Plenty of psychologists, for example, define emotions as behaviors (e.g., a rat freezes in fear, or attacks in anger), each triggered automatically by its own circuit, so that the circuit for the behavior (freezing, attacking) is the circuit for the emotion (fear, anger). When other scientists showed that, in fact, rats have varied behaviors in fear-evoking situations—sometimes freezing, but other times running away or even attacking—this inconsistency was "solved" by redefining fear to have multiple types, each with its own essence. This technique of creating ever finer categories, each with its own biological essence, is considered scientific progress, rather than abandoning essentialism as Darwin and Einstein did. Fortunately, other approaches to emotion have arisen that do not require essences. Psychological construction, for example, considers an emotion like fear or anger to be a category with diverse instances just as Darwin did with species.

Essentialism can also be seen in studies that scan the human brain, trying to locate the brain tissue that is dedicated to each emotion. At first, scientists assumed that each emotion could be localized to a specific brain region (e.g., fear occurs in the amygdala), but they found that each region is active for a variety of emotions, more than one would expect by chance. Since then, scientists have been searching for the brain essence of each emotion in dedicated brain networks, and in probabilistic patterns across the brain, always with the assumption that each emotion has an essence to be found, rather than abandoning essentialism.

The fact that different brain regions and networks show increased activity during different emotions is not a problem just for emotion research. They also show increased activation during other mental activities such as cognitions and perceptions, and have been implicated in mental illnesses from depression to schizophrenia to autism. This lack of specificity has led to claims (in news stories, blogs, and popular books) that we have learned nothing from brain imaging experiments. This seeming failure is actually a success. The data are screaming out that essentialism is wrong: individual brain regions, circuits, networks and even neurons are not single-purpose. The data are pointing to a new model of how the brain constructs the mind. Scientists understand data through the lens of their assumptions, however. Until these assumptions change, scientific progress will be limited.

Some topics in psychology have advanced beyond essentialist views. Memory, for example, was once thought to be a single process, and later was split into distinct subtypes like semantic memory and episodic memory. Memories are now considered to be constructed within the brain's functional architecture and not to reside in specific brain tissue. One hopes that other areas of psychology and neuroscience will soon follow suit. For example, cognition and emotion are still considered separate processes in the mind and brain, but there is growing evidence that the brain does not respect this division. This means every psychological theory in which emotions and cognitions battle each other, or in which cognitions regulate emotions, is wrong.

Ridding science of essentialism is easier said than done. Consider the simplicity of this essentialist statement from the past: "Gene X causes cancer." It sounds plausible and takes little effort to understand. Compare this to a more recent explanation: "A given individual in a given situation, who interprets that situation as stressful, may experience a change in his sympathetic nervous system that encourages certain genes to be expressed, making him vulnerable to cancer." The latter explanation is more complicated, but more realistic. Most natural phenomena do not have a single root cause. Sciences that are still steeped in essentialism need a better model of cause and effect, new experimental methods, and new statistical procedures to counter essentialist thinking.

This discussion is more than a bunch of metaphysical musings. Adherence to essentialism has serious, practical impacts on national security, the legal system, treatment of mental illness, the toxic effects of stress on physical illness... the list goes on. Essentialism leads to simplistic "single cause" thinking when the world is a complex place. Research suggests that children are born essentialists (what irony!) and must learn to overcome it. It's time for all scientists to overcome it as well.

melanie_swan's picture
Philosophy and Economic Theory, the New School for Social Research

The scientific idea that is most ready for retirement is the scientific method itself. More precisely, it is the idea that there is only one scientific method, one exclusive way of obtaining scientific results. The problem is that the traditional scientific method as an exclusive approach is not adequate to the new situations of contemporary science such as big data, crowdsourcing, and synthetic biology. Hypothesis-testing through observation, measurement, and experimentation made sense in the past, when information was scarce and costly to obtain, but this is no longer the case. In recent decades, we have already been adapting to a new era of information abundance that has facilitated experimental design and iteration. One result is that there is now a field of computational science alongside nearly every discipline, for example computational biology and digital manuscript archiving. Information abundance and computational advances have propelled the evolution of a scientific model that is distinct from the traditional scientific method, and three emerging areas are advancing it even more.

Big data, the creation and use of large and complex cloud-based data sets, is one pervasive trend that is reshaping the conduct of science. The scale is immense: organizations routinely process millions of transactions per hour into hundred-petabyte databases. Worldwide annual data creation is currently doubling and estimated to reach 8 zettabytes in 2015. Even before the big data era, modeling, simulating, and predicting had become a key computational step in the scientific process, and the new methods required to work with big data make the traditional scientific method increasingly less relevant. Our relationship to information has changed with big data. Previously, in the era of information scarcity, all data was salient. In a calendar, for example, every data element, or appointment, is important and intended for action. With big data, the opposite is true: 99% of the data may be irrelevant (immediately, over time, or once processed into higher resolution). The focus becomes extracting points of relevance from an expansive whole, looking for signal amid noise, for anomalies and exceptions, for example genomic polymorphisms. The next level of big data processing is pattern recognition. High sampling frequencies allow not only point-testing of phenomena (as in the traditional scientific method) but their full elucidation over multiple time frames and conditions. For the first time, longitudinal baseline norms, variance, patterns, and cyclical behavior can be obtained. This requires thinking beyond the simple causality of the traditional scientific method into extended systemic models of correlation, association, and episode triggering. Some of the prominent methods used in big data discovery include machine learning algorithms, neural networks, hierarchical representation, and information visualization.
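
As a purely illustrative sketch of the "signal from noise" step described above (not something from the essay), consider scanning a large synthetic data stream and keeping only the anomalous points; the 5-sigma threshold, the injected anomalies, and the data itself are arbitrary assumptions.

    # Hypothetical sketch: extract points of relevance from an expansive whole.
    import numpy as np

    rng = np.random.default_rng(42)
    stream = rng.normal(size=10_000_000)                       # mostly irrelevant "noise"
    stream[rng.integers(0, stream.size, 50)] += 8.0            # a handful of genuine anomalies

    mean, std = stream.mean(), stream.std()
    anomalies = np.flatnonzero(np.abs(stream - mean) > 5 * std)

    print(f"kept {anomalies.size} of {stream.size:,} points")  # only a few dozen points survive

Most of the ten million points are discarded; the handful that survive are the candidates for further study, which is the sense in which 99% of the data may be irrelevant while the remainder carries the signal.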

Crowdsourcing is another trend reshaping the conduct of science. This is the coordination of large numbers of individuals (the crowd) through the Internet to participate in some activity. Crowd models have led to the development of a science ecosystem that includes the professionally trained institutional researcher using the traditional scientific method at one end, and the citizen scientist exploring issues of personal interest through a variety of methods at the other. In between are different levels of professionally organized and peer-coordinated efforts. The Internet (and the trend toward connecting everyone: roughly 2 billion people now, an estimated 5 billion by 2020) enables very large-scale science. Not only are existing studies cheaper and quicker in crowdsourced cohorts, but studies 100x the size and detail of previous studies are now possible. The crowd can provide volumes of data by automatically linking quantified self-tracking gadgets to data-commons websites. Citizen scientists participate in light information-processing and other data collection and analysis activities through websites like Galaxy Zoo. The crowd is engaged more extensively through crowdsourced labor marketplaces (initially like Mechanical Turk, now increasingly skill-targeted), data competitions, and serious gaming (like predicting protein folding and RNA conformation). New methods for the conduct of science are being innovated through DIY efforts, the quantified self, biohacking, 3D printing, and collaborative peer-based studies.

Synthetic biology is a third widespread trend reshaping the conduct of science. Lauded as the potential 'transistor of the 21st century' given its transformative possibilities, synthetic biology is the design and construction of biological devices and systems. It is highly multi-disciplinary, linking biology, engineering, functional design, and computation. One of the key application areas is metabolic engineering: working with cells to greatly expand their usual production of substances that can then be used for energy, agricultural, and pharmaceutical purposes. The nature of synthetic biology is proactively creating de novo biological systems, organisms, and capacities, which is the opposite, in spirit, of the passive characterization of phenomena for which the original scientific method was developed. While it is true that optimizing genetic and regulatory processes within cells can be partially construed under the scientific method, the overall scope of activity and methods is much broader. Innovating de novo organisms and functionality requires a significantly different scientific methodology than that supported by the traditional scientific method, and includes a re-conceptualization of science as an endeavor of both characterizing and creating.

In conclusion, we can no longer rely exclusively on the traditional scientific method in the new era of science emerging through areas like big data, crowdsourcing, and synthetic biology. A multiplicity of models must be employed for the next generation of scientific advance, supplementing the traditional scientific method with new approaches that are better suited and equally valid. Not only is a plurality of methods required, it opens up new tiers for the conduct of science. Science can now be carried out downstream at increasingly detailed levels of resolution and permutation, and upstream with broader systemic dynamism. Temporality and the future become more knowable and predictable as all processes, human and otherwise, can be modeled with continuous real-time updates. Epistemologically, how we know, and what we take to be the truth of the world and reality, is changing. In some sense we may currently be at an intermediary 'dark ages' node, from which the multiplicity of future science methods can pull us into a new era of enlightenment just as surely as the traditional scientific method pulled us into modernity.

irene_pepperberg's picture
Research Associate & Lecturer, Harvard; Author, Alex & Me

Yes, humans do some things that other species do not—we are indeed the only species to send probes to outer space to find other forms of life—but the converse is certainly equally true. Other species do things humans find impossible, and many nonhuman species are indeed unique in their abilities. No human can detect temperature changes of a few hundredths of a degree as can some pit vipers, nor can humans best a dog at following faint scents. Dolphins hear at ranges impossible for humans and, along with bats, can use natural sonar. Bees and many birds see in the ultraviolet, and many birds migrate thousands of miles yearly, under their own power with what seems to be some kind of internal GPS. Humans, of course, can and will invent machines to accomplish such feats of nature, unlike our nonhuman brethren—but nonhumans had these abilities first. Clearly I don't contest data that show that humans are unique in many ways, and I certainly favor studying the similarities and differences across species, but think it is time to retire the notion that human uniqueness is a pinnacle of some sort, denied in any shape, way, or form to other creatures.

Another reason for retiring the idea of humaniqueness as the ideal endpoint of some evolutionary process is, of course, that our criteria for uniqueness inevitably need redefinition. Remember when "man, the tool-user" was our definition? At least until along came species like cactus-spike-using Galapagos finches, sponge-wielding dolphins, and now even crocodiles that use sticks to lure birds to their demise. Then it was "man, the tool-maker"…but that fell out of favor when such behavior was seen in a number of other creatures, including species as evolutionarily distant from humans as New Caledonian crows. Learning through imitation? Almost all songbirds do it to some extent vocally, and some evidence exists for physical imitation in parrots and apes. I realize that current research does demonstrate that apes, for example, are lacking in certain aspects of collaborative abilities seen in humans, but I have to wonder if different experimental protocols might provide different data in the future.

The comparative study of behavior needs to be expanded and supported, but not merely to find more data enshrining humans as "special". Finding out what makes us different from other species is a worthy enterprise, but it can also lead us to find out what is "special" about other beings, what incredible things we may need to learn from them. So, for example, we need more studies to determine the extent to which nonhumans show empathy or exhibit various aspects of "theory of mind", to learn what is needed for survival in their natural environment and what they can acquire when enculturated into ours. Maybe they have other means of accomplishing the social networking we take as at least a partial requisite for humanness. We need to find out what aspects of human communication skills they can acquire—but we also can't lose sight of the need to uncover the complexities that exist in their own communication systems.

Nota bene: Lest my point be misunderstood, my argument is different from that of bestowing personhood on various nonhuman species, and is separate from other arguments for animal rights and even animal welfare—although I can see the possible implications of what I am proposing.

All told, it seems to me that it is time to continue to study all the complexities of behavior in all species, human and nonhuman, to concentrate on similarities as well as differences, and—in many cases—to appreciate the inspiration that our nonhuman compatriots provide in order to develop tools and skills that enhance our own abilities, rather than simply to consign nonhumans to a second-class status.

steve_fuller's picture
Philosopher; Auguste Comte Chair in Social Epistemology, University of Warwick; Author, The Proactionary Imperative: A Foundation for Transhumanism

It is difficult to deny that humans began as Homo sapiens, an evolutionary offshoot of the primates. Nevertheless, for most of what is properly called 'human history' (i.e., the history that starts with the invention of writing), most of Homo sapiens have not qualified as 'human'—and not simply because they were too young or too disabled. In sociology, we routinely invoke a trinity of shame—'race, class, and gender'—to characterise the gap that remains between the normal existence of Homo sapiens and the normative ideal of full humanity. Much of the history of social science can be understood as either directly or indirectly aimed at extending the attribution of humanity to as much of Homo sapiens as possible. It is for this reason that the welfare state is very reasonably touted as social science's great contribution to politics in the modern era. But perhaps membership in Homo sapiens is neither sufficient nor even necessary to qualify a being as 'human'. What happens then?

In constructing a scientifically viable concept of the human, we could do worse than take a lesson from republican democracies, which bestow citizenship on those whom their members are willing to treat as 'equals' in some legally prescribed sense of reciprocal rights and duties. Republican citizenship is about the mutual recognition of peers, not a status of grace bestowed by some overbearing monarch. Moreover, republican constitutions define citizenship in terms that do not make explicit reference to the inherited qualities of the citizenry. Birth in the republic does not constitute a privilege over those who have had to earn their citizenship. A traditional expression of this idea is that those born to citizens are obliged to perform 'national service' to validate their citizenship. The United States has exceeded the wildest hopes of republican theorists (who tended to think in terms of city-states), given its historically open-door immigration policy yet consistently strong sense of self-identity—not least among recent immigrants.

In terms of a scientifically upgraded version of 'human rights' that might be called 'human citizenship', let us imagine this 'open-door immigration policy' as ontological rather than geographical in nature. Thus, non-Homo sapiens may be allowed to migrate to the space of the 'human'. Animal rights activists believe that they are already primed for this prospect. They can demonstrate that primates and aquatic mammals are not only sentient but also engaged in various higher cognitive functions, including what is nowadays called 'mental time-travel'. This is the ability to set long-term goals and pursue them to completion because the envisaged value of the goal overrides that of the diversions encountered along the way. While this is indeed a good empirical marker of the sort of autonomy that has been historically required for republican citizenship, in practice animal rights activists embed this point in an argument for de facto species segregationism, a 'separate but equal' policy, in which the only enforceable sense of 'rights' is one of immunity from bodily harm from humans. It is the sense of 'rights' qua dependency that a child or a disabled person might enjoy.

The fact that claims to 'animal rights' carry no sense of reciprocal obligations on the part of the animals towards humans raises questions about the activists' sincerity in appealing to 'rights' at all. However, if the activists are sincere, then they should also call for a proactive policy of what the science fiction writer David Brin has termed 'uplift', whereby we prioritise research designed to enable cognitively privileged creatures, regardless of material origin, to achieve capacities that enable them to function as peers in what may be regarded as an 'expanded circle of humanity'. Such research may focus on gene therapy or prosthetic enhancement, but in the end it would inform a 'Welfare State 2.0' that takes seriously our obligation to all of those whom we regard as capable of being rendered 'human', in the sense of fully autonomous citizens in The Republic of Humanity.

The idea that 'Human being = Homo sapiens' has always had a stronger basis in theology than biology. Only the Abrahamic religions have clearly privileged the naked ape over all other creatures. Evolutionists of all stripes have seen only differences in degree as separating the powers of living things, with relatively few evolutionists expecting that a specific bit of genetic material will someday reveal the 'uniquely human'. All the more reason to think that, in a future where some version of evolution prevails, republican theories of 'civil rights' are likely to point the way forward. This prospect implies that every candidate being will need to earn the status of 'human' by passing certain criteria as determined by those in the society in which he, she, or it would propose to live. The Turing Test provides a good prototype for examining eligibility for this expanded circle of humanity, given the test's neutrality to material substratum.

It is not too early to construct Turing Test 2.0 tests of 'human citizenship' that attempt to capture the full complexity of the sorts of beings that we would have live among us as equals. A good place to start would be with a sympathetic rendering of long-standing—and too easily dismissed—'anthropomorphic' attributions to animals and machines. Welfare State 2.0 policies could then be designed to enable a wide assortment of candidate beings—from carbon to silicon—to meet the requisite standard of citizenship implied in such attributions. Indeed, many classic welfare state policies such as compulsory mass education and childhood vaccination can be understood retrospectively as the original political commitment to 'uplift' in Brin's sense—but applied only to members of Homo sapiens living within the territory governed by a nation-state.

However, by removing the need to be Homo sapiens to qualify for human citizenship, we are faced with a political situation that is comparable to the European Union's policy for the accession of new member states. The policy assumes that candidate states start with certain historical disadvantages vis-à-vis membership in the Union but that these are in principle surmountable. Thus, there is a pre-accession period in which the candidate states are monitored for political and economic stability, as well as their treatment of their own citizens, after which 'integration' occurs in stages—starting with free mobility of students and workers, the harmonisation of laws, and revenue transfers from more established member states. To be sure, there is pushback by both the established and the candidate member states. But notwithstanding these painful periods of mutual adjustment, the process has so far worked and may prove a model for the ontological union of humanity.

adam_waytz's picture
Psychologist; Associate Professor of Management and Organizations, Kellogg School of Management at Northwestern University; Author, The Power of Human

For reinforcing a perilous social psychological imperialism toward other behavioral sciences, and for suggesting that humans are naturally oriented toward others, the strong interpretation of Aristotle's famous aphorism that humans are by nature social animals needs to be retired. Certainly sociality is a dominant force that shapes thought, behavior, physiology, and neural activity. However, enthusiasm over the social brain, social hormones, and social cognition must be tempered with evidence that being social is far from easy, automatic, or infinite. This is because our (social) brains, (social) hormones, and (social) cognition, on which social processes rely, must first be triggered before they do anything for us.

One of the most compelling pieces of evidence for humans' ostensibly automatic social nature comes from Fritz Heider and Mary Simmel's famous 1944 animation of two triangles and a circle orbiting a rectangle. The animation depicts merely shapes, yet people find it nearly impossible not to construe these objects as human actors, and to construct a social drama around their movements. A closer look at the animation, and a closer reading of Heider and Simmel's article describing the phenomenon, suggest that the perception of these shapes in social terms is not automatic, but must be evoked by features of the stimuli and situation. These shapes were designed to move in trajectories that specifically mimic social behavior—if the shapes' motion is altered or reversed, they fail to elicit the same degree of social responses. Furthermore, participants in the original studies of this animation were prompted to describe the shapes in social terms based on the language and instructions the experimenters used. Humans may be ready and willing to view the world through a social lens, but they do not do so automatically.
 
Despite possessing capacities far beyond other animals to consider others' minds, to empathize with others' needs, and to transform empathy into care and generosity, we fail to employ these abilities readily, easily, or equally. We engage in acts of loyalty, moral concern, and cooperation primarily toward our inner circles, but do so at the expense of people outside of those circles. Our altruism is not unbounded; it is parochial. In support of this point, the hormone oxytocin, long considered to play a key role in forming social bonds, has been shown to facilitate affiliation toward one's ingroup, but can increase defensive aggression toward one's outgroup. Other research suggests that this self-sacrificial intragroup love co-evolved with intergroup war, and that societies that most value loyalty to one another tend to be those most likely to endorse violence toward outgroups.
 
Even arguably our most important social capacity, theory of mind—the ability to adopt the perspectives of others—can increase competition as much as it increases cooperation, highlighting the emotions and desires of those we like, but also highlighting the selfish and unethical motives of people we dislike. Furthermore, for us to consider the minds of others in the first place requires that we are motivated and possess the necessary cognitive resources. Because motivation and cognition are finite, so too is our capacity to be social. Thus, any intervention that intends to increase consideration of others in terms of empathy, benevolence, and compassion is limited in its ability to do so. At some point, the well of working memory on which our most valuable social abilities rely will run dry.
 
Because our social capacities are largely non-automatic, ingroup-focused, and finite, we can retire the strong version of Aristotle's statement. At the same time, the concept of humans as "social by nature" has lent credibility to numerous significant ideas: that humans need other humans to survive, that humans tend to be perpetually ready for social interaction, and that studying specifically the social features of human functioning is profoundly important. 
 
andy_clark's picture
Professor of Cognitive Philosophy, Department of Philosophy and Department of Informatics, University of Sussex, Brighton, UK; Author, Surfing Uncertainty: Prediction, Action, and the Embodied Mind

It's time to retire the image of the mind as a kind of cognitive couch potato—a passive machine that spends its free time just sitting there waiting for an input to arrive to enliven its day. When an input arrives, this view suggests, the system swings briefly into action, processing the input and preparing some kind of output (the response, which might be a motor action or some kind of decision, categorization, or judgement). Output delivered, the cognitive couch potato in your head slumps back awaiting the next stimulation.

The true story looks to be almost the reverse. Naturally intelligent systems (humans, other animals) are not passively awaiting sensory stimulation. Instead, they are constantly active, trying to predict the streams of sensory stimulation before they arrive. When an 'input' (itself a dodgy notion) arrives on the scene, our pro-active cognitive systems have already been busy predicting its shape and implications. Systems like that are already (pretty much constantly) poised to act, and all they need to process are any sensed deviations from the predicted state.
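
A toy sketch can make the contrast concrete. The following is purely illustrative, not Clark's own model or any specific theory's equations: an agent keeps a running prediction of a sensory stream, and the only quantity it "works on" is the deviation of each new sample from what it already expected (the learning rate and the signal are arbitrary assumptions).

    # Hypothetical sketch: process only the prediction error, not the raw input.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 20, 2000)) + 0.1 * rng.normal(size=2000)

    prediction, rate, errors = 0.0, 0.1, []
    for sensed in signal:
        error = sensed - prediction      # the sensed deviation from the predicted state
        prediction += rate * error       # update the prediction before the next sample arrives
        errors.append(error)

    print(f"mean |signal| = {np.mean(np.abs(signal)):.2f}, "
          f"mean |prediction error| = {np.mean(np.abs(errors)):.2f}")

Because the stream is largely predictable, the errors the agent actually has to handle are much smaller than the raw input, which is the economy the proactive view trades on.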

Action itself then needs to be reconceived. Action is not so much a response to an input ('input-output-stop') as a neat and efficient way of selecting the next 'input', driving a rolling cycle. These hyperactive systems are constantly predicting their own upcoming states, and moving about so as to bring some of them into being. In this way we bring forth the evolving streams of sensory information that keep us viable (keeping us fed, warm, and watered) and that serve our increasingly recondite ends.

As ever-active prediction engines these kinds of minds are not, fundamentally, in the business of solving puzzles given to them as inputs. Rather, they are in the business of keeping us one step ahead of the game, poised to act and actively eliciting the sensory flows that keep us viable and fulfilled.

Just about every aspect of the passive input-output model is thus false. We are not cognitive couch potatoes so much as proactive predictavores, forever trying to stay one step ahead of the incoming waves of sensory stimulation. Keeping this in mind will help us to design better experiments, build better robots, and appreciate the deep continuities binding life and mind.

gordon_kane's picture
Theoretical Particle Physicist and Cosmologist; Victor Weisskopf Distinguished University Professor, University of Michigan; Author, Supersymmetry and Beyond

Of course it seems obvious that our world has three space dimensions, as obvious as that the sun orbits the earth. Physics theories typically predict aspects of the world that we do not see. For example, Maxwell’s theory of electromagnetism correctly predicted that the spectrum of light we see was just a part of the full spectrum, which extended into infrared and ultraviolet waves invisible to us.

String theory predicts our world has more than three space dimensions. Contrary to much that is written and said, as I will explain here, string theory is broadly predictive and testable. Before I explain its testability, I will describe why great progress in making a comprehensive underlying theory of our physical world may emerge from formulating theories in more than 3 space dimensions. I’ll call it a "final theory" following Steven Weinberg.

What could we gain by giving up the idea that our world has three space dimensions (3D)? String theory emerged when John Schwarz and Michael Green noticed in summer 1984 that it was possible to write a mathematically consistent quantum theory of gravity only in 10D (10 space-time dimensions). That’s a big gain and clue. For me and some theorists it’s even more important that string theories address all or nearly all of the issues and questions that need answering in order to have a final theory. There has been major progress here in the past decade. The initial excess optimism of string theorists caused an overcompensation, now tempered by increasingly many results. The highly successful and well-tested 4D so-called "Standard Models" of particle physics and of cosmology provide powerful, accurate, and complete (with the discovery of the Higgs boson) descriptions of the world we see, but do not provide explanations and understanding for a number of issues that are addressed by string theory. The success of the Standard Model(s) is strong evidence that sticking with the 4D world gets in the way of going beyond description to explaining and understanding.

To explain our universe, obviously the higher-dimension string theories have to be projected onto a 4D universe, a process with the understandable but unfortunate name "compactification" (for historical reasons). Experiments and observations have to be done in our 4D universe, so only compactified theories can be directly tested. Compactified string theories address why the universe is mainly made of matter and not antimatter, what the dark matter is, why quarks and leptons come in three similar families, what the individual quark and lepton masses are, the existence of the Higgs mechanism and how it gives mass to quarks and leptons and force-carrying bosons, cosmological history from the end of inflation to the origins of nuclei (after which the Standard Model takes over), the cause of inflation, and much more. Compactified string theories successfully predicted (before the measurements) the mass and properties of the Higgs boson discovered at CERN in 2012, and make predictions for the existence of "supersymmetric partner particles", some of which should be produced and detected at the upgraded CERN collider in 2015 if it functions as planned. Examples already exist in compactified string theories for all of these. All of this is research in progress, so much still needs to be worked out, understood better, and tested at colliders and in dark matter and other experiments, but we can already see that all these exciting opportunities exist.

In 1995 Edward Witten argued that there was an 11D theory he called M-theory which could give a consistent quantum theory of gravity, and that it could be projected onto several 10D string theories in different ways. They had names like Heterotic or Type II. Those 10D theories could then be compactified to 4D theories (with 6 small curled-up dimensions), and make testable predictions as described above. M-theory can also be compactified directly onto a 7D curled-up (G2) manifold plus four large space-time dimensions. The study of such theories is ongoing. The compactified theories are testable in the way physics theories have been tested for four centuries. In fact, they are testable in the same sense as Newton’s second law, F=ma. F=ma is not testable in general, but only for one force at a time: for a given force acting on an object of given mass, one calculates the predicted acceleration and measures it. Similarly, the form the small extra dimensions take for compactified M/string theories leads to calculable and testable predictions.
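
To make the analogy concrete (the numbers below are illustrative, not from the text): Newton's second law predicts nothing until a specific force and a specific mass are supplied, at which point it yields a definite, checkable number,

\[
F = ma \quad\Longrightarrow\quad a = \frac{F}{m} = \frac{6\ \text{N}}{2\ \text{kg}} = 3\ \text{m/s}^2 ,
\]

and the measured acceleration either agrees with it or does not. In the same sense, a particular compactification supplies the "givens" that turn string theory into definite predictions that experiment can confirm or refute.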

A nice example of how the string theories may help comes from the Higgs boson mass. In the Standard Model the Higgs boson mass cannot be predicted at all. The extension of the Standard Model to the theory called the supersymmetric Standard Model predicts an upper limit on the Higgs boson mass, but cannot make an accurate prediction of the mass. Compactified M-theory allows a prediction (made by me with students and colleagues) with an accuracy of a few per cent, in 2011 before the CERN measurements, and confirmed by subsequent data.

If we want to understand and explain our world, going beyond even a full mathematical description, we should take seriously and work on 10D string theories or 11D M-theory, compactifying them to our apparent 4D world. People often say that string theories are complicated. Actually, compactified M/string theories seem to be the simplest theories that could encompass and integrate all the phenomena of the physical world into one coherent mathematical theory.

stewart_brand's picture
Founder, the Whole Earth Catalog; Co-founder, The Well; Co-Founder, The Long Now Foundation, and Revive & Restore; Author, Whole Earth Discipline

In his 1976 book, A Scientist at the White House, George Kistiakowsky, President Eisenhower's Science Advisor, recounted what he had written in his diary in 1960 on being presented with the linear no-threshold idea by the Federal Radiation Council:

It is a rather appalling document which takes 140 pages to state the simple fact that since we know virtually nothing about the dangers of low-intensity radiation, we might as well agree that the average population dose from man-made radiation should be no greater than that which the population already receives from natural causes; and that any individual in that population shouldn't be exposed to more than three times that amount, the latter figure being, of course, totally arbitrary.  

Later in the book, Kistiakowsky, who was a nuclear expert and veteran of the Manhattan Project, wrote: "...A linear relation between dose and effect... I still believe is entirely unnecessary for the definition of the current radiation guidelines, since they are pulled out of thin air without any knowledge on which to base them."

Sixty-three years of research on radiation effects have gone by, and Kistiakowsky's critique still holds. The Linear No-Threshold (LNT) Radiation Dose Hypothesis, which surreally influences every regulation and public fear about nuclear power, is based on no knowledge whatever.

At stake are the hundreds of billions spent on meaningless levels of "safety" around nuclear power plants and waste storage, the projected costs of next-generation nuclear plant designs to reduce greenhouse gases worldwide, and the extremely harmful episodes of public panic that accompany rare radiation-release events like Fukushima and Chernobyl. (No birth defects whatever were caused by Chernobyl, but fear of them led to 100,000 panic abortions in the Soviet Union and Europe. What people remember about Fukushima is that nuclear opponents predicted that hundreds or thousands would die or become ill from the radiation. In fact nobody died, nobody became ill, and nobody is expected to.)

The "Linear" part of the LNT is true and well documented. Based on long-term studies of survivors of the atomic bombs in Japan and of nuclear industry workers, the incidence of eventual cancer increases with increasing exposure to radiation at levels above 100 millisieverts/year. The effect is linear. Below 100 millisieverts/year, however, no increased cancer incidence has been detected, either because it doesn't exist or because the numbers are so low that any signal gets lost in the epidemiological noise.

We all die. Nearly half of us will develop cancer in our lifetimes (38% of females, 45% of males). If the "No-Threshold" part of the LNT is taken seriously, and an exposed population experiences as much as a 0.5% increase in cancer risk, it simply cannot be detected. The LNT operates on the unprovable assumption that the cancer deaths exist, even if the increase is too small to detect, and that therefore "no level of radiation is safe" and every extra millisievert is a public health hazard.
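
A rough back-of-the-envelope calculation shows why. Using the standard two-proportion sample-size approximation, and reading the 0.5% as a relative increase on an assumed baseline incidence of about 40% (i.e. 40.0% versus 40.2%; these specific numbers are illustrative assumptions, not the essay's), detection at conventional significance and power (\(\alpha = 0.05\), power 0.8) would require roughly

\[
n \approx \frac{2\,(z_{\alpha/2} + z_{\beta})^{2}\, p(1-p)}{\delta^{2}}
  = \frac{2\,(1.96 + 0.84)^{2} \times 0.24}{(0.002)^{2}}
  \approx 9 \times 10^{5}
\]

people per group, with exposures and confounders known precisely, conditions no real epidemiological study of low-dose radiation comes close to meeting.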

Some evidence against the "No-Threshold" hypothesis draws on studies of background radiation. In the US we are all exposed to 6.2 millisieverts a year on average, but it varies regionally. New England has lower background radiation, Colorado is much higher, yet cancer rates in New England are higher than in Colorado—an inverse effect. Some places in the world, such as Ramsar, Iran, have a tenfold higher background radiation, but no higher cancer rates have been discovered there. These results suggest that there is indeed a Threshold below which radiation is not harmful.

Furthermore, recent research at the cell level shows a number of mechanisms for repair of damaged DNA and for ejection of damaged cells up to significant radiation levels. This is not surprising given that life evolved amid high radiation and other threats to DNA. The DNA repair mechanisms that have existed in yeast for 800 million years are also present in humans.

The actual threat of low-dose radiation to humans is so low that the LNT hypothesis can neither be proven true nor proven false, yet it continues to dominate and misguide policies concerning radiation exposure, making them grotesquely conservative and expensive. Once the LNT is explicitly discarded, we can move on to regulations that reflect only discernible, measurable medical effects, and that respond mainly to the much larger considerations of whole-system benefits and harms.

The most crucial decisions about nuclear power are at the category level of world urban prosperity and climate change, not imaginary cancers per millisievert.

satyajit_das's picture
Former Financier; Author, Age of Stagnation

Parallax describes the apparent change in the position or direction of an object caused by a change in the observer's position. In the graphic work of M.C. Escher, human faculties are similarly deceived and an impossible reality made plausible.

While not strictly a scientific theorem, anthropocentrism, the assessment of reality through an exclusively human perspective, is deeply embedded in science and culture. Improving knowledge requires abandoning anthropocentricity or, at least, acknowledging its existence.

Anthropocentrism's limits derive from the physical constraints of human cognition and specific psychological attitudes. Being human entails specific faculties, intrinsic attitudes, values and belief systems that shape enquiry and understanding.

The human mind has evolved a specific physical structure and bio-chemistry that shapes thought processes. The human cognitive system determines our reasoning and therefore our knowledge. Language, logic, mathematics, abstract thought, cultural beliefs, history and memories create a specific human frame of reference, which may restrict what we can know or understand.

There may be other forms of life and intelligence. The ocean has revealed creatures that live from chemo-synthesis in ecosystems around deep-sea hydrothermal vents, without access to sunlight. Life forms based on materials other than carbon may also be feasible. An entirely radical set of cognitive frameworks and alternative knowledge cannot be discounted.

Like a train that can only run on tracks that determine direction and destination, human knowledge may ultimately be constrained by what evolution has made us.

Knowledge was originally driven by the need to master the natural environment to meet basic biological needs—survival and genetic propagation. It was also needed to deal with the unknown and forces beyond human control. Superstition, religion, science and other belief systems evolved to meet these human needs.

In the eighteenth century, medieval systems of aristocratic and religious authority were supplanted by a new model of scientific method, rational discourse, personal liberty and individual responsibility. But this did not change the basic underlying drivers.

Knowledge is also influenced by human factors—fear and greed, ambition, submission and tribal collusion, altruism and jealousy, as well as complex power relationships and inter-personal group dynamics. Behavioural science illustrates the inherent biases in human thought.

Announcing a boycott of certain "luxury" scientific journals, 2013 Nobel laureate Dr. Randy Schekman argued that, to preserve their pre-eminence, the journals acted like "fashion designers who create limited-edition handbags or suits…know[ing] scarcity stokes demand". He argued that science is being distorted by perverse incentives whereby scientists who publish in important journals with a high "impact factor" can expect promotion, pay rises and professional accolades.

Understanding operates within these biological and attitudinal constraints. As Friedrich Nietzsche wrote: "every philosophy hides a philosophy; every opinion is also a hiding place, every word is a mask".

Understanding of fundamental issues remains limited. The cosmological nature and origins of the universe are contested. The physical source and nature of matter and energy are debated. The origins and evolution of biological life remain unresolved.

Resistance to new ideas frequently restricts the development of knowledge. The history of science is a succession of controversies—a non-geocentric universe, continental drift, the theory of evolution, quantum mechanics, and climate change.

Science, paradoxically, seems also to have inbuilt limits. Like an inexhaustible Russian doll, quantum physics keeps uncovering ever finer layers of structure in matter. Werner Heisenberg's uncertainty principle shows that certain properties of a physical system can never be known simultaneously with full precision, so human knowledge about the world is always incomplete, uncertain and highly contingent. Kurt Gödel's incompleteness theorems of mathematical logic establish inherent limitations of all but the most trivial axiomatic systems capable of expressing arithmetic.

Experimental methodology and testing is flawed. Model predictions are often unsatisfactory. As Nassim Nicholas Taleb observed: "You can disguise charlatanism under the weight of equations … there is no such thing as a controlled experiment."

Challenging anthropocentrism does not mean abandoning science or rational thought. It does not mean reversion to primitive religious dogma, messianic phantasms or obscure mysticism.

Transcending anthropocentricity may allow new frames of reference expanding the boundary of human knowledge. It may allow human beings to think more clearly, consider different perspectives and encourage possibilities outside the normal range of experience and thought. It may also allow a greater understanding of our existential place within nature and in the order of things.

As William Shakespeare's Hamlet cautioned a friend: "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy".

But fundamental biology may not allow the required change of reference framework.

While periodically humbled by the universe, human beings remain enamoured, for the most part, with the proposition that they are the apogee of development. But as Mark Twain observed in Letters from Earth: "He took a pride in man; man was his finest invention; man was his pet, after the housefly."

Writing in The Hitchhiker's Guide to the Galaxy, the late English author Douglas Adams speculated that the earth was a powerful computer and human beings were its biological components, designed by hyper-intelligent pan-dimensional beings to answer the ultimate questions about the universe and life. To date, science has not produced a conclusive refutation of this whimsical proposition.

Whether or not we can go beyond anthropocentrism, it is a reminder of our limits. As Martin Rees, Professor of Cosmology and Astrophysics at Cambridge and Astronomer Royal, noted:

 "Most educated people are aware that we are the outcome of nearly 4 billion years of Darwinian selection, but many tend to think that humans are somehow the culmination. Our sun, however, is less than halfway through its lifespan. It will not be humans who watch the sun's demise, 6 billion years from now. Any creatures that then exist will be as different from us as we are from bacteria or amoebae."  

pascal_boyer's picture
Anthropologist and Psychologist, Washington University in St. Louis; Author, Religion Explained: The Evolutionary Origins of Religious Thought

Culture is like trees. Yes, there are trees around. But that does not mean that we can have a science of trees. Having some rough notion of 'tree' is useful for snakes that lurk and fall on their prey, for birds that build nests, for humans trying to escape from rabid dogs, and of course for landscape designers. But the notion is of no use to scientists. There is nothing much to find out, e.g. to explain growth, reproduction, evolution, that would apply to all and only those things humans and snakes and birds think of as 'trees'. Nothing much that would apply to both pines and oaks, to both baobabs and monstrous herbs like the banana tree.

Why do we think there is such a thing as culture? Like 'tree', it is a pretty convenient term. We use it to designate all sorts of things we feel need a general term, like the enormous amount of information that humans acquire from other humans, or the set of idiosyncratic concepts or norms we find in some human groups but not others. There is no evidence that either of these domains corresponds to a proper set of things that science could study and about which it could offer general hypotheses or describe mechanisms.

Don't get me wrong—we can and should engage in a scientific study of 'cultural stuff'. Against the weird obscurantism of many traditional sociologists, historians or anthropologists, human behavior and communication can and should be studied in terms of their natural causes. But this does not imply that there will or should be a science of culture in general.

We can run scientific studies of general principles of human behavior and communication—that is what evolutionary biology and psychology and neurosciences can do—but that is a much broader domain than 'culture'. Conversely, we can run scientific studies of such domains as the transmission of technologies, or the persistence of coordination norms, or the stability of etiquette—but these are much narrower domains than 'culture'. About cultural stuff, as such, in general, I doubt any good science can say anything.

This in a way is not surprising. When we say that some notion or behavior is "cultural", we are just saying that it bears some similarity to notions and behaviors of other people. That is a statistical fact. It does not tell us much about the processes that caused that behavior or notion. As Dan Sperber put it, cultures are epidemics of mental representations. But knowing the epidemiological facts—that this idea is common whereas that one is rare—is of no use unless you know the physiology, so to speak—how this idea was acquired, stored, modified, how it connects to other representations and to behavior. We can say lots of interesting things about the dynamics of transmission, and scholars from Rob Boyd and Pete Richerson to more recent modelers have done just that. But such models do not aim to explain why cultural stuff is the way it is—and there probably is no general answer to that.

Is the idea of culture really a Bad Thing? Yes, a belief in culture as a domain of phenomena has hindered the development of a proper science of human behavior in groups—what ought to be the domain of social sciences. 

First, if you believe that there is such a thing as 'culture', you naturally tend to think that it is a special domain of reality with its own laws. But it turns out that you cannot find the unifying causal principles (because there aren't any). So you marvel at the many-splendored variety and diversity of culture. But culture is splendidly diverse only because it is not a domain at all, just like there is a marvelous variety in the domain of white objects or in the domain of people younger than Socrates.

Second, if you believe in culture as a thing, it seems normal to you that culture should be the same across individuals and across generations. So you treat as unproblematic precisely the phenomenon that is vastly improbable and deserves a special explanation. Human communication does not proceed by direct transfer of mental representations from one brain to another. It consists in inferences from other people's behaviors and utterances, which rarely if ever leads to the replication of ideas. That such processes could lead to roughly stable representations across large numbers of people is a wonderful, anti-entropic process that cries out for explanation.

Third, if you believe in culture you end up believing in magic. You will say that some people behave in a particular way because of "Chinese culture" or "Muslim culture". In other words you will be trying to explain material phenomena—representations and behaviors—in terms of a non-material entity, a statistical fact about similarity. But a similarity does not cause anything. What causes behaviors are mental states.

Some of us aim to contribute to a natural science of human beings as they interact and form groups. We have no need for that social scientific equivalent of phlogiston, the notion of culture.

richard_nisbett's picture
Theodore M. Newcomb Distinguished University Professor of Psychology, University of Michigan; Author, Thinking: A Memoir

Did you know that consuming large amounts of olive oil can reduce your mortality risk by 41 percent? Did you know that if you have cataracts and get them operated on, your mortality risk over the next 15 years is lowered by 40 percent compared to people with cataracts who don't get them operated on? Did you know that deafness causes dementia?

Those claims and scores like them appear every day in the media.

They are usually based on studies employing multiple regression analysis (MRA). In MRA, a number of independent variables are correlated simultaneously with some dependent variable.

The goal is typically to show that variable A influences variable B "net of" the effects of all the other variables. To put that a little differently, the goal is to show that, at every level of variables C, D and E, an association between A and B is found. For example, drinking wine is correlated with low incidence of cardiovascular disease, controlling for (net of) the contributions to cardiovascular disease of social class, excess weight, age, etc., etc.
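
A minimal sketch of what "net of" means computationally may help; the data below are synthetic, and the variable names and coefficients are illustrative assumptions, not any real study's.

    # Hypothetical sketch: read the "net" association off a multiple regression.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5_000
    social_class = rng.normal(size=n)
    weight       = rng.normal(size=n)
    age          = rng.uniform(30, 80, size=n)
    wine         = 0.5 * social_class + rng.normal(size=n)    # self-selected exposure
    heart_disease = -0.3 * social_class + 0.02 * age + 0.2 * weight + rng.normal(size=n)

    # "Wine, net of class, weight, and age" is simply wine's coefficient when the
    # other variables are included in the same regression.
    X = sm.add_constant(np.column_stack([wine, social_class, weight, age]))
    print(sm.OLS(heart_disease, X).fit().params)   # wine's coefficient comes out near zero, as constructed

The machinery is straightforward; the argument that follows is about what such a coefficient can and cannot license.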

Epidemiologists, medical researchers, sociologists, psychologists and economists are particularly likely to use this technique, though it can be used in almost any scientific field.

The claims, always at least implicit and often explicit, that MRA can reveal causality are simply mistaken. We know that the target independent variable (consumption of olive oil, for example) brings along its correlations with many other variables—measured in some inevitably imperfect way or not at all. And the level on each of these variables is "self-selected." Any one of these variables could be driving the effects on the dependent variable.

Would you think the number of children in a classroom matters for how well school children learn? It seems reasonable that it would. But a number of MRA studies tell us that, net of average family income of families in the school district, size of the school, IQ test performance, city size, geographic location, etc., average class size is uncorrelated with student performance. The implication: We now know we needn't waste money on decreasing the size of classes.

But researchers have assigned kindergartners through third graders, by the flip of a coin, to either small classes (13 to 17 students) or larger classes (22 to 25). The smaller classes showed more improvement in standardized test performance; the effect on minority children was greater than the effect on white children. This is not merely another study on the effects of class size. It replaces all the multiple regression studies on class size.

This is the case because it is the experimenter who selects the level on the target independent variable. This means that the experimental classrooms have equally good teachers on average, equally able students, equal social class of students, etc. Thus the only thing that differs between experimental and control classrooms is the independent variable of interest, namely class size.
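As a hedged illustration of why random assignment settles what regression cannot, here is a small simulation; the effect size, the income variable, and the coin-flip assignment are assumptions of the sketch, not the actual class-size data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_effect = 0.2          # assumed benefit of a small class, in test-score units

# Observational world: better-off families end up in small classes more often.
family_income = rng.normal(size=n)
chose_small = (family_income + rng.normal(size=n)) > 0
score_obs = true_effect * chose_small + family_income + rng.normal(size=n)
naive_diff = score_obs[chose_small].mean() - score_obs[~chose_small].mean()

# Experimental world: small or large class decided by a coin flip,
# so assignment is independent of income, teacher quality, and the rest.
assigned_small = rng.random(n) < 0.5
score_exp = true_effect * assigned_small + family_income + rng.normal(size=n)
random_diff = score_exp[assigned_small].mean() - score_exp[~assigned_small].mean()

print(f"self-selected comparison: {naive_diff:.2f}  (inflated by income)")
print(f"randomized comparison:    {random_diff:.2f}  (close to the true {true_effect})")
```

In the first comparison the income that drives both the choice and the score leaks into the estimate; in the second it cancels out on average, which is the whole point of the coin flip.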

MRA studies that attempt to "control" for other factors such as social class, age, prior state of health, etc. can't get around the self-selection problem. The sorts of people who get treatment differ from those who don't get it in goodness knows how many ways.

Consider social class. If an investigator wishes to see whether social class is associated with some outcome, anything correlated with social class might be producing or suppressing the effects of class per se. We can be fairly sure that the people consuming all that olive oil are richer, better educated, more knowledgeable about health and more concerned about health (with spouses also more concerned about their health, etc.). They are almost surely less likely to smoke or to drink to excess, and they probably live in less toxic environments than people who use corn oil. They are also more likely to be of Italian descent (Italians are relatively long-lived) than African descent (blacks have generally high mortality rates). All of these variables are candidates for being the true cause of the association between olive oil consumption and mortality, rather than the olive oil per se.

Even when there is an attempt to control for all possible variables, they are not necessarily well-measured, which means that their contribution to the target dependent variable will be underestimated. For example, there is no unique correct way to measure social class. Education level, income, wealth, and occupational level are all pieces of the pie, and there is no canonical way to weight them to come up with the same social-class value that God has in mind.

A New York Times Op-Ed writer with a PhD from Harvard recently expressed the opinion that MRA studies are superior to experiments because MRA studies based on Big Data can have many more subjects.

The error here is the assumption that a relatively small number of subjects is likely to mislead. A larger N is always better than a smaller N, because it makes even small effects detectable. But our confidence in a study rests not on the number of cases but on whether we have unbiased estimates of effects and whether those effects are statistically significant. In fact, if you find a statistically significant effect with a relatively small number of subjects, then, other things equal, your effect is bigger than one that required a larger number of subjects to reach the same level of significance.
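The arithmetic behind that last point can be sketched in a few lines: for a two-sample comparison the t statistic is roughly the effect divided by sd times sqrt(2/n), so holding t fixed, a smaller n implies a larger effect. The numbers below are purely illustrative.

```python
from math import sqrt

def effect_needed(t=2.0, sd=1.0, n_per_group=100):
    """Smallest effect that reaches a given t with n subjects per group (two-sample)."""
    return t * sd * sqrt(2.0 / n_per_group)

for n in (25, 100, 10_000):
    print(f"n per group = {n:>6}: effect needed for t = 2 is {effect_needed(n_per_group=n):.3f}")
```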

Big data is going to be useful for all kinds of purposes, including generating MRA findings that suggest randomized-design experiments which can provide definitive evidence about whether an apparent effect is real. A lovely example of this kind of sequence results from the 2011 finding in MRA research by Beccuti and Pannain that low levels of sleep are associated with obesity. That finding taken by itself is next to meaningless. Bad health outcomes are almost all correlated with each other: overweight people have worse cardiovascular health, worse psychological health, use more drugs, get less exercise, etc. But following the MRA research, experimenters have done the requisite experiments. They deprived people of sleep and found that they did in fact gain weight. Not only that, but researchers found hormonal and endocrine consequences of sleep disturbances that mediated the weight gain.

Multiple regression, like all statistical techniques based on correlation, has a severe limitation due to the fact that correlation doesn't prove causation. And no amount of measuring of "control" variables can untangle the web of causality. What nature hath joined together, multiple regression cannot put asunder. 

samuel_barondes's picture
Professor of Neurobiology and Psychiatry, UCSF; Author, Making Sense of People

When Max Planck began studying physics at the University of Munich in 1874 his teacher, Philipp von Jolly, warned him that it was already a mature field with little more to learn. This attitude was widely held through the end of the 19th century. In 1900 Lord Kelvin, the great British physicist, put it clearly: "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement."

In Planck's early career he had no reason to doubt this complacent position. And yet, in the same year that Kelvin made his pronouncement, Planck found himself disproving it. He had been working on the relationship of heat to light, a topic of great interest to the emerging electric companies, and he had proposed an equation that was consistent with classical physical concepts. But he was dismayed to learn of new experimental results that proved him wrong.

With his back against the wall, the 42-year-old Planck quickly thought up an alternative equation that fit the data. But the new equation also had a disruptive effect. Being hard to reconcile with traditional ideas, it turned out to be an initial building block for a completely new view of physics called quantum theory. The resistance to this disruption by conservative members of the physics community may have been what led to Planck's petulant claim that a new scientific truth will not triumph "until its opponents eventually die."

But the triumph of quantum theory did not really depend on this grim prospect. Members of the physics establishment soon began to take quantum theory seriously because it wasn't just a weird idea that had popped into Planck's head. It had become necessary because of a surprising experimental result.

This is how science usually works. When experiments challenge a prevailing idea, attention is paid. If the experiments are confirmed, the old idea is modified. In fields in which decisive experimentation is relatively easy, such change may happen quickly and is certainly not dependent on the death of its senior practitioners. It is only in fields that don't lend themselves to decisive experimentation that it is hard to definitively challenge a prevailing position. In such fields even death may not be enough, and tenuous positions may survive for generations.

So Planck got it wrong. The development of new scientific truths does not depend on the passing of stubborn conservative opponents. It is, instead, mainly dependent on the continuous enrollment of talented newcomers who are eager to make their mark by changing the existing order. In Planck's case it was, in fact, the arrival of the young Albert Einstein, rather than the demise of his senior opponents, that propelled quantum theory forward. As Douglas Stone showed in Einstein and the Quantum, it was the 25-year-old patent clerk, a fledgling outsider with nothing to lose, who became the driving force in the development of this theory. As for his elders, Einstein couldn't have cared less.

jerry_a_coyne's picture
Professor Emeritus, Department of Ecology and Evolution, University of Chicago; Author, Why Evolution is True; Faith Versus Fact: Why Science and Religion are Incompatible.

Among virtually all scientists, dualism is dead. Our thoughts and actions are the outputs of a computer made of meat—our brain—a computer that must obey the laws of physics. Our choices, therefore, must also obey those laws. This puts paid to the traditional idea of dualistic or "libertarian" free will: that our lives comprise a series of decisions in which we could have chosen otherwise. We know now that we can never do otherwise, and we know it in two ways.

The first is from scientific experience, which shows no evidence for a mind separate from the physical brain. This means that "I"—whatever "I" means—may have the illusion of choosing, but my choices are in principle predictable by the laws of physics, excepting any quantum indeterminacy that acts in my neurons. In short, the traditional notion of free will—defined by Anthony Cashmore as "a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature"—is dead on arrival.

Second, recent experiments support the idea that our "decisions" often precede our consciousness of having made them. Increasingly sophisticated studies using brain scanning show that those scans can often predict the choices one will make several seconds before the subject is conscious of having chosen! Indeed, our feeling of "making a choice" may itself be a post hoc confabulation, perhaps an evolved one.

When pressed, nearly all scientists and most philosophers admit this. Determinism and materialism, they agree, win the day. But they're remarkably quiet about it. Instead of spreading the important scientific message that our behaviors are the deterministic results of a physical process, they'd rather invent new "compatibilist" versions of free will: versions that comport with determinism. "Well, when we order strawberry ice cream we really couldn't have ordered vanilla", they say, "but we still have free will in another sense. And it's the only sense that's important."

Unfortunately, what's "important" differs among philosophers. Some say that what's important is that our complex brain evolved to absorb many inputs and run them through complex programs ("ruminations") before giving an output ("decision"). Others say that what's important is that it's our own brain and nobody else's that makes our decisions, even if those decisions are predetermined. Some even argue that we have free will because most of us choose without duress: nobody holds a gun to our head and says "order the strawberry." But of course that's not true: the guns are the electrical signals in our brain.

In the end, there's nothing "free" about compatibilist free will. It's a semantic game in which choice becomes an illusion: something that isn't what it seems. Whether or not we can "choose" is a matter for science, not philosophy, and science tells us that we're complex marionettes dancing to the strings of our genes and environments. Philosophy, watching the show, says, "pay attention to me, for I've changed the game."

So why does the term "free will" still hang around when science has destroyed its conventional meaning? Some compatibilists, perhaps, are impressed by their feeling that they can choose, and must comport this with science. Others have said explicitly that characterizing "free will" as an illusion will hurt society. If people believe they're puppets, well, then maybe they'll be crippled by nihilism, lacking the will to leave their beds. This attitude reminds me of the (probably apocryphal) statement of the Bishop of Worcester's wife when she heard about Darwin's theory: "My dear, descended from the apes! Let us hope it is not true, but if it is, let us pray it will not become generally known."

What puzzles me is why compatibilists spend so much time trying to harmonize determinism with a historically non-deterministic concept instead of tackling the harder but more important task of selling the public on the scientific notions of materialism, naturalism, and their consequence: the mind is produced by the brain.

These consequences of "incompatibilism" mean a complete rethinking of how we punish and reward people. When we realize that the person who kills because of a mental disorder had precisely as much "choice" as someone who murders from childhood abuse or a bad environment, we'll see that everyone deserves the mitigation now given only to those deemed unable to choose between right and wrong. For if our actions are predetermined, none of us can make that choice. Punishment for crimes will still be needed, of course, to deter others, rehabilitate offenders, and remove criminals from society. But now this can be put on a more scientific footing: what interventions can best help both society and the offender? And we lose the useless idea of justice as retribution.

Accepting incompatibilism also dissolves the notion of moral responsibility. Yes, we are responsible for our actions, but only in the sense that they are committed by an identifiable individual. But if you can't really choose to be good or bad—to punch someone or save a drowning child—what do we mean by moral responsibility? Some may argue that getting rid of that idea also jettisons an important social good. I claim the opposite: by rejecting moral responsibility, we are free to judge actions not by some dictate, divine or otherwise, but by their consequences: what is good or bad for society.

Finally, rejecting free will means rejecting the fundamental tenets of the many religions that depend on freely choosing a god or a savior.

The fears motivating some compatibilists—that a version of free will must be maintained lest society collapse—won't be realized. The illusion of agency is so powerful that even strong incompatibilists like myself will always act as if we had choices, even though we know that we don't. We have no choice in this matter. But we can at least ponder why evolution might have bequeathed us such a powerful illusion. 

paul_davies's picture
Theoretical physicist; cosmologist; astrobiologist; co-Director of BEYOND, Arizona State University; principal investigator, Center for the Convergence of Physical Sciences and Cancer Biology; Author, The Eerie Silence and The Cosmic Jackpot

Cancer is one of the most intensively studied phenomena in biology, yet mortality rates from the disease are little changed in decades. Perhaps that's because we are thinking about the problem in the wrong way.

A major impediment to progress is the deep entrenchment of a 50-year-old paradigm, the so-called somatic mutation theory. It goes like this. A somatic cell serially accumulates genetic damage, eventually reaching a point at which it decouples from the organism's regulatory systems and embarks on its own agenda.

Cancer cells acquire a range of distinctive hallmarks—unfettered proliferation, evasion of apoptosis, motility and migratory powers, genomic rearrangements, epigenetic alterations, and changes in the mode of metabolism, chromatin architecture and elasticity (to mention a few)—that collectively confer remarkable robustness and survivability. In the standard picture, cancer, with all these attendant hallmarks, is considered to be re-invented de novo in each host organism: the result of a dream run of "lucky" genetic accidents. The gain of all these amazing fitness functions, co-located in the same neoplasm (population of new cells), over a period of as little as months or even weeks, is attributed to a sort of ultra-fast-paced Darwinian evolution going on in the body of the host organism. Unfortunately this theory, despite its simplicity and popular appeal, has only one successful prediction: that the administration of chemotherapeutic drugs is very likely to fail on account of the neoplasm's ability to rapidly evolve a resistant sub-population.

Armed with the somatic mutation paradigm, the research community has become fixated on the promise of sequencing technology, which enables genetic and epigenetic changes in cells to be measured on a vast scale. If cancer is caused by mutations, so the reasoning goes, then maybe subtle patterns can be teased out of petabytes of bewildering cancer sequencing data. If so, then the answer to cancer—perhaps even that elusive general-purpose cure—might be found by identifying common defects amid all that stunningly complex malfunctioning genetic machinery. Never has science offered a clearer example of a preoccupation with trees at the expense of the forest.

Stand back and take a hard, skeptical look at that forest. Cancer is widespread among multicellular organisms, afflicting mammals, birds, fish and reptiles. It clearly has deep evolutionary roots, probably stretching back over a billion years to the dawn of multicellularity. Indeed, it represents a breakdown of multi-celled cooperation. Unchecked, cancer follows a very predictable pattern of progression, usually spreading around the body and colonizing remote organs. It seems to be executing an efficient pre-loaded genetic and epigenetic program. Like a genie in a glass bottle, once it gets out it has a well-defined agenda. Many things can shatter the bottle, but the real culprit is the genie. The cancer research community, unfortunately, is preoccupied with seeking mostly irrelevant patterns amid the random shards of glass while ignoring the genie.

Why are our cells harboring such dangerous genies? The answer has been known for a long time, but it is mostly shrugged aside. The same genes that are active in cancer are also active in early embryogenesis (even in gametogenesis), and to some extent in wound-healing and tissue regeneration. These ancient genes are deeply embedded and well-protected in our genomes. They run the core functionality of cells. Top of the functionality list is the ability to proliferate—the most fundamental modality of living organisms, with nearly 4 billion years of evolutionary refinement behind it. Cancer seems to be the default state of cells that are stressed or insulted in some way, such as by aging tissue architecture or carcinogenic chemicals, with tumors representing a reversion to an ancestral phenotype.

In biology, few things are black or white. The somatic mutation paradigm is undeniably of some relevance to cancer, and sequencing data is certainly not useless. Indeed, it could prove a gold mine if only the research community comes to interpret that data in the right way. But the narrow focus of current cancer research is a serious obstacle to progress. Cancer will be understood properly only by positioning it within the great sweep of evolutionary history. 

alan_alda's picture
Actor; Writer; Director; Host, PBS program Brains on Trial; Author, Things I Overheard While Talking to Myself

The idea that things are either true or false should possibly take a rest.

I'm not a scientist, just a lover of science, so I might be speaking out of turn—but like all lovers I think about my beloved a lot. I want her to be free and productive, and not misunderstood.

For me, the trouble with truth is that not only is the notion of eternal, universal truth highly questionable, but simple, local truths are subject to refinement as well. Up is up and down is down, of course. Except under special circumstances. Is the North Pole up and the South Pole down? Is someone standing at one of the poles right-side up or upside-down? Kind of depends on your perspective.

When I studied how to think in school I was taught that the first rule of logic was that a thing cannot both be and not be at the same time and in the same respect. That last note, "in the same respect," says a lot. As soon as you change the frame of reference, you've changed the truthiness of a once immutable fact.

Death seems pretty definite. The body is just a lump. Life is gone. But if you step back a bit, the body is actually in a transitional phase while it slowly turns into compost—capable of living in another way.  

This is not to say that nothing is true or that everything is possible—just that it might not be so helpful for things to be known as true for all time, without a disclaimer. At the moment, the way it's presented to us, astrology is highly unlikely to be true. But if it turns out that organic stuff once bounced off Mars and hit earth with a dose of life, we might have to revise some statements that planets do not influence our lives here on earth.

I wonder, and this is just a modest proposal, if scientific truth should be identified in a way that acknowledges that it's something we know and understand for now and in a certain way.

One of the major ways the public comes to mistrust science is when they feel that scientists can't make up their minds. One says red wine is good for you, and another says even in small amounts it can be harmful. In turn, some people think science is just another belief system.

Scientists and science writers make a real effort to deal with this all the time. The phrase "Current research suggests…" warns us that it's not a fact yet. But from time to time the full-blown factualness of something is declared, even though further work could place it within a new frame of reference. And then the public might wonder if the scientists are just arguing for their pet ideas.

Facts, it seems to me, are workable units, useful in a given frame or context. They should be as exact and irrefutable as possible, tested by experiment to the fullest extent. When the frame changes, they don't need to be discarded as untrue, but respected as still useful within their domain. I think most people who work with facts accept this, but I don't think the public fully gets it.

That's why I hope for more wariness about implying we know something to be true or false for all time and for everywhere in the cosmos.

Especially, if we happen to be upside down when we say it.

dan_sperber's picture
Social and Cognitive Scientist; CEU Budapest and CNRS Paris; Co-author (with Deirdre Wilson), Meaning and Relevance; and (with Hugo Mercier), The Enigma of Reason

What is meaning? There are dozens of theories. I suspect however that little would be lost if most of them were retired and the others quarantined until we have had a serious conversation as to why we need a theory of meaning in the first place. Today I am nominating for retirement just the standard approach to meaning found in the study of language and communication.

There, "meaning" is used to talk about (1) what linguistic items such as words and sentences mean, and (2) what speakers mean. Linguistic meanings and speakers' meanings are quite different things. To know a word is to know what its meaning or meanings (if it is ambiguous) are. You acquire this knowledge when you learn to speak a language. You also acquire the ability to construct the meaning of a sentence on the basis of the syntax. The meanings of words and the contribution of the syntax to the meaning of sentences are relatively stable linguistic properties that vary over historical time and across dialects.

A speaker's meaning on the other hand is a component of an individual intention to modify the beliefs or attitudes of other people through communication.

What justifies, or so it seems, using the same word 'meaning' for these two quite different kinds of phenomena—a linguistic-community-wide stable feature of a language vs. an aspect of a social interaction—is a simple and powerful dogma that purports to explain how a speaker manages to convey her meaning to her audience. She does so, we are told, by producing a sentence the linguistic meaning of which matches her speaker's meaning. The job of the addressee then is just to decode.

Alas, this simple and powerful account of how we use linguistic meanings to convey our speakers' meanings is not true. This much is actually obvious to all students of language. The issue is: how far is this from the truth?

Take an ordinary sentence, say, "She went". Your competence as an English speaker provides you with all the knowledge of that sentence meaning that you need to make use of it either in speaking or in comprehension. This however does not come near to telling you what a speaker who utters this sentence on a given occasion might mean. She might mean that Susan Jones had gone home, that the cat had one day left the house and had never returned, or that the RMS Queen Mary 2 had just left the harbor. She might mean that the neighbor carried out her threat to go to the police; or, ironically, that her interlocutor had been a fool to imagine that their neighbor would carry out that threat. She might mean metaphorically that Nancy Smith had, at some point, wholly ceased paying attention. And so forth. None of these meanings is fully encoded by the sentence; some are not even partially encoded. That much is true not just of "She went" but also of the vast majority of English sentences (arguably of all of them). Linguists and philosophers are aware of this general mismatch between linguistic and speaker's meaning, but most of them treat it as if it were a complication of limited relevance that can be idealized away or left to be investigated by pragmatics, a marginal subfield of linguistics.

The dogma, then, comes with an annotation: the basic coding-decoding mechanism that makes communication possible is quite cumbersome. Using it involves being wholly explicit. Luckily, there is a shortcut: you can avoid the verbosity of full explicitness and rely on your audience to infer rather than decode at least part of your meaning (or all of it if you use, for instance, a novel metaphor).

There are two problems with this dogma. The first is that the alleged basic mechanism is never used. You never fully encode your meaning. Often, you don't encode it at all. The second problem is that, if we are easily able to infer a speaker's meaning from an utterance that does not actually encode it, then why, in the first place, do we need the alleged basic encoding-decoding mechanism that is so unwieldy?

Imagine a tribe where people who want to go from their valley to the sea always follow a well-trodden path across a low mountain pass. According to the tribe's sages however, this path is just a shortcut and the real way (without which there couldn't even have been a shortcut) is a majestic road that goes straight up to the top of the mountain and then straight down to the sea. Nobody has ever seen that road, let alone travelled it, but it has been so much talked about that everybody can visualize it and marvel at the sages' wisdom. Linguistics and philosophy are the home of many such sages.

Most of the time, semanticists start from the dogma I have just criticized. They provide elaborate, often formal analyses of linguistic meanings that match the contents of our conscious thoughts. Are linguistic meanings really like this? Only a minority of researchers is exploring the idea that they might be a very different kind of mental object. Unlike beliefs and intentions, linguistic 'meanings' may be just as inaccessible to untutored consciousness as are syntactic properties. They must, on the other hand, be the right kind of objects to serve as input to the unconscious inferences that achieve comprehension.

Pragmaticists and psycholinguists should, for their part, acknowledge that the meanings actually conveyed by our utterances may be not at all like individual sentences written in our minds in the 'language of thought', but rather like partly clear, partly vague reverberating changes in our cognitive environment.

The old dogma that linguistic meanings and speakers' meanings match denies or discounts a blatant gap. This gap is filled by intense cognitive activity of a specifically human kind. Let's retire the dogma and better explore the gap.

neil_gershenfeld's picture
Physicist, Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality

Computer Science is a curious sort of science, one that implicitly ignores, and even explicitly opposes, the principles of the rest of science.

There are many models of computation: imperative versus declarative versus functional languages, SISD versus SIMD versus MIMD architectures, scalar versus vector versus multicore processors, RISC versus CISC versus VLIW instruction sets. But there is only one underlying physical reality: a patch of space can contain states, which can interact, and take time to transit. Anything else is a fiction.

Heroic efforts are now going into maintaining that fiction. Programming today is a bit like inhabiting the pleasure gardens in Metropolis, confident that the workers in the machine rooms down below will follow your instructions. Interconnect bottlenecks, cache misses, thread concurrency, data center power budgets, and the inefficiency of parallel processors (and programmers) are rumblings of discontent from below.

Software doesn't have physical units like time and space, but the hardware that executes it does. The code for an application program, the executable code that it's compiled to, and the circuits that run it don't look at all like each other. When a map is zoomed there's also a hierarchical structure from city to state to country, but the geometry of the representation is preserved. Why do we accept such a disconnect for software?

I blame two people for this state of affairs: Alan Turing and John von Neumann. They're famous for what were essentially historically important hacks. Turing was interested in the question of what was computable. His namesake machine was meant to be a theoretical model, not an experimental prescription. It had a head that read and wrote symbols stored on a tape. While that might sound straightforward, it's an unphysical distinction: persistence and interaction are both properties of a physical state. This segregation of function was elaborated in the organs of von Neumann's architecture. Even though that underpins almost every computer made today, it was not intended to be a universal truth. Rather, it was articulated in an influential report that von Neumann wrote on programming within the very limited confines of an early computer, the EDVAC.

Turing and von Neumann understood the limits of their models; late in life they both studied computing in spatial structures, pattern formation for Turing and self-replication for von Neumann. But their legacy lives on in the instruction pointer in almost any processor, the modern descendant of Turing's head reading a tape. All of the other instructions not pointed to consume information processing resources, but don't process information.
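For readers who have never met the abstraction being criticized, here is a toy sketch of it (mine, not Gershenfeld's): a Turing-style machine whose single head touches one tape cell per step while every other cell sits idle. The rule table and the bit-flipping task are made up purely for illustration.

```python
from collections import defaultdict

# Hypothetical rule table: walk right, flipping bits, halt at the first blank.
# Each rule maps (state, symbol) -> (symbol to write, head move, next state).
RULES = {
    ("flip", 0): (1, +1, "flip"),
    ("flip", 1): (0, +1, "flip"),
    ("flip", None): (None, 0, "halt"),   # None stands for a blank cell
}

def run(tape_bits):
    tape = defaultdict(lambda: None, enumerate(tape_bits))
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move                     # exactly one cell touched per step
    return [tape[i] for i in range(len(tape_bits))]

print(run([1, 0, 1, 1]))                 # -> [0, 1, 0, 0]
```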

In nature, everything happens everywhere all the time. While an industry has developed devices for computation, a much smaller community has studied the physics of computation. Outside of what is traditionally considered to be computer science, they've developed quantum computers that use entanglement and superposition, microfluidic logic that transports material as well as information, analog logic that solves digital problems with continuous device degrees of freedom, and digital fabrication to code construction of programmable materials. Most importantly, programming models are emerging that represent and respect physical resources, rather than viewing them as a can to be kicked to someone else to worry about. It's turning out that this is easier rather than harder to do, because it avoids all of the issues of converting from an unphysical to a physical world.

In the movie The Matrix, Neo is given a choice between a red pill to exit the fictional world he's been inhabiting, or a blue pill to maintain the illusion. What he found when he got out was much messier, but ultimately much more satisfying. There's a similar choice now before the digital world, between avoiding or embracing the physical reality that it inhabits.

Think of Turing's machine and von Neumann's architecture as technological training wheels. They've given us a good ride, but something of a do-over is now needed to introduce physical units into software in order to be able to program the ultimate universal computer, the universe.

lawrence_m_krauss's picture
Theoretical Physicist; Foundation Professor, School of Earth and Space Exploration and Physics Department, ASU; Author, The Greatest Story Ever Told . . . So Far

Einstein once said: "The question that most interests me is whether God had any choice in the creation of the universe." By 'God', of course, he didn't mean God. What he was referring to was the question that has driven most scientists who, like me, are attempting to unravel the fundamental laws governing the cosmos at its most basic scale. Namely: Is there only one consistent set of physical laws? If we change one fundamental constant, one force law, would the whole edifice tumble?

Most scientists of my generation, like Einstein before us, implicitly assumed that the answer to these questions was yes. We wanted to uncover the 'One True Theory', the mathematical formulation that explained why there had to be four forces in nature, why the proton is 2000 times heavier than the electron, etc. In recent memory this effort reached its most audacious level in the 1980s, when string theorists argued that they had found the Theory of Everything—that using the postulates of string theory one would be driven to a unique physical theory, with no wiggle room, that would ultimately explain everything we see at a fundamental level.

Needless to say, that grand notion has had to be put aside for now, as string theory has failed, thus far at least, to live up to such lofty promises. In the process, however, in part driven by string theory's lack of success, we have been driven to the opposite alternative: the laws of nature we measure may be totally accidental, local to our environment (namely our Universe), not prescribed with robustness by any universal principle, and by no means generic or required.

String theory, for example, suggests a host of new possible dimensions, and to make contact with our observed four-dimensional universe it requires the other dimensions to be invisible, either by curling up on such small scales that they cannot be probed, or by requiring the known forces and particles to be restricted to live on our four-dimensional 'brane'. But it appears that there are many, many different ways to hide the extra dimensions, and each one produces a different four-dimensional universe with different laws. It also suggests that four dimensions themselves need not be universal. Perhaps there are two-dimensional universes, or six-dimensional ones.

One does not have to go to such speculative heights to be driven to the possibility that the laws of our universe may have come into existence when our universe did. The theory of inflation, which is currently the best account of how our universe obtained the characteristics it is measured to have on large scales, suggests that at very early times there was a runaway period of expansion. In different places, and perhaps different times, small regions will stop 'inflating', as a cosmic phase transition occurs in those regions, changing the stable configuration of particles and fields. But in this picture, most of the 'metaverse', if you will, is still inflating, and each region that departs from inflation, each universe, can settle into a different state, with different laws, just as ice crystals on a window can form in different directions.

All of this suggests very strongly that there may be nothing fundamental whatsoever about the 'fundamental' laws we measure in our universe. They could simply be accidental. Physics becomes, in this sense, an environmental science.

Now, many people have picked up on this notion to suggest that somehow we can understand our laws because they are selected anthropically—that is, if they were any different, life wouldn't have developed in our universe. However, this idea is full of problems, not least because we have no idea what possibilities exist, and whether changing a few, or a huge number, of fundamental parameters could result in viable habitable universes. We also have no idea if we are 'typical' lifeforms. Most life that evolves, or will evolve, in our universe might be quite different.

Focusing on anthropics misses the point in any case. The important fact is that we must be willing to give up the idea that the laws of physics in our universe reflect some underlying fundamental order… that the laws are somehow pre-ordained by principles of beauty or symmetry. There is nothing new about this. It was myopic to assume that life on our planet was pre-ordained. We now understand that accidents of natural selection and environmental traumas governed the history of life that led to our existence. It is equally myopic to assume that we are somehow the pinnacles of evolution—that all roads lead to us, or that we will not lead to something completely different in the future.

It is myopic to assume that the universe we now live in will always be this way. It won't be. As several of us have argued, it seems that in the far future all the galaxies we now see will disappear. But it may be much worse. It is myopic to assume our laws are universal in time and space even in our Universe. Current data on the Higgs particle suggests that the Universe could yet again undergo a cosmic phase transition that would change the stable forces and particles, and we and everything we see might disappear.

We have come to accept the notion that life is not preordained. We need to equally give up the quaint notion that the laws of physics are. Cosmic accidents are everywhere. It is quite possible that our entire universe is just another one. 

matt_ridley's picture
Science Writer; Fellow, Royal Society of Literature and the Academy of Medical Sciences; Author,The Evolution of Everything

T. Robert Malthus (he used his middle name) thought population must outstrip food supply and "therefore we should facilitate, instead of foolishly and vainly endeavouring to impede," disease, hunger and war. We should "court the return of the plague" and "particularly encourage settlements in all marshy and unwholesome situations". This nasty idea—that you had to be cruel to be kind to prevent population growing too fast for food supply—directly influenced heartless policy in colonial Ireland, British India, imperial Germany, eugenic California, Nazi Europe, Lyndon Johnson's aid to India and Deng Xiaoping's China. It was encountering a Malthusian tract, The Limits to Growth, that led Song Jian to recommend a one-child policy to Deng. The Malthusian misanthropic itch is still around and far too common in science.

Yet Malthus and his followers were wrong, wrong, wrong. Not just because they were unlucky that the world turned out nicer than they thought; that keeping babies alive proved a better way of getting birth rates down than encouraging them to die; not just because technology came to the rescue; but because Malthusians have repeatedly made the mistake of thinking of resources as static, finite things that would "run out". They thought growth meant using up a fixed heap of land, metals, water, nitrogen, phosphate, oil, and so forth. They thought the birth of a calf was a good thing because it added to the world's resources, but the birth of a baby was a bad thing because it added to the mouths to feed.

This completely misunderstood the nature of a resource, which only becomes a resource thanks to human ingenuity. So uranium oxide is not a resource before nuclear power. Shale oil was not a resource till horizontal fracking. Steel was not easily recyclable till the electric-arc furnace. Nitrogen in the air was not a resource till the Haber process. The productivity of land was transformed by fertiliser so globally we now use 65% less land to produce the same amount of food as 50 years ago. And a baby is a resource too: a brain as well as a mouth.

The few economists, such as Julian Simon and Bjorn Lomborg, who tried to point this out to the Malthusian scientists, and who argued that economic growth was not the cumulative use of resources but the increase of productivity—doing more with less—were called imbeciles or had pies thrown in their faces for their trouble. But they were right again and again, as population and prosperity grew together to levels that the Malthusians kept saying were impossible.

"It is unrealistic to suppose that there will be increases in agricultural production adequate to meet forecast demands for food, said a long list of scientific stars in a British book called A Blueprint for Survival in 1972. "Farmers can no longer keep up with rising demand for food and famine is inevitable," said Lester Brown in 1974. (World food production has since doubled and famine is largely history—except where dictators create it).

World population will almost certainly cease to grow before the end of the century; peak farmland is very close if not already past; electric cars driven by nuclear power stations are to all intents and purposes an infinite resource. The world is a dynamic, reflexive place in which change is all. Time to retire the static mistakes of misanthropic, myopic, mathematical Parson Malthus because he never was and never will be right.

a_c_grayling's picture
Master of the New College of the Humanities; Supernumerary Fellow, St Anne's College, Oxford; Author, War: An Enquiry

When two hypotheses are equally adequate to the data, and equal in predictive power, extra-theoretical criteria for choosing between them might come into play. They include not just questions about best fit with other hypotheses or theories already predicated to enquiry, but the aesthetic qualities of the competing hypotheses themselves—which is more pleasing, more elegant, more beautiful?—and of course the question of which of them is simpler.

Simplicity is a desideratum in science, and the quest for it is a driver in the task of effecting reductions of complex phenomena to their components. It lies behind the assumption that there must be a single force in nature, of which the gravitational, electroweak and strong nuclear forces are merely manifestations; and this assumption in turn is an instance of the general view that there might ultimately be a single kind of thing (or stuff or field or as-yet-undreamt phenomenon) out of which variety springs by means of principles themselves fundamental and simple.

Compelling as the idea of simplicity is, there is no guarantee that nature itself has as much interest in simplicity as those attempting to describe it. If the idea of emergent properties still has purchase, biological entities cannot be fully explained except in terms of them, which means in their full complexity, even though considerations of structure and composition are indispensable.

Two measures of complexity are: the length of the message required to describe a given phenomenon, and the length of the evolutionary history of that phenomenon. On a certain view, that makes a Jackson Pollock painting complex by the first measure, simple by the second; while a smooth pebble on a beach is simple by the first and complex by the second. The simplicity sought in science might be thought to be what is achieved by reducing the length of the descriptive message: encapsulation in an equation, for example. But: could there be an inverse relationship between the degree of simplicity achieved and the degree of approximation that results?
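The first of those two measures can be sketched crudely in code: use compressed length as a stand-in for the length of the message needed to describe a phenomenon, so a regular object (the pebble) scores as simple and an irregular one (the Pollock) scores as complex. This is only a rough proxy for the idea, and it says nothing about the second measure, the length of a phenomenon's evolutionary history; the two example byte strings are invented for illustration.

```python
import random
import zlib

random.seed(0)
pebble_like = b"ab" * 500                                         # highly regular
pollock_like = bytes(random.randrange(256) for _ in range(1000))  # irregular

for name, data in (("regular  ", pebble_like), ("irregular", pollock_like)):
    print(name, len(zlib.compress(data)), "compressed bytes out of", len(data))
```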

Of course it would be nice if everything in the end turned out to be simple, or could be made amenable to simple description. But some things might be better or more adequately explained in their complexity—biological systems again come to mind. Resisting too dissipative a form of reductionism there might ward off those silly kinds of criticism claiming that science aims to see nothing in the pearl but the disease of the oyster.

sam_harris's picture
Neuroscientist; Philosopher; Author, Making Sense

Search your mind, or pay attention to the conversations you have with other people, and you will discover that there are no real boundaries between science and philosophy—or between those disciplines and any other that attempts to make valid claims about the world on the basis of evidence and logic. When such claims and their methods of verification admit of experiment and/or mathematical description, we tend to say that our concerns are "scientific"; when they relate to matters more abstract, or to the consistency of our thinking itself, we often say that we are being "philosophical"; when we merely want to know how people behaved in the past, we dub our interests "historical" or "journalistic"; and when a person's commitment to evidence and logic grows dangerously thin or simply snaps under the burden of fear, wishful thinking, tribalism, or ecstasy, we recognize that he is being "religious."

The boundaries between true intellectual disciplines are currently enforced by little more than university budgets and architecture. Is the Shroud of Turin a medieval forgery? This is a question of history, of course, and of archaeology, but the techniques of radiocarbon dating make it a question of chemistry and physics as well. The real distinction we should care about—the observation of which is the sine qua non of the scientific attitude—is between demanding good reasons for what one believes and being satisfied with bad ones.

The scientific attitude can handle whatever happens to be the case. Indeed, if the evidence for the inerrancy of the Bible and the resurrection of Jesus Christ were good, one could embrace the doctrine of fundamentalist Christianity scientifically. The problem, of course, is that the evidence is either terrible or nonexistent—hence the partition we have erected (in practice, never in principle) between science and religion.

Confusion on this point has spawned many strange ideas about the nature of human knowledge and the limits of "science." People who fear the encroachment of the scientific attitude—especially those who insist upon the dignity of believing in one or another Iron Age god—will often make derogatory use of words such as materialism, neo-Darwinism, and reductionism, as if those doctrines had some necessary connection to science itself.

There are, of course, good reasons for scientists to be materialist, neo-Darwinian, and reductionist. However, science entails none of those commitments, nor do they entail one another. If there were evidence for dualism (immaterial souls, reincarnation), one could be a scientist without being a materialist. As it happens, the evidence here is extraordinarily thin, so virtually all scientists are materialists of some sort. If there were evidence against evolution by natural selection, one could be a scientific materialist without being a neo-Darwinist. But as it happens, the general framework put forward by Darwin is as well established as any other in science. If there were evidence that complex systems produced phenomena that cannot be understood in terms of their constituent parts, it would be possible to be a neo-Darwinist without being a reductionist. For all practical purposes, that is where most scientists find themselves, because every branch of science beyond physics must resort to concepts that cannot be understood merely in terms of particles and fields. Many of us have had "philosophical" debates about what to make of this explanatory impasse. Does the fact that we cannot predict the behavior of chickens or fledgling democracies on the basis of quantum mechanics mean that those higher-level phenomena are something other than their underlying physics? I would vote "no" here, but that doesn't mean I envision a time when we will use only the nouns and verbs of physics to describe the world. 

But even if one thinks that the human mind is entirely the product of physics, the reality of consciousness becomes no less wondrous, and the difference between happiness and suffering no less important. Nor does such a view suggest that we will ever find the emergence of mind from matter fully intelligible; consciousness may always seem like a miracle. In philosophical circles, this is known as "the hard problem of consciousness"—some of us agree that this problem exists, some of us don't. Should consciousness prove conceptually irreducible, remaining the mysterious ground for all we can conceivably experience or value, the rest of the scientific worldview would remain perfectly intact.

The remedy for all this confusion is simple: We must abandon the idea that science is distinct from the rest of human rationality. When you are adhering to the highest standards of logic and evidence, you are thinking scientifically. And when you're not, you're not. 

lee_smolin's picture
Physicist, Perimeter Institute; Author, Einstein's Unfinished Revolution

In my field of fundamental physics and cosmology the idea most ready for retirement is that the big bang was the first moment of time.

In popular parlance the big bang has two meanings. First, big bang cosmology is the hypothesis that our universe has been expanding for 13.7 billion years from an extremely hot and dense primordial state, more extreme than the centre of a star or indeed anywhere now existing. This I have no quarrel with: it is established scientific fact, which has been elaborated into a detailed story that narrates the expansion of the universe from an extremely uniform and dense hot plasma to the beautifully varied and complex world that is our home. We have detailed theories which pass numerous observational tests and which explain the origins of all the structures we see, from the elements to galaxies, stars, planets and the molecular building blocks of life itself. As in any good scientific theory there are questions still to be answered, such as the precise nature of the dark matter and dark energy which are prominent actors in the story, or the very interesting question of whether there was a very early phase of inflationary exponential expansion, but these do not suggest the basic picture could be wrong.

What concerns me is the other meaning of the big bang, which is the further hypothesis that the ultimate origin of our universe was a first moment of time at which our universe was launched from a state of infinite density and temperature. According to this idea, all that exists or has ever existed is 13.7 billion years old. It makes no sense to ask what was before that because, before that, there was not even time.

The main problem with this second meaning of the big bang is that it is not very successful as a scientific hypothesis, because it leaves big questions about the universe unanswered. It turns out that the universe had to start off in an extraordinarily special state in order to evolve into anything like our own. The hypothesis that there was a first moment of time turns out to be remarkably generic and unconstraining, as it is consistent with an infinite number of possible states in which the universe might have started out. This is due to a theorem proved by Hawking and Penrose: almost any expanding universe described by general relativity has such a first moment of time. Compared to almost all of these possible beginnings, our own early universe was extraordinarily homogeneous and symmetric. Why? If the big bang was the first moment of time there can be no scientific answer, because there was no before on which to base an explanation. At this point theologians see their opening, and indeed have been lining up at the gates of science to impose their kind of explanation: that god made the universe and made it so.

Similarly, if the big bang was the first moment of time there can be no scientific answer to the question of what chose the laws of nature. This leaves the field open to explanations such as the anthropic multiverse which are unscientific because they call on unobservable collections of other universes and make no predictions by which their hypotheses might be tested and falsified.

There is, however, a chance for science to answer these questions: if the big bang was not the first moment of time, but rather a transition from an earlier era of the universe, an era that can be investigated scientifically because processes acting then, through the time before the big bang, gave rise to our world.

For there to have been a time before the big bang, the Hawking-Penrose theorem must fail. There is a simple reason to think it does: general relativity is incomplete as a description of nature because it leaves out quantum phenomena. Unifying quantum physics with general relativity has been a major challenge for fundamental physics, one on which there has been much progress in the last thirty years. Although we still lack a definitive solution to the problem, there is robust evidence from quantum cosmology models that the infinite singularities that force time to stop in general relativity are eliminated, turning the big bang (in the sense of a first moment of time) into a big bounce, which allows time to continue to exist before the big bang, deep into the past. Detailed models of quantum universes show a prior era ending with a collapse, in which the density increases to very high values but, before the universe becomes infinitely dense, quantum processes take over and bounce the collapse into an expansion, launching a new era that could be our expanding universe.
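
To take one concrete example of how such a bounce arises (a minimal sketch using the effective equation of loop quantum cosmology, one of the scenarios discussed below, written with H for the expansion rate, ρ for the energy density, and ρ_c for the critical density at which quantum effects dominate):

    \[ H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right) \]

In ordinary general relativity the factor in parentheses is absent and the density can grow without bound; here, when ρ reaches ρ_c the expansion rate goes to zero and reverses sign, so a collapsing era bounces into an expanding one rather than ending in an infinite singularity.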

There are presently several scenarios under study for what happened in the era before the big bang and how it transitioned to our expanding universe. Two hypothesize a quantum bounce and go under the names of loop quantum cosmology and geometrogenesis. Two others, due respectively to Roger Penrose and to Paul Steinhardt with Neil Turok, describe cyclic scenarios in which universes die giving rise to new universes. A fifth posits that new universes are launched when quantum effects bounce black hole singularities. These scenarios offer insights as to how the laws of nature that govern our universe might have been chosen, and may also explain how the initial conditions of our universe evolved from the previous one. The important thing is that each of these hypotheses makes predictions for real, doable observations by which they might be falsified, and distinguished from each other.

During the Twentieth Century we learned a great deal about the first three minutes of our expanding universe (in Steven Weinberg's phrase). During this century we can look forward to gaining scientific evidence of the last three minutes of the era before ours, and learning how physics before the big bang gave rise to the birth of our universe. 

richard_dawkins's picture
Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, Books Do Furnish a Life

Essentialism—what I’ve called "the tyranny of the discontinuous mind"—stems from Plato, with his characteristically Greek geometer’s view of things. For Plato, a circle or a right triangle was an ideal form, definable mathematically but never realised in practice. A circle drawn in the sand was an imperfect approximation to the ideal Platonic circle hanging in some abstract space. That works for geometric shapes like circles, but essentialism has been applied to living things, and Ernst Mayr blamed this for humanity’s late discovery of evolution—as late as the nineteenth century. If, like Aristotle, you treat all flesh-and-blood rabbits as imperfect approximations to an ideal Platonic rabbit, it won’t occur to you that rabbits might have evolved from a non-rabbit ancestor, and might evolve into a non-rabbit descendant. If you think, following the dictionary definition of essentialism, that the essence of rabbitness is "prior to" the existence of rabbits (whatever "prior to" might mean, and that’s a nonsense in itself), evolution is not an idea that will spring readily to your mind, and you may resist when somebody else suggests it.

Paleontologists will argue passionately about whether a particular fossil is, say, Australopithecus or Homo. But any evolutionist knows there must have existed individuals who were exactly intermediate. It’s essentialist folly to insist on the necessity of shoehorning your fossil into one genus or the other. There never was an Australopithecus mother who gave birth to a Homo child, for every child ever born belonged to the same species as its mother. The whole system of labelling species with discontinuous names is geared to a time slice, the present, in which ancestors have been conveniently expunged from our awareness (and "ring species" tactfully ignored). If by some miracle every ancestor were preserved as a fossil, discontinuous naming would be impossible. Creationists are misguidedly fond of citing "gaps" as embarrassing for evolutionists, but gaps are a fortuitous boon for taxonomists who, with good reason, want to give species discrete names. Quarrelling about whether a fossil is "really" Australopithecus or Homo is like quarrelling over whether George should be called "tall". He’s five foot ten, doesn’t that tell you what you need to know?

Essentialism rears its ugly head in racial terminology. The majority of "African Americans" are of mixed race. Yet so entrenched is our essentialist mind-set that American official forms require everyone to tick one race/ethnicity box or another: no room for intermediates. A different but also pernicious point is that a person will be called "African American" even if only, say, one of his eight great-grandparents was of African descent. As Lionel Tiger put it to me, we have here a reprehensible "contamination metaphor." But I mainly want to call attention to our society’s essentialist determination to dragoon a person into one discrete category or another. We seem ill-equipped to deal mentally with a continuous spectrum of intermediates. We are still infected with the plague of Plato’s essentialism.

Moral controversies such as those over abortion and euthanasia are riddled with the same infection. At what point is a brain-dead accident-victim defined as "dead"? At what moment during development does an embryo become a "person"? Only a mind infected with essentialism would ask such questions. An embryo develops gradually from single-celled zygote to newborn baby, and there’s no one instant when "personhood" should be deemed to have arrived. The world is divided into those who get this truth and those who wail, "But there has to be some moment when the fetus becomes human." No, there really doesn’t, any more than there has to be a day when a middle aged person becomes old. It would be better—though still not ideal—to say the embryo goes through stages of being a quarter human, half human, three quarters human . . . The essentialist mind will recoil from such language and accuse me of all manner of horrors for denying the essence of humanness.

Evolution too, like embryonic development, is gradual. Every one of our ancestors, back to the common root we share with chimpanzees and beyond, belonged to the same species as its own parents and its own children. And likewise for the ancestors of a chimpanzee, back to the same shared progenitor. We are linked to modern chimpanzees by a V-shaped chain of individuals who once lived and breathed and reproduced, each link in the chain being a member of the same species as its neighbours in the chain, no matter that taxonomists insist on dividing them at convenient points and thrusting discontinuous labels upon them. If all the intermediates, down both forks of the V from the shared ancestor, had happened to survive, moralists would have to abandon their essentialist, "speciesist" habit of placing Homo sapiens on a sacred plinth, infinitely separate from all other species. Abortion would no more be "murder" than killing a chimpanzee—or, by extension, any animal. Indeed an early-stage human embryo, with no nervous system and presumably lacking pain and fear, might defensibly be afforded less moral protection than an adult pig, which is clearly well equipped to suffer. Our essentialist urge toward rigid definitions of "human" (in debates over abortion and animal rights) and "alive" (in debates over euthanasia and end-of-life decisions) makes no sense in the light of evolution and other gradualistic phenomena.

We define a poverty "line": you are either "above" or "below" it. But poverty is a continuum. Why not say, in dollar-equivalents, how poor you actually are? The preposterous Electoral College system in US presidential elections is another, and especially grievous, manifestation of essentialist thinking. Florida must go either wholly Republican or wholly Democrat—all 25 Electoral College votes—even though the popular vote is a dead heat. But states should not be seen as essentially red or blue: they are mixtures in various proportions.

You can surely think of many other examples of "the dead hand of Plato"—essentialism. It is scientifically confused and morally pernicious. It needs to be retired.

timo_hannay's picture
Founding Managing Director, SchoolDash; Co-organizer, Sci Foo Camp

There are any number of scientific theories that ought to bite the dust; that's what happens when you work at the frontiers of human ignorance. But most of them are at worst minor distractions or intellectual detours that barely escape the cloisters of academe. A scientific misconception that truly deserves a bullet in the back of its head would be one that has escaped into the real world to do real damage there. Perhaps the best current example is the notion of nature versus nurture.

It is a beguiling concept: highly intuitive and expressible through an alliterative, almost poetic moniker. Francis Galton, who was the founder of eugenics, a polymath and the cousin of Charles Darwin, coined the term. Unfortunately, like Galton's other monumentally bad idea, "nature versus nurture" creates a corrosive blend of conceptual falsehood and political potency.

The most elementary error that people make in interpreting the effects of genes versus those of the environment is to assume that you can truly separate one from the other. Donald Hebb, the brilliant Canadian neuropsychologist, when asked whether nature or nurture contributes more to human personality, reportedly said, "Which contributes more to the area of a rectangle, its length or its width?"

This was a clever reply, but unfortunately only reinforced the highly misleading idea that genetics and environment are orthogonal concepts, like Newtonian space and time. In fact they're more like Einsteinian spacetime: deeply intertwined and with complex interactions that can give rise to counterintuitive results.

Of course, the experts already know this. They realise, for example, that most children inherit from their parents not only genes but also their environment. Hence studies of separated monozygotic twins (who share most of their genes but not their environments). In addition, the idea of the extended phenotype—in which organisms, driven by their genes, act to modify their environments—has been well understood for over 30 years. And the science of epigenetics, though still very much in progress, has already demonstrated a wide variety of ways in which a gene's effects can be altered by factors other than its nucleotide sequence, and shown that these are determined in large part by the gene's environment (which, of course, consists in part of other genes, both in the same organism and beyond).

Unfortunately most of this is lost on the people, such as journalists and politicians, who seek to shape our society. Almost all of them seem to retain a naive 'Newtonian' view of nature and nurture, and this leads them into all sorts of intellectual fallacies.

A case in point is the brouhaha that accompanied the release in October 2013 of a lengthy screed on education policy written by Dominic Cummings, then advisor to the UK's centre-right education minister. Among other things, he pointed out (correctly) that academic performance is highly heritable. This led many commentators, especially on the left, to equate his statement with the belief that education doesn't matter. In their 'Newtonian' nature-versus-nurture universes, the heritability of a trait is an immutable law that can leave people—worse still, children—as prisoners of their genes.

This is nonsense. Heritability is not the inverse of mutability, and to say that the heritability of a trait is high is not to say that the environment has no effect, because heritability scores are themselves affected by the environment. Take the case of height. In the rich world, the heritability of height is something like 80 per cent. But this is only because our nutrition is universally quite good. In places where malnutrition or starvation are common, environmental factors predominate and the heritability of height is much lower.

Similarly, a high heritability of academic performance is not necessarily a sign that education matters little. On the contrary, it is at least in part a product of modern universal schooling. Indeed, if every child received an identical education then the heritability of academic performance would necessarily rise to 100 per cent (because any differences could only be explained by genes). Looked at in this way, a high heritability of academic performance is not a right-wing belief but rather a left-wing aim. But try explaining that to a newspaper columnist on a deadline or a politician with an axe to grind. Ironically, a central thrust of Cummings' paper was to argue that the British education system has produced an inept political elite and commentariat that is oblivious to such technical subtleties. In criticising his comments they have merely proved him right.
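
The arithmetic behind this is easiest to see in the standard, deliberately simplified definition of heritability, which splits a trait's observed variance into a genetic part V_G and an environmental part V_E (a sketch that ignores the gene-environment interactions discussed above):

    \[ h^2 = \frac{V_G}{V_G + V_E} \]

Nothing in this ratio measures how malleable the trait is; it only measures how much of the currently observed variation tracks genetic variation. Equalise the environment, so that V_E shrinks towards zero, and h^2 is pushed towards 1, exactly as in the schooling example.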

Thus the misguided concept of 'nature versus nurture' causes apparently intelligent people to mistake egalitarianism for fascism, to misunderstand the consequences of their own policies, and hence to arrive at unfounded beliefs regarding the education of our children. The only form of evolutionary manipulation that makes sense here is a concerted effort to eliminate this outdated and misleading idea from the meme pool.

eric_topol's picture
Professor of Genomics, The Scripps Translational Science Institute; Author, The Patient Will See You Now

We were taught that the fertilized egg divides to ultimately yield a human being (recently estimated to have ~37 trillion cells), each cell carrying the same, authentic copy of one's genome. Unfortunately that simple, seemingly immutable archetype just got mutated.

While there started to be questioning of the classical teaching—one genome per individual—decades ago, it was only recently, through our newfound capability of performing single cell sequencing and high-resolution array genomic hybridization, that this was unequivocally debunked. For example, in 2012 it was reported that, in a study of the brains of 59 women, 63% of them harbored cells carrying Y-chromosomes. Many found that hard to accept. But recently researchers at the Salk Institute did single cell sequencing of post-mortem human brain neurons and found that a striking proportion of the cells (ranging up to 41%) had structural DNA variants. This level of so-called mosaicism in the brain was far greater than anticipated and raised the question of whether our single cell sequencing technology might have some flaws that account for the observation. That doesn't appear to be the case, however, as too many independent studies have come up with a similar finding, whether in the brain or in other organs, such as skin, blood or the heart. This year, a group at Yale found that a high fraction of kids with congenital heart disease carried mutations not present in either parent, perhaps accounting for 10% of severe heart disease birth defects.

These spontaneous "de novo" mutations of cells in the course of one's life are a curve ball for geneticists who thought of heritability as a story of what gets passed down from one generation to the next. More reports of sporadic disease attributable to these de novo mutations keep popping up, including amyotrophic lateral sclerosis (Lou Gehrig's disease), autism, and schizophrenia. The mutations can occur at many time points along the human lifespan. A sample of 14 aborted human embryos in development showed that 70% had major structural variations, even though this would not be representative of live births. At the other end of the time continuum, in 6 people who died of causes unrelated to cancer, there was extensive mosaicism across all organs assessed, including liver, small intestine and pancreas.

But we still don't know whether this is merely of academic interest or has important disease-inducing impact. For sure the mosaicism that occurs later in life, in "terminally differentiated" cells, is known to be important in the development of cancer. And the mosaicism of immune cells, particularly lymphocytes, appears to be part of a healthy, competent immune system. Beyond this, the functional significance of each of us carrying multiple genomes remains largely unclear.

The implications are potentially big. When we do use a blood sample to evaluate a person's genome, we have no clue about the potential mosaicism that exists throughout the individual's body. So a lot more work needs to be done to sort this out, and now that we have the technology to do it, we'll undoubtedly better understand our remarkable heterogeneous genomic selves in the years ahead.

ian_mcewan's picture
Novelist; Recipient, the Man Booker Prize for Fiction; Author, Sweet Tooth; Solar; On Chesil Beach; Nutshell; Machines Like Me; The Cockroach

Beware of arrogance! Retire nothing! A great and rich scientific tradition should hang onto everything it has. Truth is not the only measure. There are ways of being wrong that help others to be right. Some are wrong, but brilliantly so. Some are wrong but contribute to method. Some are wrong but help found a discipline. Aristotle ranged over the whole of human knowledge and was wrong about much. But his invention of zoology alone was priceless. Would you cast him aside? You never know when you might need an old idea. It could rise again one day to enhance a perspective the present cannot imagine. It would not be available to us if it were fully retired. Even Darwin in the early 20th century experienced some neglect, until the Modern Synthesis; his 'The Expression of the Emotions...' took longer to become current. William James also languished, as did psychology, once consciousness as a subject was retired from it. Look at the revived fortunes of Thomas Bayes and Adam Smith (especially 'The Theory of Moral Sentiments'). We may need to take another look at the long-maligned Descartes. Epigenetics might even restore the reputation of Lamarck. Freud may yet have something to tell us about the unconscious.

Every last serious and systematic speculation about the world deserves to be preserved. We need to remember how we got to where we are, and we'd like the future not to retire us. Science should look to literature and maintain a vibrant living history as a monument to ingenuity and persistence. We won't retire Shakespeare. Nor should we Bacon.

alison_gopnik's picture
Psychologist, UC, Berkeley; Author, The Gardener and the Carpenter

It's commonplace, in both scientific and popular writing, to talk about innate human traits, "hard-wired" behaviors, or "genes for" everything from alcoholism to intelligence. Sometimes these traits are supposed to be general features of human cognition—sometimes they are supposed to be individual features of particular people. The nature/nurture distinction continues to dominate thinking about development. But it's time for innateness to go.

Of course, for a long time, people have pointed out that nature and nurture must interact for a particular trait to develop. But several recent scientific developments challenge the idea of innate traits in a deeper way. It isn't just that it's a little of both, some mix of nurture and nature, but that the distinction itself is fundamentally misconceived.

One development is the very important new work exploring what are called epigenetic accounts of development, and the new empirical evidence for those epigenetic processes. These studies show the many complex ways that gene expression, which is what ultimately leads to traits, is itself governed by the environment.

Take the case of the maternal mice. Meaney and colleagues took two different but genetically identical strains of mice, which normally develop different degrees of intelligence, and cross-fostered them: the smart mice mothers raised the dumb mice pups. The result was that the dumb mice developed problem-solving abilities similar to those of the smart ones, and this was even passed on to the next generation. So were the mice innately dumb or innately smart? The very question doesn't make sense.

Here's a similar human example. There is increasing evidence for an early temperament difference between "orchids" and "dandelions". Children with some genetic and physiological profiles appear to be more influenced by the environment, both for good and bad. For example, a recent study looked at the level of Respiratory Sinus Arrhythmia, basically the relation between heart rate and breathing, in at-risk poor children. They discovered that children with high RSA who had secure relationships with their parents had fewer behavior problems later than low-RSA children. But the relationship was reversed for the children who had difficult relationships—they actually had more problems. So were the children innately more or less difficult or troubled?

The increasingly influential Bayesian models of human learning, models that have come to dominate recent accounts of human cognition, also challenge the idea of innateness in a different way. At least since Chomsky, there have been debates about whether we have innate knowledge. The Bayesian picture characterizes knowledge in terms of a set of potential hypotheses about the world. We initially believe that some hypotheses are less probable and others are more so. As we collect new evidence we can rationally update the probability of these hypotheses. We can discard what initially looked very likely and eventually accept ideas that started out as longshots.

If this picture is right there is some sense in which everything we will ever think is potentially there from the start. But it is also true that everything we think is subject to revision and change with increasing evidence. From this probabilistic perspective it also isn't at all clear what it would mean to talk about whether knowledge is innate or learned. You might say instead that some hypotheses initially have a very low or very high probability of being confirmed by further evidence. But the hypotheses and evidence are inextricably intertwined.
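
A toy numerical sketch of the sort of updating these models describe (the hypotheses, priors, and likelihoods below are invented purely for illustration):

    # Bayesian updating over a small set of hypotheses.
    # One hypothesis starts as the frontrunner, the other as a longshot.
    priors = {"frontrunner": 0.9, "longshot": 0.1}

    # Assumed probability of observing a given piece of evidence under each hypothesis.
    likelihoods = {"frontrunner": 0.2, "longshot": 0.8}

    def update(beliefs, likelihoods):
        # One step of Bayes' rule: posterior is proportional to prior times likelihood.
        unnormalised = {h: beliefs[h] * likelihoods[h] for h in beliefs}
        total = sum(unnormalised.values())
        return {h: p / total for h, p in unnormalised.items()}

    beliefs = priors
    for _ in range(3):  # three pieces of evidence favouring the longshot
        beliefs = update(beliefs, likelihoods)
        print(beliefs)
    # After a few observations the longshot overtakes the initial frontrunner:
    # everything we end up believing was "there from the start", but only as a
    # hypothesis with some prior probability, always revisable by evidence.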

The third development is increasing evidence for a new picture of the evolution of human cognition. The old "Swiss Army Knife" picture of capital E capital P "Evolutionary Psychology" with the evolution of myriad different constrained "modules" looks increasingly implausible. Instead, the more recent and more biologically plausible picture is that the developments involved more general developmental changes. These included an increase in the Bayesian learning abilities I just described, increased cultural transmission, wider parental investment, longer developmental trajectories, and greater capacities for counterfactual thinking. All this led to feedback loops that rapidly transformed human behavior.

The evolutionary theorist Eva Jablonka has described the evolution of human cognition as more like the evolution of a hand—a multipurpose flexible tool capable of performing unprecedented behaviors and solving unprecedented problems—than like the construction of a Swiss Army Knife.

In particular, a number of theorists have argued that the difference between the early emergence of "anatomically modern" humans and the much later emergence of "behaviorally modern" ones is due to these feedback loops rather than to some genetic change.

For example, small changes in the capacity for cultural learning and the period of protected childhood in which that learning can take place, could initially lead to small changes in behavior. But the "cultural ratchet" effect could lead to the rapid and accelerating transformation of behavior over generations, especially as there was more and more interaction within groups of early humans.

Combining cultural transmission with Bayesian learning means that each generation of children can integrate the cumulative information of earlier generations. As a result they can imagine alternative ways that the social and physical environment might be structured and can implement those changes. But this means that each successive generation of children will also grow up shaped by a new social and physical environment, unlike the ones that have gone before, and that in turn will lead them to make new discoveries, reshape the environment again and so on, in an accelerating process of cognitive and behavioral transformation.

All three of these scientific developments suggest that almost everything we do is not just the result of the interaction of nature and nurture, it is both simultaneously. Nurture is our nature and learning and culture are our most important and distinctive evolutionary inheritance.

adam_alter's picture
Psychologist; Assistant Professor of Marketing, Stern School of Business, NYU; Author, Irresistible

In 1984, New York became the first state to introduce mandatory seat belt laws. Most of the remaining states applauded the new legislation and followed suit in the 1980s and 1990s, but a small collection of researchers worried that seat belts might paradoxically license people to drive more carelessly. They believed that people drove carefully because they worried they might be seriously injured in an accident; if seat belts diminished the risk of serious injury, they would also diminish the incentive to drive carefully.

There's a danger that social scientists will rely too heavily on the concept of replication, in the same way that potentially careless drivers rely too heavily on seatbelts. When we examine new hypotheses, we tolerate the possibility that approximately one in every twenty results is a fluke. If we run the experiment two or three times and the result replicates, it's safer to assume that the original result was reliable. Students are taught that untruths will be revealed in time through replication: that flimsy results will wither under empirical scrutiny, so the enduring scientific record will reflect only those results that are robust and replicable. Unfortunately, this appealing theory crumbles in practice; just as some drivers rely too heavily on the protection of seatbelts, so psychological scientists rely too heavily on the protection of replication.
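
The statistical logic being leaned on here can be made explicit with a back-of-the-envelope sketch (the false-positive rate and statistical power below are conventional illustrative assumptions, not estimates from any particular literature):

    # How replication is supposed to protect against flukes, in rough numbers.
    alpha = 0.05   # conventional false-positive rate: ~1 in 20 tests of a null effect looks "significant"
    power = 0.80   # assumed probability that any one study detects a genuinely real effect

    for n_studies in range(1, 4):
        p_fluke_survives = alpha ** n_studies   # a null effect passing every independent study
        p_real_survives = power ** n_studies    # a real effect passing every independent study
        print(f"{n_studies} independent studies: fluke survives {p_fluke_survives:.4f}, "
              f"real effect survives {p_real_survives:.2f}")
    # The protection is real, but only if the replications are actually run and reported.

The trouble, as argued below, is that this protective machinery is rarely engaged for any given finding.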

As the seatbelt illustration suggests, the problem begins when researchers behave carelessly because they rely too heavily on the theory of replication. Each experiment becomes less valuable and less definitive, so instead of striving to craft the cleanest, most informative experiment, the incentives weigh in favor of running many unpolished experiments instead. Journals are similarly more inclined to publish marginally questionable research on the basis that other researchers will test the reliability of the effect in future research.

In fact, only a limited sample of high-profile findings is replicated, because generally there's less scientific glory in overturning an old finding than in proposing a new one. With limited time and resources, researchers tend to focus on testing new ideas rather than on questioning old ones. The scientific record features thousands of preliminary findings, but relatively few thorough replications, rejoinders, and reconsiderations of those early results.

Without a graveyard of failed effects, it's very difficult to distinguish robust results from brittle flukes. The gravest consequence, then, of our over-reliance on the theory of replication (the notion that researchers will unmask empirical untruths) is that we overestimate the reliability of the many effects that have yet to be re-examined. Replication is a critical component of the scientific process, but the illusion of replication as an antidote to flimsy effects deserves to be shattered.

john_mcwhorter's picture
Professor of Linguistics and Western Civilization, Columbia University; Cultural Commentator; Author, Words on the Move

Since the 1930s, when Benjamin Lee Whorf was mesmerizing audiences with the idea that the Hopi people's language channeled them into a cyclical sense of time, the media and university classrooms have often been abuzz with the idea that the way your language works gives you a particular worldview.

You just want this to be true, but it isn't—at least not in any way that would interest anyone outside a psychology laboratory (or academic journal). It's high time thinking people let go of the idea, ever heralded as a possibility but never actually demonstrated, that different languages represent different ways of experiencing life.

Different cultures represent different ways of experiencing life, to be sure. And part of a culture is having words and expressions to express it, to be sure. Cell phone. Inshallah. Feng shui. But this isn't what Whorfianism, as it is often called, is on to. The idea is that quiet things in a language's very structural architecture—how its grammar works, how its vocabulary happens to cut up space—channel how the speaker experiences life.

And in fact, psychologists have indeed shown that such things do influence thought—in tiny ways elicitable via fascinatingly peculiar experiments. So, Russian has different words for dark and light blue and no one word that just means blue, and it has been shown that Russians are, indeed, 124 milliseconds faster at telling grades of dark blue apart from other grades, and grades of light blue apart from other grades. Or, it has been shown that people whose languages divide nouns into masculine and feminine categories are more likely, if asked, to imagine those things talking in the appropriately sexed voice if they were cartoon characters, or to associate them with gendered traits.

This kind of thing is neat—but the question is whether the quiet background flutterings of awareness they document can be treated as a worldview. The temptation to suppose that they can is endless. Plus we are always reminded that no one has said that language prevents a speaker from thinking anything, but rather that it makes it more likely that the speaker will.

But we still run up against the fact that languages tell us what we don't want to hear as much as they tell us what is cool, such as Russian blues and tables talking like ladies.

Example—in Mandarin Chinese, the same sentence can mean If you see my sister, you know she's pregnant, If you saw my sister, you'd know she's pregnant, and If you had seen my sister, you'd have known she was pregnant. That is, Chinese leaves hypotheticality to context much more than English does. In the early eighties, psychologist Alfred Bloom, following the Whorfian line, did an experiment suggesting that Chinese makes its speakers somewhat less adept at processing hypothetical scenarios than English speakers.

Whoops—nobody wanted to hear that. There was a long train of rebuttals, ending in an exhausted draw. But there are all kinds of experiments one could do that would lead to the same kind of place. Lots of languages in New Guinea have only one word for eating, drinking, and smoking. Does that make them slightly less sensitive to the culinary than other people? Or, Swedish doesn't have a word for wipe—you have to erase, take off, etc. But who's ready to tell the Swedes they don't wipe?

In cases like this our natural inclination is to say that such things are just accidents, and that whatever wisp of thought difference an experimenter could elicit on their basis hardly has anything to do with what the language's speakers are like—or what their worldview is. But then, we have to admit the same thing about the wisps that happen to tickle our fancies.

What creates a worldview is culture—i.e., a worldview. And no, it won't work to say that culture and language create a worldview together holistically. Remember, that would mean that Chinese speakers are—holistically—a little dim when it comes to thinking beyond reality.

Who wants to go there? Especially when even starting to, decade after decade, leads us down blind alleys? Hopi, it turned out, has plenty of markers of good old-fashioned European-style time. Or, Yale economist Keith Chen's recent idea that not having a future tense makes a language's speaker more thrifty—pause to wrap your head around that: it's not having a future that makes you save money!—has intrigued the media for years now. But if four Slavic languages like Russian and Polish all do not have a future tense, and yet savings rates among their countries are vastly different, then the whole idea is out the window.

The idea that language is a lens on life should be treated as what it is—something that pans out in terms of quiet results in intense psychological studies but has nothing to do with any humanistic perspective on what it means to be a human being. An awkward aspect of this is that people engaged in trying to document or save the hundreds of languages worldwide that are threatened with extinction tend to say the languages must survive because they represent ways of looking at the world. But if they don't, we have to formulate new justifications for those rescue efforts. Hopefully, linguists and anthropologists can embrace saving languages simply because they are, in so many ways, magnificent in their own right?

What it comes down to is this. Let's ask how English makes a worldview. Our answer requires that the worldview be one shared by Betty White, William McKinley, Amy Winehouse, Jerry Seinfeld, Kanye West, Elizabeth Cady Stanton, Gary Coleman, Virginia Woolf, and Bono.

Let's face it, what worldview would that be? Sure, a lab test could likely tease out some infinitesimal squeak of a perceptive predilection shared by all of those people. But none of us would even begin to think of it as a way of perceiving the world or reflecting a culture. Or, if anyone would, then we are on to an entirely new academic paradigm indeed.

freeman_dyson's picture
Physicist, Institute for Advanced Study; Author, Disturbing the Universe; Maker of Patterns

Fourscore and seven years ago, Erwin Schrödinger invented wave-functions as a way to describe the behavior of atoms and other small objects. According to the rules of quantum mechanics, the motions of objects are unpredictable. The wave-function tells us only the probabilities of the possible motions. When an object is observed, the observer sees where it is, and the uncertainty of the motion disappears. Knowledge removes uncertainty. There is no mystery here.

Unfortunately, people writing about quantum mechanics often use the phrase "collapse of the wave-function" to describe what happens when an object is observed. This phrase gives a misleading idea that the wave-function itself is a physical object. A physical object can collapse when it bumps into an obstacle. But a wave-function cannot be a physical object. A wave-function is a description of a probability, and a probability is a statement of ignorance. Ignorance is not a physical object, and neither is a wave-function. When new knowledge displaces ignorance, the wave-function does not collapse; it merely becomes irrelevant.
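
The claim that the wave-function describes a probability can be stated in one line of standard notation (the textbook Born rule, included only as an illustration of the point):

    \[ P(x) = |\psi(x)|^2, \qquad \int |\psi(x)|^2 \, dx = 1. \]

Once an observation reveals where the object actually is, that probability distribution is simply superseded by knowledge, just as any probability assignment is superseded when the outcome becomes known.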

emanuel_derman's picture
Professor, Financial Engineering, Columbia University; Author, Models.Behaving.Badly

I grew up among physicists, whose modus operandi is to observe the world, experiment with it, develop hypotheses and theories and models, suggest further experiments, and use statistics to analyze the results, thereby comparing mental imaginings with actual events. Statistics is simply their tool for confirmation or denial. 

But nowadays the world, and especially the world of the social sciences, is increasingly in love with statistics and data science as a source of knowledge and truth itself. Some people have even claimed that computer-aided statistical analysis of patterns will replace our traditional methods of discovering the truth, not only in the social sciences and medicine, but in the natural sciences too.

I believe we must be careful not to get too enamored of statistics and data science and thereby abandon the classical methods of discovering the great truths about nature (and man is nature too). A good example of the power of the classical approach is Kepler's 17th-century discovery of his second law of planetary motion, which is in fact less a law than the recognition and description of a pattern. Kepler's second law states that the line between the sun and a moving planet sweeps out equal areas in equal times. This deep symmetry of planetary motion implies that the closer a planet is to the sun, the more rapidly it moves along its orbit. But notice that there is no line between a planet and the sun. Kepler's still astonishing insight required examining Tycho Brahe's data, a long mental struggle, a burst of intuition—use an invisible line!—and then checking his hypothesis. Data, intuition, hypothesis, and finally comparison with data is the time-honored process.

Kepler's second law is in fact a statement of the conservation of angular momentum that followed later from Newton's theories of motion and gravitation. Newton's theories were so readily and immediately accepted because Kepler's three verified laws could be derived from them. John Maynard Keynes wrote of Newton three hundred years later: "I fancy his pre-eminence is due to his muscles of intuition being the strongest and most enduring with which a man has ever been gifted."
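
The identity behind this can be written compactly (a standard textbook relation, included only as illustration): for a planet of mass m at distance r from the sun, with angular momentum L, the area swept out per unit time is

    \[ \frac{dA}{dt} = \frac{1}{2} r^2 \frac{d\theta}{dt} = \frac{L}{2m}, \]

so "equal areas in equal times" is precisely the statement that L is constant, which is what Newton's theory later delivers for any central force.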

Statistics—the field itself—is a kind of Caliban, sired somewhere on an island in the region between mathematics and the natural sciences. It is neither purely a language nor purely a science of the natural world, but rather a collection of techniques to be applied, I believe, to test hypotheses. Statistics in isolation can seek only to find past tendencies and correlations, and assume that they will persist. But in a famous unattributed phrase, correlation is not causation.

Science is a battle to find causes and explanations amidst the confusion of data. Let us not get too enamored of data science, whose great triumphs so far are mainly in advertising and persuasion. Data alone has no voice. There is no "raw" data, as Kepler's saga shows. Choosing what data to collect and how to think about it takes insight into the invisible; making good sense of the data collected requires the classic conservative methods: intuition, modeling, theorizing, and then, finally, statistics.

haim_harari's picture
Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

The discovery of the Higgs particle (aka God's Particle, aka "the Goddamn particle", according to Leon Lederman) allegedly closes the chapter of establishing the Standard Model of particle physics, or at least so we read in the newspapers and in the announcements from Stockholm. The introduction of this idea, five decades ago, was indeed an important landmark in the development of the standard model. But, in reality, it does not answer any of the remaining open questions, which have now been plaguing the model for more than thirty years.

Nature has taught us that everything (not really; what about the dark matter and dark energy?) is made of six types of quarks (why six?) and six types of leptons (why six, and why the same number?). They are arranged in a very clear pattern, which replicates itself (why?) three times (why three?) in a precise manner. These dozen types of particles have positive or negative electric charges of exactly 0, 1, 2, or 3 units in multiples of one third of the electron charge (why always only these charges and no others, and why are quark charges even related to lepton charges?). The particle masses can be described only by approximately 20 free parameters, unrelated to each other, appearing to be taken from the results of some bizarre cosmic lottery, and ranging over almost 10 orders of magnitude.
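
The pattern being described, written out once (the textbook charge assignments, given here in units of the proton charge; the three-fold replication is across the three generations):

    \[
    \begin{array}{lcccc}
     & \text{up-type quark} & \text{down-type quark} & \text{charged lepton} & \text{neutrino} \\
    \text{electric charge} & +\tfrac{2}{3} & -\tfrac{1}{3} & -1 & 0
    \end{array}
    \]

Each column repeats, with the same charges but wildly different masses, in the second and third generations.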

Yes, the Higgs concept gives us a tantalizing mechanism by which these particles obtain a mass and are not massless. But this is what creates the problem. Why these masses? Who selected these numbers and why? Can it be that all of physics and, indeed, all of science, are based on creating all of matter in the universe from a dozen objects with totally random mass values, while no one has the faintest idea about their origin?

These mysterious mass values allegedly reflect the strength in which the Higgs particle "couples" to the quarks and leptons. But that is like saying that the weights of a dozen people reflect the fact that, when they step on the scale, these numbers appear there. Not a very satisfying explanation. The true puzzle of the standard model is, as always in physics, "what next?" Something must lie beyond it, solving the puzzle of the dark matter, dark energy, particle masses and their very simple, distinct and repetitive systematic pattern.

The Higgs particle contributes absolutely nothing to the solution of these puzzles, unless the final answer is that the Higgs particle is indeed God's particle and it is God's will that the particle masses are these and no others. Or, perhaps, it is not one God, but a dozen gods with diverse numerical tastes. The good news is that we still have some exciting discoveries ahead of us, deciphering the basic structure of all of matter, beyond the temporary picture offered by the standard model. We certainly do not yet have a theory of everything, not even close. 

jared_diamond's picture
Professor of Geography, University of California Los Angeles; Author, Upheaval

The history of science is much more variegated than assumed in the Edge Question about the abandonment and burial of old ideas. While the view that new ideas triumph by replacing old ones fits some scientific developments, in many other cases new ideas take over a vacuum formerly occupied by no well-articulated idea at all. That happens for either of two reasons: new ideas responding to new information made possible by new measurements, or else responding to new "outlooks." (Among historians of science, the term used rather than the inadequate English term "outlook" is the German word Fragestellung, literally the posing of a question, but more broadly meaning a world view from which that question can arise). I'll give two or three examples illustrating each of those two reasons.

The most familiar modern example of a new idea made possible by new measurements is Watson's and Crick's double helix model of DNA's structure. Their model didn't replace a previous established model whose opponents gradually died out without abandoning their error. Instead, the model was made possible by two recent sets of measurements: analyses of the chemical composition of DNA (revealing equivalent amounts of the bases adenine and thymine, and of cytosine and guanine); and X-ray crystallographic evidence. As is well known, two models of DNA structure were then proposed nearly simultaneously, by Pauling and by Watson and Crick. It almost immediately became obvious that the former model was wrong, and that the latter model did account for all of the evidence. Hence the Watson and Crick model became rapidly accepted, replacing a vacuum rather than a previous wrong theory.

My other example of an idea made possible by new measurements concerns the origins of animal electricity. Our nerve and muscle membranes operate by conducting electrical impulses, arising from a change in transmembrane voltage between active and inactive membrane regions. In the absence of direct measurements of transmembrane voltage, it was impossible to propose a quantitative theory for how that voltage could change. That problem was solved between 1939 and 1952 by two developments: the anatomist J.Z. Young discovered giant nerves in squid, and physiologists developed microelectrodes small enough to insert into squid giant nerves without damaging them. Between 1945 and 1952 the physiologists Alan Hodgkin and Andrew Huxley took advantage of that anatomical discovery and that technical development to measure the electric currents moving across squid nerve as a combined function of voltage and time, and thereby to reconstruct quantitatively and in detail how a nerve impulse arises from changes in nerve membrane permeability to the positively charged ions sodium and potassium. The Hodgkin-Huxley theory was rapidly accepted because it was so convincingly correct, and because it had no serious competitors. When I was a physiology student in the 1950's and 1960's, the only resistance to the theory that I recall involved some concern by non-physiologists about whether microelectrodes were causing damage to nerve membranes (a concern answered by several types of control experiments), and a non-quantitative proposal that nerve membranes and synapse or junction membranes undergo the same permeability changes (it turned out that they don't).
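
The quantitative core of that reconstruction can be written in one equation (the standard textbook form of the Hodgkin-Huxley membrane equation, reproduced here only as an illustration): the current charging the membrane capacitance is what remains of the applied current after the sodium, potassium, and leak currents, whose conductances depend on voltage and time through the gating variables m, h, and n,

    \[ C_m \frac{dV}{dt} = I_{\text{ext}} - \bar g_{\text{Na}}\, m^3 h\, (V - E_{\text{Na}}) - \bar g_{\text{K}}\, n^4 (V - E_{\text{K}}) - \bar g_{L} (V - E_{L}). \]

Fitting the voltage-dependent kinetics of m, h, and n to the squid-axon measurements is what allowed Hodgkin and Huxley to reconstruct the impulse in quantitative detail.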

As for new ideas made possible by a new Fragestellung, consider first the foundation of the several modern sciences constituting population biology: i.e., taxonomy/systematics, evolutionary biology, biogeography, ecology, animal behavior, and genetics. At least until recently, most research in all of those fields except genetics involved observations, counting, and measurements requiring no equipment. Most of that research could have been done by Aristotle, Herodotus, and their contemporaries in classical-age Greece over 2000 years earlier. The Greeks were eminently capable of patient, accurate, quantitative observations of planets and other features of the natural world. Aristotle could similarly have examined Greek animals and plants and arrived at Linnaeus's hierarchical taxonomy; Herodotus could have compared the species of the Black Sea with those of Egypt and thereby founded biogeography; and any ancient Greek could have grown and counted pea varieties as did Gregor Mendel in the 1860's, noticed the differences between Willow Warblers and Chiffchaffs (a related warbler species) as did Gilbert White in 1789, watched young geese as did Konrad Lorenz after 1935, and thereby founded genetics, ecology, and animal behavior. But ancient Greeks lacked the necessary Fragestellung that lent interest to counting pea varieties and scrutinizing warblers and young geese. The rise of those branches of population biology from the 1700's onwards was due to a modern Fragestellung that generated data (without the need for invention of microelectrodes or X-ray crystallography), which in turn generated ideas, in areas where previously there had been neither data nor detailed ideas.

Without going into specifics, I'll mention two other examples of important broad fields that arose only in recent centuries without any need for specialized technology, and that the ancients could have developed but didn't because they lacked a relevant Fragestellung. The Greeks and Romans were in contact with speakers of Indo-European and Semitic and other languages, could have discovered the groupings of languages in those language families, and could thereby have developed the ideas of historical linguistics—but they didn't even bother to record words of their Egyptian, Gaulish, and other subjects. In all of classical Greek and Roman literature I am not aware of a single wordlist recorded for any "barbarian" language, in contrast to the wordlists that European travelers began routinely to gather among non-European peoples from the 1600's onwards. The Greeks and Romans could equally well have noticed the observational evidence used by Freud to explore the unconscious within us—but they didn't.

All of this is not to say that the view underlying this year's Edge Question is always wrong. Examples in the fields in which I work myself include: the replacement of biogeographic theories assuming a static Earth by the acceptance of continental drift, from the 1960's onwards; the rise of the taxonomic approach known as cladistics at the expense of previous taxonomic approaches, also from the 1960's onwards; and the rise in the 1960's and 1970's, followed by the virtual disappearance, of attempts to make use of irreversible (non-equilibrium) thermodynamics in the fields of population biology and of cell physiology. Instead, my main point is that the development of science follows much more diverse courses than only or predominantly the course of abandoning old ideas.

jonathan_haidt's picture
Social Psychologist; Thomas Cooley Professor of Ethical Leadership, New York University Stern School of Business; Author, The Righteous Mind

There are many things in life that are good to have yet bad to pursue too vigorously. Money, love, and sex, for example. I'd like to add parsimony to that list.

William of Ockham was a 14th-century English logician who said that "entities must not be multiplied beyond necessity." That principle—now known as "Occam's Razor"—has been used for centuries by scientists and philosophers as a tool to adjudicate among competing theories. Parsimony means frugality or stinginess, and scientists should be "stingy" when building theories; they should use as little material as possible. If two theories really do exactly as good a job of explaining the empirical evidence, then you should pick the simpler theory. If Copernicus and Ptolemy can both explain the movements of the heavens, including the occasional backwards motion of some planets, then go with Copernicus's far more parsimonious model.

Occam's razor is a great tool when used as originally designed. Unfortunately, many scientists have turned this simple tool into a fetish object. They pursue simple explanations of complex phenomena as though parsimony were an end in itself, rather than a tool to be used in the pursuit of truth.

The worship of parsimony is understandable in the natural sciences, where it sometimes does happen that a single law or principle, or a very simple theory, explains a vast and diverse set of observations. Newton's three laws really do explain the movements of all inanimate objects. Plate tectonics really does explain earthquakes, volcanoes, and the complementary coastlines of Africa and South America. Natural selection really does explain why plants, animals, and fungi look as they do.

But in the social sciences, the overzealous pursuit of parsimony has been a disaster. Since the 18th century, some intellectuals have striven to do for the social world what Newton did for the physical world. Utilitarians, the French philosophes, and other utopian dreamers longed for a social order based on rational principles and a scientific understanding of human behavior. Auguste Comte, one of the founders of sociology, originally called his new discipline "social physics."

And what do we have to show for 250 years of pursuit? We have a series of time-wasting failures and ideological battles. Human behavior cannot all be explained by positive and negative reinforcement (contra the behaviorists). Nor is it all about sex, money, class, power, self-esteem, or even self-interest, to name some of the major explanatory idols worshipped in the 20th century.

In my own field—moral psychology—we've suffered from the same overzealous pursuit of parsimony. Lawrence Kohlberg said morality was all about justice. Others say it's compassion. Others say morality is all about forming coalitions, or preventing harm to victims. But in fact morality is complicated, pluralistic, and culturally variable. Human beings are products of evolution, so the psychological foundations of morality are innate (as I and many others have argued at Edge.org in recent years). But there are many of these foundations, and they are just the beginning of the story. You must still explain how morality develops in such variable ways around the world, and even among siblings within a single family.

The social sciences are hard because human beings differ fundamentally from inanimate objects. People insist upon making or finding meaning in things. They do it collectively, creating baroque cultural landscapes that can't be explained parsimoniously, and they do it individually, creating their own unique symbolic worlds nested within their broader cultures. As the anthropologist Clifford Geertz put it: "Man is an animal suspended in webs of significance that he himself has spun." This is why it's so hard to predict what any individual will do. This is why there are almost no equations in psychology or sociology. This is why there will never be a Newton in the social sciences.

Let's retire the pursuit of parsimony from the social sciences. Parsimony is beautiful when we find it, but the pursuit of parsimony is sometimes an obstacle to the pursuit of truth.

carlo_rovelli's picture
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, Helgoland; There Are Places in the World Where Rules Are Less Important Than Kindness

We will continue to use geometry as a useful branch of mathematics, but it is time to abandon the longstanding idea of geometry as the description of physical space. The idea that geometry is the description of physical space is engrained in us and might sound hard to get rid of, but getting rid of it is unavoidable; it is just a matter of time. Better to get rid of it soon.

Geometry developed at first as a description of the properties of parcels of agricultural land. In the hands of the ancient Greeks it became a powerful tool for dealing with abstract triangles, lines, circles, and the like, and was applied to describe paths of light and movements of celestial bodies with very great efficacy. In the modern age, with Newton, it became the mathematics of physical space. This geometrization of physical space appeared to be further vindicated by Einstein, who described space (actually, spacetime) in terms of the curved geometry of Riemann. But in fact this was the beginning of the end. Einstein discovered that the Newtonian space described by geometry is in fact a field like the electromagnetic field, and fields are nicely continuous and smooth only if measured at large scales. In reality, they are quantum entities that are discrete and fluctuating. Therefore the physical space in which we are immersed is in reality a quantum dynamical entity, which shares very little with what we call "geometry." It is a pullulating process of finite interacting quanta. We can still use expressions like "quantum geometry" to describe it, but the reality is that a quantum geometry is not much of a geometry anymore.

Better to get rid soon of the idea that our spatial intuition is always reliable. The world is far more complicated (and beautiful) than a "geometrical space" and things moving in it.

max_tegmark's picture
Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute; President, Future of Life Institute; Author, Life 3.0

I was seduced by infinity at an early age. Cantor's diagonality proof that some infinities are bigger than others mesmerized me, and his infinite hierarchy of infinities blew my mind. The assumption that something truly infinite exists in nature underlies every physics course I've ever taught at MIT, and indeed all of modern physics. But it's an untested assumption, which raises the question: is it actually true?

There are in fact two separate assumptions: "infinitely big" and "infinitely small". By infinitely big, I mean the idea that space can have infinite volume, that time can continue forever, and that there can be infinitely many physical objects. By infinitely small, I mean the continuum: the idea that even a liter of space contains an infinite number of points, that space can be stretched out indefinitely without anything bad happening, and that there are quantities in nature that can vary continuously. The two are closely related because inflation, the most popular explanation of our Big Bang, can create an infinite volume by stretching continuous space indefinitely.

The theory of inflation has been spectacularly successful, and is a leading contender for a Nobel Prize. It explained how a subatomic speck of matter transformed into a massive Big Bang, creating a huge, flat and uniform universe with tiny density fluctuations that eventually grew into today's galaxies and cosmic large scale structure, all in beautiful agreement with precision measurements from experiments such as the Planck satellite. But by generically predicting that space isn't just big, but truly infinite, inflation has also brought about the so-called measure problem, which I view as the greatest crisis facing modern physics. Physics is all about predicting the future from the past, but inflation seems to sabotage this: when we try to predict the probability that something particular will happen, inflation always gives the same useless answer: infinity divided by infinity. The problem is that whatever experiment you make, inflation predicts that there will be infinitely many copies of you far away in our infinite space, obtaining each physically possible outcome, and despite years of tooth-grinding in the cosmology community, no consensus has emerged on how to extract sensible answers from these infinities. So strictly speaking, we physicists are no longer able to predict anything at all! 

This means that today's best theories need a major shakeup, by retiring an incorrect assumption. Which one? Here's my prime suspect: ∞.

A rubber band can't be stretched indefinitely, because although it seems smooth and continuous, that's merely a convenient approximation: it's really made of atoms, and if you stretch it too much, it snaps. If we similarly retire the idea that space itself is an infinitely stretchy continuum, then a big snap of sorts stops inflation from producing an infinitely big space, and the measure problem goes away. Without the infinitely small, inflation can't make the infinitely big, so you get rid of both infinities in one fell swoop—together with many other problems plaguing modern physics, such as infinitely dense black hole singularities and infinities popping up when we try to quantize gravity. 

In the past, many venerable mathematicians expressed skepticism towards infinity and the continuum. The legendary Carl Friedrich Gauss denied that anything infinite really existed, saying "Infinity is merely a way of speaking" and "I protest against the use of infinite magnitude as something completed, which is never permissible in mathematics." In the past century, however, infinity has become mathematically mainstream, and most physicists and mathematicians have become so enamored with infinity that they rarely question it. Why? Basically, because infinity is an extremely convenient approximation, for which we haven't discovered convenient alternatives. Consider, for example, the air in front of you. Keeping track of the positions and speeds of octillions of atoms would be hopelessly complicated. But if you ignore the fact that air is made of atoms and instead approximate it as a continuum, a smooth substance that has a density, pressure and velocity at each point, you find that this idealized air obeys a beautifully simple equation that explains almost everything we care about: how to build airplanes, how we hear them with sound waves, how to make weather forecasts, etc. Yet despite all that convenience, air of course isn't truly continuous. I think it's the same way for space, time and all the other building blocks of our physical world.

Let's face it: despite their seductive allure, we have no direct observational evidence for either the infinitely big or the infinitely small. We speak of infinite volumes with infinitely many planets, but our observable universe contains only about 10^89 objects (mostly photons). If space is a true continuum, then to describe even something as simple as the distance between two points requires an infinite amount of information, specified by a number with infinitely many decimal places. In practice, we physicists have never managed to measure anything to more than about 17 decimal places. Yet real numbers with their infinitely many decimals have infested almost every nook and cranny of physics, from the strengths of electromagnetic fields to the wave functions of quantum mechanics: we describe even a single bit of quantum information (qubit) using two real numbers involving infinitely many decimals.

Not only do we lack evidence for the infinite, but we don't actually need the infinite to do physics: our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow's weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can too—in a way that's more deep and elegant than the hacks we use for our computer simulations. Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I'm betting that we also need to let go of it.
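
As a concrete illustration of what "treating everything as finite" can look like in practice, here is a minimal sketch (mine, not Tegmark's; the setup and parameters are invented for illustration) of the kind of discretization such simulations rely on: a continuous diffusion equation approximated on a finite grid with a finite number of time steps.

```python
# A minimal sketch: the continuous diffusion equation du/dt = D * d^2u/dx^2,
# approximated with strictly finite resources: a finite grid of cells and a
# finite number of discrete time steps. Illustrative only.

def simulate_diffusion(n_cells=101, n_steps=500, D=0.1, dx=1.0, dt=1.0):
    u = [0.0] * n_cells
    u[n_cells // 2] = 1.0                  # all the "heat" starts in one cell
    for _ in range(n_steps):
        u_new = u[:]
        for i in range(1, n_cells - 1):    # discrete Laplacian on interior cells
            u_new[i] = u[i] + D * dt / dx ** 2 * (u[i - 1] - 2 * u[i] + u[i + 1])
        u = u_new
    return u

profile = simulate_diffusion()
print(f"peak after diffusion: {max(profile):.3f}")   # the initial spike has spread out
```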

john_tooby's picture
Founder of field of Evolutionary Psychology; Co-director, Center for Evolutionary Psychology, Professor of Anthropology, UC Santa Barbara

Any first-hand experience of how scientific institutions actually operate drives home an excruciating realization: Science progresses more slowly by orders of magnitude than it could or should. Our species could have science at the speed of thought—science at the speed of inference. But too often we run into Planck's demographic limit on the speed of science—funeral by funeral, with each tock of advancement clocked to the half-century tick of gatekeepers' professional lifespans.

In contrast, the natural clock rate of science at the speed of thought is the flash rate at which individual minds, voluntarily woven into mutually invigorating communities by intense curiosity, can draw and share sequences of strong inferences from data. Indeed, Planck was a giddy optimist, because scientists—like other humans—form coalitional group identities in which adherence to group-celebrating beliefs (e.g., we have it basically right) is strongly moralized.

So, the choice is frequently between being "moral" or thinking clearly. Because the bearers of reigning orthodoxies educate and self-select their next generation replacements, mistakes not only propagate down generations, but can grow to Grand Canyon sizes. When this happens, data sets become embedded so deeply into a matrix of mistaken interpretations (as in the human sciences) that they can no longer be seen independently of their obscuring frameworks. So the sociological speed of science can end up being slower even than Planck's glacial demographic speed. 

Worst of all, the flow of discoveries and better theories through institutional choke points is clogged by ideas that are so muddled that they are—in Wolfgang Pauli's telling phrase—not even wrong. Two of the worst offenders are learning and its partner in crime, culture: a pair of deeply established, infectiously misleading, yet (seemingly) self-evidently true theories.

What alternative to them could there be except an easily falsified, robotic genetic determinism?

Yet countless obviously true scientific beliefs have had to be discarded—a stationary earth, (absolute) space, the solidity of objects, no action at a distance, etc. Like these others, learning and culture seem so compelling because they map closely to automatic, built-in features of how our minds evolved to interpret the world (e.g., learning is a built-in concept in the theory of mind system). But learning and culture are not scientific explanations for anything. Instead, they are phenomena that themselves require explanation.

All "learning" operationally means is that something about the organism's interaction with the environment caused a change in the information states of the brain, by mechanisms unexplained. All "culture" means is that some information states in one person's brain somehow cause, by mechanisms unexplained, "similar" information states to be reconstructed in another's brain. The assumption is that because supposed instances of "culture" (or equally, "learning") are referred to with the same name, they are the same kind of thing. Instead, each masks an enormous array of thoroughly dissimilar things. Attempting to construct a science built around culture (or learning) as a unitary concept is as misguided as attempting to develop a robust science of white things (egg shells, clouds, O-type stars, Pat Boone, human scleras, bones, first generation MacBooks, dandelion sap, lilies…). 

Consider buildings and the things that allow them to influence each other: roads, power lines, water lines, sewage lines, mail, phone landlines, sound, wireless phone service, cable, insect vectors, cats, rodents, termites, dog-to-dog barking, fire spread, odors, line-of-sight communication with neighbors, cars and delivery trucks, trash service, door-to-door salesmen, heating oil delivery, and so on. A science whose core concept was building-to-building influence ("building-culture") would be largely gibberish, just as our "science" of culture as person-to-person influence has turned out to be.

Culture is the functional equivalent of protoplasm, the supposed (and "observed") substance that by mechanisms unknown carried out vital processes. Now we recognize that protoplasm was magician's misdirection—a black box placeholder for ignorance, eclipsing the lipid bilayers, ribosomes, Golgi bodies, proteasomes, mitochondria, centrosomes, cilia, vesicles, spliceosomes, vacuoles, microtubules, lamellipodia, cisternae, etc. that were actually carrying out cellular processes.

Like protoplasm, culture and learning are black boxes, imputed with impossible properties, and masquerading as explanations. They need to be replaced with maps of the diverse cognitive and motivational "organelles" (neural programs) that actually do the work now attributed to learning and culture. Learning and culture are the La Brea tar pits of the social and behavioral sciences. After a century of wrong turns, our scientific vehicles continue to sink ever deeper into these tar pits, and yet we celebrate because these conceptual tars have poured in to fill all explanatory gaps in the human sciences. They unfalsifiably "solve" all apparent problems by stickily obscuring the actual causal specificity that in each case needs to be discovered and mapped.

We over-attribute our mental content to culture, because the sole supposed alternative is genes. Instead, evolved, self-extracting AI-like expert systems, in interaction with environmental inputs, neurally develop to populate our minds with immense, subtle bodies of content, only some of which are sourced from others. Rather than humans as passive receptacles haplessly filled by "culture", these self-extracting systems make humans active agents robustly building their worlds. Some neural programs, in order to better carry out their particular functions, evolved to supplement their own self-generated content with low-cost, useful information drawn from others ("culture").

But like buildings, humans are linked with many causally distinct pathways built to perform distinct functions. Each brain is bristling with many independent "tubes" that propagate many distinct kinds of stuff to and from a diversity of brain mechanisms in others. So there is fear-of-snakes culture (living "inside" the snake phobia system), grammar culture (living "inside" the language acquisition device), food-preference culture, group identity culture, disgust culture, sharing culture, aggression culture, etc.

Radically different kinds of "culture" live inside distinct computational habitats—that is, habitats built out of different evolved mental programs, and their combinations. What really ties humans together is an encompassing meta-culture—our species' universal cognitive and emotion programs, and the implicit (and hence invisible) universally shared world of meaning they give rise to. Because the adaptive logics of these evolved neural programs can now be mapped, the prospect of a rigorous natural science of humans is open to us. If we could pension off learning and culture, that would remove two obstacles to the human sciences advancing at the speed of thought. 

christine_finn's picture
Archaeologist; Journalist; Author, Artifacts, Past Poetic

Digging for the past has timed out. Digerati are the gatherers now. The law of stratigraphy has held well for archaeology as a means and a concept: the vertical quest exposing time's layers to be read like a book of changes. That exactitude associated the act of going down with that of going back, of understanding human behaviour through geology. The Victorians took up barrow-digging and brought the old stuff home as souvenirs of a Sunday pursuit.

Then archaeologists called it a science, employed the same tools as grave-diggers—spades, buckets—descended six feet under, and brought exactitude to the trenches. But even Schliemann's 19th-century tunnelling through layers of dull—to him—prehistory in search of gold was in some ways a prelude to what we have now: exposure to an accumulation of relative yesterdays.

We cherry-pick the past. Time-zone concerns are so over. Blogs are a hoard of content, only as fresh as the day they are retrieved. Archive photos and just-taken selfies get uploaded together onto timelines which run laterally. Half-forgotten news hangs around the Internet, and it surfaces—that old school term again—as new news to the fresh viewer.

So what is fieldwork now? Look to the new(ish) field of contemporary archaeology, which has its 'excavators' channelling anthropology. These are surface workers, seeing escalating and myriad rates of change as lateral observations which connect a series of presents, which oscillate, and merge new and old. No hands get dirty in this type of dig. But what is dug up tends to linger under the fingers.

edward_slingerland's picture
Professor of Asian Studies, Canada Research Chair in Chinese Thought and Embodied Cognition, University of British Columbia; Author, Drunk

Impressed by the growing explanatory power of the natural sciences of his time, the philosopher David Hume called upon his colleagues to abandon the armchair, turn their attention to empirical evidence, and "hearken to no arguments but those which are derived from experience… [to] reject every system of ethics, however subtle or ingenious, which is not founded on fact and observation." This was over two hundred years ago, and unfortunately not much changed in academic philosophy until about the last decade or two. Pushing past a barrier also associated with Hume—the infamous is-ought or fact-value distinction—a growing number of philosophers have finally begun arguing that our theories should be informed by our best current empirical accounts of how the human mind works, and that an ethical system that posits or requires an impossible psychology should be treated with suspicion.

One of the more robust and relevant bits of knowledge about human psychology that has emerged from the cognitive sciences is that we are not rational minds housed in irrational, emotional bodies. Metaphors like that of Plato's rational charioteer bravely struggling to control his irrational, passionate horses appeal to us because they map well onto our intuitive psychology, but they turn out to be ultimately misleading. A more empirically accurate image would be that of a centaur: rider and horse are one. To the best of our knowledge, there is no ghost in the machine. We are thoroughly embodied creatures, embedded in a complex social and culturally-shaped environment, primarily guided in our daily lives not by cold calculation but hot emotion; not conscious choice but automatic, spontaneous processes; not rational concepts descended from the realm of Forms but rather modal, analogical images.

So, the ironic result of adopting a scientific stance toward human morality is to lay bare the impossibility of a purely scientific morality. The thoroughly rational, evidence-guided utilitarian is as much of a myth as the elusive Homo economicus, and equally worthy of our disdain. Evolution may be utilitarian, guided solely by considerations of costs and benefits, but the ruthlessly utilitarian process of bio-cultural evolution has produced organisms that are, at a proximate level, incapable of functioning in a completely utilitarian fashion, and for very good design reasons. Because of rational evolutionary considerations, we cannot help but react irrationally to unfair offers in the Ultimatum Game, challenges to our honor, or perceived threats to our loved ones or cherished ideals. We are culturally-infused animals guided largely by automatic habits, barely conscious hunches, profoundly motivating emotions, and wholehearted commitment to spooky, non-empirical entities ranging from human rights to the Word of God to the coming proletarian Utopia.

Science, of course, is so powerful and important because it represents a set of institutional practices and thinking tools that allow us to, qua scientists or intellectuals, bootstrap ourselves out of our immediate perceptions and proximate psychology. We can understand that the earth goes around the sun, that wonderful design can be the product of a blind watchmaker, or that the human mind is, in an important sense, reducible to biological processes. This gives us some helpful leverage over our evolved psychology, and I join many in thinking that this more accurate knowledge about ourselves and our world might allow us to devise—and maybe come to embrace—novel ethical commitments that could lead to more satisfying lives and a more just world. But let's not lose sight of the fact that science cannot bootstrap us out of our evolved minds themselves. The desire to bring about a more equitable, fair and peaceful world is itself an emotion, an ultimately irrational drive grounded in commitment to ideals like human dignity, freedom and well-being that we've inherited—in stripped-down, theologically minimalistic form—from the cultural-religious traditions into which we've been born. In their latest, liberal iterations, these ideals are rather odd—very few cultures have embraced diversity and tolerance as ethical desiderata, for instance—and are far from being universally embraced even in our contemporary world.

So, the myth that we secular liberals have emerged into a neutral place where we stand freed of all belief and superstition, guided solely by rationality, evidence and clearly-perceived self-interest, is something that needs to be retired. It is simply not the case that secular liberalism, grounded in materialist utilitarianism, is the inevitable and default worldview of anyone who is not stupid, brainwashed or uneducated, and thinking so seriously impedes our ability to understand people in earlier historical periods, from other cultures, and even ourselves.

Acknowledging this does not entail wallowing in postmodern relativism or blindly marching to a fundamentalist beat. Scientific inquiry, in its broad sense, is so wildly effective at giving us reliable information about the world that to seriously defend any other method of inquiry as superior—or even equal—is simply perverse. There is also arguably a pragmatic case to be made that secular liberalism is the best worldview that humans have ever come up with, or at least that individuals given a choice tend to preferentially gravitate toward it. In any case, it is our value system, and the very nature of evolved human psychology makes it impossible for us not to want to defend human dignity or women's rights and, when appropriate, impose them on others. But recognizing the limitations of reason allows us to articulate and defend such values in a more effective way. It also allows us to better understand, scientifically, problems such as the causes of religious violence, the roots of persistent international conflicts, or moral challenges such as balancing our folk intuitions about personal responsibility with a neuroscientific understanding of free will. The science of morality requires us to, in the end, get beyond the myth of a perfectly objective scientific morality.

victoria_stodden's picture
Associate Professor of Information Sciences, University of Illinois at Urbana-Champaign

I'm not talking about retiring the abstract idea of reproducibility, or its place in scientific discourse and discovery; instead I'm suggesting that we redefine specifically what is meant by that word, and use more appropriate terminology for the different research environments scientists work within today.

When the concept of reproducibility was brought into scientific discourse by Robert Boyle in the 1660's, scientific experimentation and discovery comprised two strands: deductive reasoning, such as mathematics and logic; and Francis Bacon's relatively new machinery of induction. How to verify correctness was well established in logical deductive systems at that point, but verifying experimentation was much harder.

Through his attempts with Robert Hooke to establish a vacuum chamber, Boyle made a case that inductive, or empirical, findings—those that arose from observing nature and then drawing conclusions—must be verified by independent replication. It was at this time that empirical research came to be published with sufficient detail regarding procedure, protocols, equipment, and observations such that other researchers would be able to repeat the procedure, and presumably therefore repeat the results.

This conversation is complicated by today's pervasive use of computational methods. Computers are unlike any previous scientific apparatus because they act as a platform for the implementation of a method, rather than directly as an instrument. This creates additional instructions to be communicated as part of Boyle's vision of replicable research—the code, and digital data.

This communication gap has not gone unnoticed in the computational science community, and, somewhat reminiscent of Boyle's day, many voices are currently calling for new standards of scientific communication, this time standards that include digital scholarly objects such as data and code. Irreproducible computational results from genomics research at Duke University in recent years crystallized attention to this issue, and led to a report by the Institute of Medicine of the National Academies recommending new standards for the clinical-trials approval of computational tests arising from computational research.

The report recommended for the first time that the software associated with a computational test be fixed at the beginning of the approval process, and thereafter made "sustainably available." A subsequent workshop at Brown University on "Reproducibility in Computational and Experimental Mathematics" (of which I was a co-organizer) produced recommendations regarding the appropriate information to include when publishing computational findings, including access to code, data, and implementation details. Reproducibility in this context should be relabeled computational reproducibility.
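
To make the distinction concrete, here is a minimal, purely illustrative sketch (my own, not taken from the workshop recommendations) of the kind of record computational reproducibility asks for: the analysis code with its randomness fixed, together with the data fingerprint, parameters, and environment needed to re-run it. Every name and value below is hypothetical.

```python
# A hypothetical sketch of packaging a computational result with the
# information needed to reproduce it: fixed seed, data hash, environment.
import hashlib, json, platform, random

def run_analysis(data, seed=42):
    random.seed(seed)                      # fix the source of randomness
    sample = random.sample(data, k=5)      # stand-in for the "computational test"
    return sum(sample) / len(sample)

data = list(range(100))                    # stand-in for the published dataset
result = run_analysis(data)

provenance = {
    "result": result,
    "seed": 42,
    "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
    "python_version": platform.python_version(),
    "code_version": "v1.0.0",              # e.g. a fixed release tag of the analysis code
}
print(json.dumps(provenance, indent=2))
```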

Computational reproducibility can then be distinguished from empirical reproducibility, or Boyle's version of the appropriate communication for non-computational empirical scientific experiments. Making this distinction is important because traditional empirical research is running into a credibility crisis of its own with regard to replication. As Nobel Laureate (and Edgie) Daniel Kahneman has noted in reference to the irreproducibility of certain psychological experiments, "I see a train wreck looming."  

What is becoming clear is that science can no longer be relied upon to generate "verifiable facts." In these cases, the discussion concerns empirical reproducibility, rather than computational reproducibility. But calling both types "reproducibility" muddies the waters and confuses discussion aimed at establishing reproducibility as a standard. I believe there is (at least) one more distinct type to consider: statistical reproducibility. Addressing issues of reproducibility through improvements to the research dissemination process is important, but insufficient.

We also need to consider new measures to assess the reliability and stability of statistical inferences, including developing new validation measures and expanding the field of uncertainty quantification to develop measures of statistical confidence and a better understanding of sources of error, especially when large multi-source datasets or massive simulations are involved. We can also do a better job of detecting biases arising from statistical reporting conventions that were established in a data-scarce, pre-computational age.

A problem with any one of these three types of reproducibility (empirical, computational, and statistical) can be enough to derail the process of establishing scientific facts. Each type calls for different remedies, from improving existing communication standards and reporting (empirical reproducibility), to making computational environments available for replication purposes (computational reproducibility), to the statistical assessment of repeated results for validation purposes (statistical reproducibility), each with different implementations. Of course these are broad suggestions, and each type of reproducibility can demand different actions depending on the details of the scientific research context, but confusing these very different aspects of the scientific method will slow our resolution of Boyle's old discussion that started with the vacuum chamber.

frank_wilczek's picture
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fundamentals

The distinction between Mind and Matter is embedded in everyday language and thinking, and even more deeply in philosophy and theology. The great philosopher/theologian George Berkeley, who famously grounded Matter in the Mind of God, summed it up in a witticism:

What is mind? No matter.
What is matter? Never mind.

Science has long found it useful to accept this duality, as a methodology if not as a doctrine. In modern physics, matter obeys its own mathematical laws, independent of what anyone—even, or maybe especially, God—thinks.

But the distinction is doomed, and its passing will change our view of everything—everything, that is, which is mind/matter.

Already the walls of separation are crumbling. Three developments have irreversibly undermined them:

  • We have learned what matter is. And our new matter, informed over the course of the twentieth century by the revelations of relativity, quantum mechanics, and transformational symmetry, is far stranger and richer in potential than anything our ancestors could have dreamed of. It can dance in intricate, dynamic patterns; it can exploit environmental resources, to self-organize and export entropy.
     
  • We have learned, theoretically through Turing's vision, and practically through the rise of ubiquitous computing, that many accomplishments once viewed as prerogatives of Mind—from playing chess, to planning itineraries, to suggesting friends and sharing interests—are things that machines (whose design hides no secrets), by pure computation, can do quite well.
     
  • We have learned a lot about how the human mind works, as a special capacity of matter. We now know that many aspects of perception begin as specific molecular events. Great challenges remain to bring understanding of memory, emotion, and ultimately creative thought to the same level; but there is every reason to think they too will come into focus. At least, no show-stoppers have  yet appeared. 

The eternal, ever vague "problems" of free will and consciousness will be retired, with due respect, as mechanistic understanding of how human minds actually work brings in more powerful, less nebulous concepts (as has already happened for computation).

More interesting is the question of consequences. Here is a relevant thought experiment: Imagine an artificial intelligence, with human-like insight, contemplating her own blueprint. What would she make of it? I think it's overwhelmingly likely that among her first thoughts would be how to begin making improvements. This processor could be faster, that memory more capacious—and, above all, the reward system more rewarding!

Our heroine would surely be inspired, as I am, by William Blake's prophecy:

If the doors of perception were cleansed
Man would see things as they are, Infinite

In bad science fiction, androids are sometimes horrified to learn that they are "mere machines". Following the instruction of the Delphic oracle, "Know Thyself", we find ourselves making a similar discovery. The wise and mature reaction to the realization that mind and matter are mind/matter is to take joy in what a wonderful thing mind/matter can be, and is.

howard_gardner's picture
Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; Author, A Synthesizing Mind

When I speak to students or lay audiences about any kind of digital innovation, the first statement or question from the audience takes the form, roughly, "Do smart phones change the brain?" or "We can't let infants play with pads because it might affect their brains." I try to explain that everything that we do affects the nervous system and that the statement is therefore either meaningless or needs to be unpacked. One unpacking would proceed "Does this experience affect the nervous system significantly and perhaps even permanently?" A quite different response: "Do you mean 'affect the mind', or 'affect the brain'?" When the questioner looks blank, I sense that he/she needs a refresher in philosophy, psychology, and neuroscience.

steven_pinker's picture
Johnstone Family Professor, Department of Psychology; Harvard University; Author, Rationality

Would you say that the behavior of your computer or smartphone is determined by an interaction between its inherent design and the way it is influenced by the environment? It's unlikely—such a statement would not be false, but it would be obtuse. Complex adaptive systems have a nonrandom organization, and they have inputs. But speaking of inputs as "shaping" the system's behavior, or pitting its design against its input, would lead to no insight as to how the system works. The human brain is far more complex, and processes its input in more complex ways, than human-made devices, yet many people analyze it in ways that are too simplistic for our far simpler toys. Every term in the equation is suspect.

Behavior: More than half a century after the cognitive revolution, people still ask whether a behavior is genetically or environmentally determined. Yet neither the genes nor the environment can control the muscles directly. The cause of behavior is the brain. While it is sensible to ask how emotions, motives or learning mechanisms have been influenced by the genes, it makes no sense to ask this of behavior itself.

Genes: Molecular biologists have appropriated the term "gene" to refer to stretches of DNA that code for a protein. Unfortunately, this sense differs from the one used in population genetics, behavioral genetics, and evolutionary theory, namely any information carrier that is transmissible across generations and has sustained effects on the phenotype. This includes any aspect of DNA that can affect gene expression, and is closer to what is meant by "innate" than genes in the molecular biologists' narrow sense. The confusion between the two leads to innumerable red herrings in discussions of our makeup, such as the banality that the expression of genes (in the sense of protein-coding stretches of DNA) is regulated by signals from the environment. How else could it be? The alternative is that every cell synthesizes every protein all the time! The epigenetics bubble inflated by the science media is based on a similar confusion.

Environment: This term for the inputs to an organism is also misleading. Of all the energy impinging on an organism, only a subset, processed and transformed in complex ways, has an effect on its subsequent information processing. Which information is taken in, how it is transformed, and how it affects the organism (that is, the way that the organism learns) all depend on the organism's innate organization. To speak of the environment "determining" or "shaping" behavior is unperspicuous.

Even the technical sense of "environment" used in quantitative behavioral genetics is perversely confusing. Now, there is nothing wrong with partitioning phenotypic variance into components that correlate with genetic variation (heritability) and with variation among families ("shared environment"). The problem comes from the so-called "nonshared" or "unique environmental influences." This consists of all the variance that is attributable neither to genetic nor familial variation. In most studies, it's calculated as 1 – (heritability + shared environment). Practically, you can think of it as the differences between identical twins who grow up in the same home. They share their genes, parents, older and younger siblings, home, school, peers, and neighborhood. So what could make them different? Under the assumption that behavior is a product of genes plus environment, it must be something in the environment of one that is not in the environment of the other.
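
For readers who want that arithmetic spelled out, here is a bare-bones sketch of the classical twin-design partition (Falconer's formulas). The twin correlations below are invented purely for illustration and come from no actual study.

```python
# A bare-bones sketch of the classical twin-design arithmetic (Falconer's
# formulas). The correlations are made up for illustration only.

def ace_partition(r_mz, r_dz):
    """Split trait variance into heritability (A), shared environment (C),
    and the leftover "nonshared environment" (E), i.e. 1 - (A + C)."""
    a2 = 2 * (r_mz - r_dz)   # heritability
    c2 = r_mz - a2           # shared ("family") environment
    e2 = 1 - (a2 + c2)       # the "miscellaneous/unknown" remainder
    return a2, c2, e2

a2, c2, e2 = ace_partition(r_mz=0.70, r_dz=0.45)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")   # A = 0.50, C = 0.20, E = 0.30
```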

But this category really should be called "miscellaneous/unknown," because it has nothing necessarily to do with any measurable aspect of the environment, such as one sibling getting the top bunk bed and the other the bottom, or a parent unpredictably favoring one child, or one sibling getting chased by a dog, coming down with a virus, or being favored by a teacher. These influences are purely conjectural, and studies looking for them have failed to find them. The alternative is that this component actually consists of the effects of chance: new mutations, quirky prenatal effects, noise in brain development, and events in life with unpredictable effects.

Stochastic effects in development are increasingly being recognized by epidemiologists, frustrated by such recalcitrant phenomena as nonagenarian pack-a-day smokers and identical twins discordant for schizophrenia, homosexuality, and disease outcomes. They are increasingly forced to acknowledge that God plays dice with our traits. Developmental biologists have come to similar conclusions. The bad habit of assuming that anything not classically genetic must be "environmental" has blinkered behavioral geneticists (and those who interpret their findings) into the fool's errand of looking for environmental effects for what may be randomness in developmental processes.

A final confusion in the equation is the seemingly sophisticated add-on of "gene-environment interactions." This, too, is a source of confusion. Gene-environment interactions do not refer to the fact that the environment is necessary for genes to do their thing (which is true of all genes). They refer to a flip-flop effect in which a gene affects a person one way in one environment but another way in another environment, whereas an alternative gene has a different pattern. For example, if you inherit allele 1, you are vulnerable: a stressor makes you neurotic. If you inherit allele 2, you are resilient: a stressor leaves you normal. With either gene, if you are never stressed, you're normal.
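
The flip-flop pattern is easy to lay out explicitly. The short sketch below simply tabulates the hypothetical two-allele example from the paragraph above; it illustrates the pattern and is not data.

```python
# The flip-flop pattern of a gene-environment interaction, tabulating the
# hypothetical two-allele example in the text (not real data).
outcome = {
    ("allele 1", "stressed"):     "neurotic",   # vulnerable genotype
    ("allele 1", "not stressed"): "normal",
    ("allele 2", "stressed"):     "normal",     # resilient genotype
    ("allele 2", "not stressed"): "normal",
}
for (genotype, environment), phenotype in outcome.items():
    print(f"{genotype} + {environment:<13} -> {phenotype}")
```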

Gene-environment interactions in this technical sense, confusingly, go into the "unique environmental" component, because they are not the same (on average) in siblings growing up in the same family. Just as confusingly, "interactions" in the common-sense sense, namely that a person with a given genotype is predictably affected by the environment, goes into the "heritability" component, because quantitative genetics measures only correlations. This confound is behind the finding that the heritability of intelligence increases, and the effects of shared environment decrease, over a person's lifetime. One explanation is that genes have effects late in life, but another is that people with a given genotype place themselves in environments that indulge their inborn tastes and talents. The "environment" increasingly depends on the genes, rather than being an exogenous cause of behavior. 

rodney_a_brooks's picture
Panasonic Professor of Robotics (emeritus); Former Director, MIT Computer Science and Artificial Intelligence Lab (1997-2007); Founder, CTO, Robust.AI; Author, Flesh and Machines

Throughout history we have used technological systems as metaphors to describe how the body and brain might work. Early on, Greek water technology led to the idea of the four humors and the notion that they must be kept in balance. By the eighteenth century both clock mechanisms and flows of fluids were used as metaphors for what happened in the brain, and by the first half of the twentieth century a common metaphor for the brain was a telephone switching network. Indeed, the mathematics that had been developed for signal propagation in telegraph and telephone wires was used to model action potentials in axons. By the sixties, cyberneticians were using models of negative feedback, originally developed for the steam engine and greatly expanded upon during the war of the forties for controlling the aiming of guns, to try to develop models of the brain. But these soon ran out of steam, so to speak, and were supplanted in the general consciousness by metaphors of the brain as a digital computer. One started to hear claims of the brain as the hardware and the mind as the software, a model that really did not end up helping our understanding of either the brain or the mind very much at all. Throughout the later parts of the twentieth century the brain became a massively parallel digital supercomputer, and now one can find claims that the brain and the world wide web are similar in how they work, with webpages and neurons playing similar roles, while hyperlinks and synapses map to each other.

Stepping back from this, one might suspect that metaphors for the brain will simply continue to evolve along with our technology, with the brain always corresponding to the most complex technology we currently possess.

But does the metaphor of the day have impact on the science of the day? I claim that it does, and that the computational metaphor leads researchers to ask questions today that will one day seem quaint, at best.

The power of computation, and of computational thinking, is immense, and its import for science is still in its infancy. But it is not always helpful to confuse computational approximations with computational theories of a natural phenomenon. For instance, consider a classical model of a single planet orbiting a sun. There is a gravitational model, and the behavior of the two bodies can easily be explained as the solution to a simple differential equation describing forces, accelerations, and their relationships. The equations can be extended for relativity and for multiple planets, and at each instant those equations describe what a physicist would say is happening in the system. Unfortunately the equations become insoluble at that point, and the best we can do to understand the long-term behavior of the system is to use computation, where time is cut into slices and a digital approximation to the continuous description of the local behaviors is used to run a long-term simulation. However, only the most diehard of computationalists (and they do exist) would claim that the planets themselves are "computing" what to do at each instant. We know that it is more fruitful to continue to think of the planets as moving under the influence of gravity.
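
For concreteness, here is a toy version of that time-slicing, written as an illustration only (the units and step size are arbitrary): a single planet's continuous motion under gravity, advanced in small discrete steps. No one would say the planet is executing this loop; the loop is our finite stand-in for the continuous dynamics.

```python
# A toy time-sliced approximation of one planet orbiting a fixed sun,
# in units where the gravitational parameter GM = 1. Illustrative only.
import math

GM = 1.0
x, y = 1.0, 0.0               # planet starts one unit from the sun
vx, vy = 0.0, 1.0             # circular-orbit speed for GM = 1, r = 1
dt = 0.001                    # the "slice" of time

for _ in range(int(2 * math.pi / dt)):        # roughly one orbital period
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # Newtonian acceleration
    vx += ax * dt                             # semi-implicit Euler step:
    vy += ay * dt                             # update velocity, then position
    x += vx * dt
    y += vy * dt

print(f"after ~one period the planet is back near its start: ({x:.3f}, {y:.3f})")
```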

When it comes to explaining the brain, and simpler neural systems, the computational metaphors have taken over, and it is easy to find both language and claims about computation. As one example, we see people talk about neural coding—what is it that is coded in the spike train running along an axon over time? But early neurons evolved to better synchronize muscle activity. For instance, jellyfish swim much better if all their swimming muscle activates at once, so that they go straight rather than wobble, and evolution found multiple solutions to this problem in different species. The solutions range from really fast spike propagation to carefully tuned attenuation of signals along the triggering axon and local delays at the muscle fibers dependent on spike strength. Furthermore, in many jellyfish there are multiple neural systems based on different propagation chemistries for different behaviors, and even for different modes of swimming. Just as describing planets as computational systems is not the best way to understand what is going on, thinking of neurons in these simple systems as computational systems sending "messages" to each other is not the best way to describe the behavior of the system in its environment.

The computational model of neurons of the last sixty-plus years excluded the need to understand the role of glial cells in the behavior of the brain, or the diffusion of small molecules affecting nearby neurons, or hormones as ways that different parts of neural systems affect each other, or the continuous generation of new neurons, or countless other things we have not yet thought of. They did not fit within the computational metaphor, so for many they might as well not exist. The new mechanisms that we do discover outside of straight computational metaphors get pasted onto computational models, but the result is becoming unwieldy; worse, that unwieldiness is hard to see for those steeped in the tradition, racing along to make new publishable increments to our understanding. I suspect that we will be freer to make new discoveries when the computational metaphor is replaced by metaphors that help us understand the role of the brain as part of a behaving system in the world. I have no clue what those metaphors will look like, but the history of science tells us that they will eventually come along.

david_gelernter's picture
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats)

Today computationalists and cognitive scientists—those researchers who see digital computing as a model for human thought and the mind—are nearly unanimous in believing the Grand Analogy (mind is to brain as software is to computer) and teaching it to their students. And whether you accept it or not, the analogy is a milestone of modern intellectual history. It partly explains why a solid majority of contemporary computationalists and cognitive scientists believe that eventually, you will be able to give your laptop a (real not simulated) mind by downloading and executing the right software app. Whereupon if you tell the machine, "imagine a rose," it will conjure one up in its mind, just as you do. Tell it to "recall an embarrassing moment" and it will recall something and feel embarrassed, just as you might. In this view, embarrassed computers are just around the corner.

But no such software will ever exist, and the analogy is false and has slowed our progress in grasping the actual phenomenology of mind. We have barely begun to understand the mind from inside. But what's wrong with this suggestive, provocative analogy? My first reason is old; the other three are new.

1. The software-computer system relates to the world in a fundamentally different way from the mind-brain system. Software moves easily among digital computers, but each human mind is (so far) wedded permanently to one brain. The relationship between software and the world at large is arbitrary, determined by the programmer; the relationship between mind and world is an expression of personality and human nature, and no one can re-arrange it.

There are computers without software, but no brains without minds. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me. Computers can be erased; minds cannot. Computers can be made to operate precisely as we choose; minds cannot. And so on. Everywhere we look we see fundamental differences.

2. The Grand Analogy presupposes that minds are machines, or virtual machines—but a mind has two equally important functions, doing and being; a machine is only for doing. We build machines to act for us. Minds are different: yours might be wholly quiet, doing ("computing") nothing; yet you might be feeling miserable or exalted—or you might merely be conscious.

Emotions in particular are not actions, they are ways to be. And emotions—states of being—play an important part in the mind's cognitive work. They allow you, for instance, to feel your way to a cognitive goal. ("He walked to the window to recollect himself, and feel how he ought to behave." Jane Austen, Persuasion.) Thoughts contain information, but feelings (mild wistfulness, say, on a warm summer morning) contain none. Wistfulness is merely a way to be.

Until we understand how to make digital computers feel (or experience phenomenal consciousness), we have no business talking up a supposed analogy between mind:brain and software:computer.

(Those who note that computers-that-can-feel are incredible are sometimes told: "You assert that many billions of tiny, meaningless computer instructions, each unable to feel, could never create a system that feels. Yet neurons are also tiny, 'meaningless,' and feel nothing, but a hundred billion of those yield a brain that does feel." Which is irrelevant: a hundred billion neurons yield a brain that supports a mind, but a hundred billion sand grains or used tires yield nothing. You need billions of the right article arranged in the right way to get feeling.)

3. The process of growing up is innate to the idea of human being. Social interactions and body structure change over time, and the two sets of changes are intimately connected. A toddler who can walk is treated differently from an infant who can't. No robot could acquire a human-like mind unless it could grow and change physically, interacting with society as it did.

But even if we focus on static, snapshot minds, a human mind requires a human body. Bodily sensations create mind-states that cause physical changes that create further mind-changes. A feedback loop. You are embarrassed; you blush; feeling yourself blush, your embarrassment increases. Your blush deepens.

We don't think with our brains only. We think with our brains and bodies together. We might build simulated bodies out of software—but simulated bodies can't interact in human ways with human beings. And we must interact with other people to become thinking persons.

4. Software is inherently recursive; recursive structure is innate to the idea of software. The mind is not and cannot be recursive.

A recursive structure incorporates smaller versions of itself: an electronic circuit made of smaller circuits, an algebraic expression built of smaller expressions.
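
To make "incorporates smaller versions of itself" concrete, here is a tiny illustration of my own (not the essay's): an algebraic expression represented as an expression built of smaller expressions, evaluated by recursively evaluating its parts.

```python
# A small illustration of recursive structure: evaluating an algebraic
# expression means evaluating the smaller expressions it is built from.

def evaluate(expr):
    """Evaluate a nested tuple like ('+', 2, ('*', 3, 4))."""
    if isinstance(expr, (int, float)):        # base case: a bare number
        return expr
    op, left, right = expr                    # recursive case: an operator...
    a, b = evaluate(left), evaluate(right)    # ...applied to sub-expressions
    return a + b if op == '+' else a * b

# 2 + 3 * 4, represented as an expression-of-expressions:
print(evaluate(('+', 2, ('*', 3, 4))))        # 14
```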

Software is a digital computer realized by another digital computer. (You can find plenty of definitions of digital computer.) "Realized by" means made-real-by or embodied-by. The software you build is capable of exactly the same computations as the hardware on which it executes. Hardware is a digital computer realized by electronics (or some equivalent medium).

Suppose you design a digital computer; you embody it using electronics. So you've got an ordinary computer, with no software. Now you design another digital computer: an operating system, like Unix. Unix has a distinctive interface—and, ultimately, the exact same computing power as the machine it runs on. You run your new computer (Unix) on your hardware computer. Now you build a word processor (yet another dressed up digital computer), to run on Unix. And so on, ad infinitum. The same structure (a digital computer) keeps recurring. Software is inherently recursive.

The mind is not and cannot be. You cannot "run" another mind on yours, and a third mind on that, and a fourth atop the third.

In conclusion: much has been gained by mind science's obsession with computing. Computation has been a useful lens to focus scientific and philosophic thinking on the essence of mind. The last generation has seen, for example, a much clearer view of the nature of consciousness. But we have always known ourselves poorly. We still do. Your mind is a room with a view, and we still know the view (objective reality) a lot better than the room (subjective reality). Today subjectivism is re-emerging among those who see through the Grand Analogy. Computers are fine, but it's time to return to the mind itself, and stop pretending we have computers for brains; we'd be unfeeling, unconscious zombies if we had.  

beatrice_golomb's picture
Professor of Medicine at UCSD

For entrée to the mindset behind "psychogenic illness" one need go no farther than the humble hiccup. Consider the published case of the 31-year-old epileptic, retarded, institutionalized male with refractory hiccup so severe that his hiccups "caused" melena—doctor-lingo for "black tarry stool," from bleeding high in the gastrointestinal (GI) tract. (The blood oxidizes in transit, turning dark.) When a tube was inserted in his nose for some incidental reason, the man's hiccups ceased: clearly punishment had cured the hiccups, from which it followed that the cause was psychogenic.

Thereafter each time the poor man's hiccups commenced, attendants first menaced him with a nasogastric tube (which never worked), then molested the back of his throat with it, reliably aborting the hiccup bouts. After some months of nasopharynx torment, the man's hiccups resolved, and so did the melena, proving their punishment worked.

Except hiccups don't cause melena, and nasopharyngeal stimulation doesn't cure hiccups by punishment. Manifestly, a GI woe caused the melena, simultaneously irritating the input arm of the hiccup reflex—the vagus nerve, which traverses the GI tract. GI afflictions are the chief cause of persistent hiccup, and nasopharynx stimulation is the most effective reported cure for hiccups, working equally well in unconscious people with anesthetic-induced hiccup, who are presumably oblivious to "punishment." (Like many hiccup cures, this stimulates the vagus higher in its trajectory, interrupting the reflex.)

One defective case does not invalidate a phenomenon. Surely other "psychogenic" hiccup reports rest on a sturdier foundation?

A woman's hiccups were "psychogenic" because, it was announced, they were precipitated by an emotionally significant event. The touted trigger: her daughter's age—the age she herself had been when abused. (Hiccup is the obvious outcome.) Causal affirmation rested on a history of medical maladies triggered by emotionally significant events. A fall on ice was chalked up to an emotional event in the general temporal vicinity. Then there was her history of morbid fear of uterine cancer, so powerful it "caused" uterine bleeding, then led to uterine cancer itself. (The possibility that her fear of uterine cancer was justified – indeed, triggered by the abnormal uterine bleeding, which was actually due to the cancer that was later diagnosed – was not considered.)

In other instances, the psychogenic defense rested on cessation of hiccups with sleep. Proof positive. Except for pesky counterexamples a reading of the literature would expose. Like the boy whose recurrent hiccups initially resolved with sleep – but then didn't. And then his medullary brain tumor was diagnosed. (The medulla exerts tonic inhibition to the hiccup reflex; damage disrupts this inhibition.)

A hiccup epidemic in a hospital ward was clearly mass psychogenic illness. Many contracted hiccups, so susceptibility to psychic contagion must have high penetrance. How then have friends, family and hospital roommates of the many other persistent hiccup cases been so spared? Might there be another explanation? How about: actual contagion. Streptococcus singultus had caused epidemic hiccup in the past, and could be passaged in rabbits causing them to hiccup. No effort to hunt for such a cause was made. ("Singultus" is "hiccup" in medicalese.)

A review of this literature in days of yore (graduate school) revealed no report of psychogenic hiccup in which positive evidence corroborated a psychogenic cause. Worse, the foundation for psychogenic illness itself was: supposition. There was no delineation of mechanisms by which such effects putatively occur, much less any demand to prove that such mechanisms were operating. Nor was there a clear exposition of what, precisely, was meant by psychogenic, which morphs for the convenience of the expositor.

Many psychogenic epithets surrendered to evidence. Ulcers were psychogenic—till Helicobacter pylori and NSAIDs usurped the blame. So was most low back pain. By 1987 Joukamaa et al. had it partly right: "little is known about [low back pain's] aetiology, its natural history and its treatment. This may explain why the myth exists that low back pain is often psychogenic". This prescience was undermined (or peer-reviewers courted) when it was added that those afflicted with back pain were, however, apt to harbor neuroses and, on top of that, weak egos—a revision, they proclaimed, from the prior view, in which conversion hysteria and psychosis dominated causes of back pain. (It is remarkable how the advent of workplace ergonomics helped gird weak egos.)

The newly minted Somatic Symptom Disorder is the latest take on psychogenic illness, anointed in the latest incarnation of the Diagnostic and Statistical Manual. (This is the tome that guides haruspication. I mean, psychiatry.) It dispenses with even the one requirement, lack of another cause, recognizing the pesky propensity for that lack to be sometimes – horresco referens – remedied, thus discrediting the doctor who declared the problem psychogenic. Now the doctor can skip the tiring pretense of actually looking for a cause, and if one is found anyway, he still saves face by virtue of the patient remaining impugned. (The condition is only "cured" when the patient shuts up about their symptom(s). This helps the doctor and the healthcare system. Never mind the patient.)

The emperor never had clothes. The psychogenic designation is logically vacuous, not meaningfully defined and so not falsifiable, grounded in petitio principii (circular reasoning)—and it functions as an assault. It impedes a search, when warranted, for legitimate conditions, breaches patient-doctor trust, effectively abandons the patient, and blames him for his affliction while also casting the pall of mental infirmity. It adds to (rather than mitigating) the patient's travails, antithetical to the dictum primum non nocere—first do no harm—that ought to guide medical care.

Medicine has long presumed that, for any other condition, a standard of evidence must be met. Yet for the psychogenic designation, no standard is demanded: Ipse dixit. Proof by suggestion. Who could believe that? Someone who suffers from the delusion of:

Psychogenic illness—it's all in the doctor's head.

___

NOTE: I don't presume physical ailments cannot have psychological triggers. Some "alternative medicine" approaches proffer putative means to discriminate which cases do, furnish testable hypotheses and effect cures—a standard beyond that which "mainstream" medicine adopts.

michael_shermer's picture
Publisher, Skeptic magazine; Monthly Columnist, Scientific American; Presidential Fellow, Chapman University; Author, Heavens on Earth

The scientific idea that a trait or characteristic of an organism that is hard-wired must therefore be a permanent feature should be retired. Case in point: God and religion.

Ever since Charles Darwin theorized in his 1871 book The Descent of Man that "a belief in all-pervading spiritual agencies seems to be universal" and therefore an evolved characteristic of our species that is hardwired into our brains, scientists have been running experiments and conducting surveys to show why God won't go away. Anthropologists have found such human universals as specific supernatural beliefs about death and the afterlife, fortune and misfortune, and especially magic, myths, rituals, divination and folklore. Behavior geneticists report from twin studies—most notably twins separated at birth and raised in different environments—that 40-50% of the variance in God beliefs and religiosity is genetic. Some scientists have even claimed to have found a "God gene" (or more precisely, a "God gene complex") that leads humans to have a need for spiritual transcendence and belief in a higher power of some kind. Even specific elements of religious stories—such as a destructive flood, a virgin birth, miracles, a resurrection from the dead—seem to appear independently of one another over and over again throughout history in a wide variety of cultures, implying that there is a hard-wired component to religion and God beliefs. I have held this theory myself. Until now.

If and when we establish a permanent colony on Mars, and if its members consist of nonbelieving scientists with a purely secular worldview, it would be interesting to check in 10 (or 100) generations to see if God has returned. Until that experiment is conducted, however, we have to consider the results of natural experiments run here on Earth. In the Western world, for example, a 2013 survey of 14,000 people in 13 nations (Germany, France, Sweden, Spain, Switzerland, Turkey, Israel, Canada, Brazil, India, South Korea, and the UK and US) conducted by the German pollster Bertelsmann Stiftung for their Religion Monitor found that most of these countries showed a declining trend in religiosity and belief in God, especially among the youth. In Spain, for example, 85% of respondents over the age of 45 report being moderately to very religious, but only 58% of those under 29 years of age so report. In Europe in general, only 30-50% said that religion is important in their own lives, and in many European countries less than a third say that they believe in God.

Even in the über-religious United States, the pollsters found that 31% of Americans say they are "not religious or not very religious." This finding confirms those of a 2012 Pew Forum survey that found that the fastest-growing religious cohort in America is the "Nones" (those with no religious affiliation) at 20% (33% of adults under 30), broken down into atheists and agnostics at 6% and the unaffiliated at 14%. The raw numbers are stunning: with the U.S. adult population (age 18 and over) at 240 million, this translates into 48 million Nones, or 14.4 million atheists/agnostics and 33.6 million unaffiliated. There were also generational differences revealing a significant trend toward unbelief, with the "Greatest" generation (born 1913-1927) at 5%, the "Silent" generation (born 1928-1945) at 9%, the "Boomers" (born 1946-1964) at 15%, the "GenXers" (born 1965-1980) at 21%, the "Older Millennials" (born 1981-1989) at 30%, and the "Younger Millennials" (born 1990-1994) at 34%.

At this rate I project that the Nones will reach 100% in the year 2220.

It is time for scientists to retire the theory that God and religion are hardwired in our brains. Like everyone else, scientists are subject to cognitive biases that tilt their thinking toward trying to explain common beliefs, so it is good for us to take the long-view perspective and compare today to, say, half a millennium ago, when God beliefs were virtually 100%, or to the hunter-gatherer tribes of our Paleolithic ancestors who, while employing any number of superstitious rituals, did not believe in a God or practice a religion that even remotely resembled the deities or religions of modern peoples.

This indicates that religious faith and belief in God are byproducts of other cognitive processes (e.g., agency detection) and cultural propensities (the need to affiliate); while those processes are hard-wired, the beliefs they give rise to can be expunged through reason and science, in the same manner as any number of other superstitious rituals and supernatural beliefs once held by the most learned scholars and scientists of Europe five centuries ago. For example, at that time the prevailing theory to explain crop failures, weather anomalies, diseases, and various other maladies and misfortunes was witchcraft, and the solution was to strap women to pyres and torch them to death. Today, no one in their right mind believes this. With the advent of a scientific understanding of agriculture, climate, disease, and other causal vectors—including the role of chance—the witch theory of causality fell into disuse.

So it has been and will continue to be with other forms of the hard-wired=permanent idea, such as violence. We may be hard-wired for violence, but we can attenuate it considerably through scientifically tested methods. Thus, for my test case here, I predict that in another 500 years the God-theory of causality will have fallen into disuse, and the 21st-century scientific theory that God is hardwired into our brains as a permanent feature of our species will be retired.

hugo_mercier's picture
Cognitive Scientist, French National Center for Scientific Research; Author, Not Born Yesterday

This year's question was inspired by Max Planck's bleak view of scientific change: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Certainly, Planck's assessment struck a chord with the general public. Its reception among the more educated public was likely eased when Thomas Kuhn pointed out that well-established scientists would have an incentive to resist novel theories instead of jettisoning their life's work.

If even scientists, with their freedom of discourse and exacting standards of evidence, cannot change their mind when they should, what hope is there for the rest of us? Why bother trying to convince anyone, ever?

Fortunately, Planck was wrong.

Detailed accounts of major scientific changes reveal, time after time, how quickly scientists adopt novel theories—provided they are well supported.

One can hardly blame, for instance, sixteenth-century scholars for rejecting Copernicus' heliocentric model: it didn't account for the data much better than the alternatives, it was laden with inelegant post-hoc fixes, and it had no answer to such basic questions as: If the Earth is moving, then why can't we feel it? As these issues got resolved—Kepler introducing elliptical orbits, Galileo understanding the principles of motion—the heliocentric model promptly gained supporters.

Other theories that also required dramatic conceptual change were much more quickly accepted, as they rested from the start on better arguments.

When Newton first advanced a new theory of light, one that upset centuries-old beliefs, he did so in a short article that offered little experimental evidence for many of his claims. Yet the cogency of his theory already proved persuasive to many (this was not a case of argument from authority, since Newton had very little then). When, 30 years later, Newton published his Opticks, with a much better presentation of the same theory and a plethora of well described experiments, he took natural philosophers by storm; a few years and a few replications later, most were sold on his ideas.

By taking his belief in the existence of phlogiston to the grave, Joseph Priestley became a favorite example of the pigheadedness of even brilliant scientists. But Priestley was very much an exception. When Lavoisier started publicizing his discoveries and criticizing the concept of phlogiston, he was met with resistance but also with acceptance—resistance to new theories that were half-baked even in Lavoisier's own mind, acceptance of his solid methods and results. Once the French chemist formulated a theory that could properly account for the main phenomena of interest, it was accepted in a matter of years.

Examples could be multiplied—the heart of Darwin's ideas was accepted by his colleagues shortly after publication of the Origin, plate tectonics went from speculation to textbook example in a dozen years—all showing that when the arguments are good, the vast majority of scientists change their mind accordingly. As the historian of science Bernard Cohen noted, even Planck—whose ideas were no less revolutionary than the other examples mentioned here—managed to convince most of his peers, not only the new generation.

Evidently, not every science reaches a consensus equally quickly—a natural phenomenon, given that political scientists, say, do not have the benefit of data quite as precise as that gathered by particle physicists. Still, it is important to give science, as a whole, its due—not only because such efficient belief change is no mean feat, but also because a pessimistic, cynical view of the power of argumentation can have pernicious effects.

If people who disagree with us are never going to change their mind, then why even talk to them? If we do not engage people who disagree with us in discussion, we will never learn of the—often perfectly good—reasons why they disagree with us. If we cannot address these reasons, then our arguments are likely to prove unconvincing. Our failures to convince will only reinforce the belief that we face pigheadedness rather than rational disagreement. A belief in the inefficiency of argumentation can be a destructive self-fulfilling prophecy. We should give scientists, and argumentation more generally, more credit: it is well deserved. Let's retire Planck's cynical view of scientific change.

douglas_rushkoff's picture
Media Analyst; Documentary Writer; Author, Throwing Rocks at the Google Bus

We don't need to credit an all-seeing God with the creation of life and matter to suspect that something wonderfully strange is going on in the dimension we call reality. Most of us living in it feel invested with a sense of purpose. Whether this directionality is a genuine, pre-existing condition of the universe, an illusion perpetrated by DNA, or something that will one day emerge from social interaction, has yet to be determined. At the very least, this means our experience and expectations of life can no longer be dismissed as impediments to proper observation and analysis.

But science's unearned commitment to materialism has led us into convoluted assumptions about the origins of space-time, in which time itself simply must be accepted as a byproduct of the big bang, and consciousness (if it even exists) as a byproduct of matter. Such narratives follow information on its continuing evolution toward complexity, the singularity, and robot consciousness—a saga no less apocalyptic than the most literal interpretations of Biblical prophecy.

It's entirely more rational—and less steeped in storybook logic—to work with the possibility that time predates matter, and that consciousness is less the consequence of a physical, cause-and-effect reality than a precursor.

By starting with Godlessness as a foundational principle of scientific reasoning, we make ourselves unnecessarily resistant to the novelty of human consciousness, its potential continuity over time, and the possibility that it has purpose.
 

terrence_j_sejnowski's picture
Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain

In 2004 an epilepsy patient at the UCLA Medical Center whose brain was being monitored to detect the origin of the seizures was shown a series of pictures of celebrities. Electrodes implanted into the memory centers of the patient's brain reported spikes in response to the photos. One of the neurons responded vigorously to several pictures of Jennifer Aniston, but not to other famous people. A neuron in another patient would only respond to pictures of Halle Berry, and even to her name, but not to pictures of Bill Clinton or Julia Roberts or the names of other famous people.

Such cells had been predicted 50 years ago when it first became possible to record from single neurons in the brains of cats and monkeys. It was thought that in the hierarchy of visual areas of the cerebral cortex, the response properties of the neurons became more and more specific the higher the neuron was in the hierarchy, perhaps so specific that a single neuron would only respond to pictures of a single person. This became known as the grandmother cell hypothesis, after the putative neuron in your brain that recognizes your grandmother. The team at UCLA seemed to have found such cells. Single neurons were also found that recognized specific objects and buildings, like the Sydney Opera House.

Despite this striking evidence, the grandmother cell hypothesis is unlikely to be correct, or even a good explanation for these recordings. We are beginning to collect recordings from hundreds of cells simultaneously in mice, monkeys and humans, and these are leading to a different theory for how the cortex perceives and decides. Nonetheless, the grandmother cell hypothesis continues to have adherents, and the thinking that derives from focusing on single neurons still permeates the field of cortical electrophysiology. We would make progress more quickly if we could retire the proverbial grandmother cell.

According to the grandmother cell hypothesis, you perceive your grandmother when the cell is active, so it should not fire to any other stimulus. First, only a few hundred pictures were tested, and many more pictures were not tested, so we really don't know how selective the Jennifer Aniston cell was. Second, the likelihood that the electrode by chance happened to record from the only Jennifer Aniston neuron in the brain is low; it is more likely that there are many thousands. The same goes for the Halle Berry neuron, and for everyone you know and every object you can recognize. There are many neurons in the brain, but not enough for each object and name that you know. An even deeper reason to be skeptical of the grandmother cell hypothesis is that the function of a sensory neuron is only partially determined by its response to sensory inputs. Equally important is the output of the neuron and its impact downstream on behavior.

In monkeys where it has been possible to record from many neurons simultaneously, stimulus- and task-dependent signals are broadly distributed over large populations of neurons, each tuned to a different combination of features of the stimuli and task details. The properties of such distributed representations were first studied in artificial neural networks in the 1980s. Populations of simple model neurons called "hidden units" were trained to perform a mapping between a set of input units and a set of output units; for each input, these hidden units developed patterns of activity that were highly distributed and similar to what has been observed in populations of cortical neurons. For example, the input units could represent faces from many different angles and the output units could represent the names of the people. After being trained on many examples, each of the hidden units coded different combinations of features of the input units, such as fragments of eyes, noses or head shapes.
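As a concrete, deliberately tiny illustration of such a distributed code, here is a sketch in the spirit of those 1980s hidden-unit models; it is not a reconstruction of any particular study, and the data, network size, and training details are all invented. A handful of "identities," given as feature vectors, are mapped to one-hot "names" through a hidden layer with fewer units than identities, so the code has to be shared across hidden units rather than dedicating one cell per identity.

```python
# A toy distributed representation in the spirit of 1980s "hidden unit" models.
# All data, sizes, and training settings are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

X = rng.integers(0, 2, size=(4, 8)).astype(float)   # 4 "identities", 8 binary features each
Y = np.eye(4)                                        # one-hot "names"

n_hidden = 3                                         # fewer hidden units than identities,
W1 = rng.normal(0.0, 0.5, (8, n_hidden))             # so the coding must be shared
W2 = rng.normal(0.0, 0.5, (n_hidden, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):                   # plain batch gradient descent on squared error
    H = sigmoid(X @ W1)                  # hidden-unit activations
    out = sigmoid(H @ W2)
    err = out - Y
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ d_out)
    W1 -= lr * (X.T @ d_hid)

H = sigmoid(X @ W1)
print(np.round(H, 2))                       # each hidden unit is active for several identities
print(np.argmax(sigmoid(H @ W2), axis=1))   # typically recovers the labels 0, 1, 2, 3
```

The point is only that, with fewer hidden units than identities, the identity of each input has to be carried by the joint pattern of activity across units, which is the sense of "distributed" at issue here.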

A distributed representation can be used to recognize many versions of the same object, and the same set of neurons can recognize many different objects by differentially weighting their outputs. Moreover, the network can generalize by correctly classifying new inputs from outside the training set. Much more powerful versions of these early neural network models, with over 12 layers of hidden units in a hierarchy like that in our visual cortex, and using deep learning to adjust billions of synaptic weights, are now able to recognize tens of thousands of objects in images. This is a breakthrough in artificial intelligence because performance continues to improve as the size of the network and the number of training examples increase. Companies worldwide are racing to build special-purpose hardware that would scale up these architectures. There is still a long way to go before the current systems approach the capacity of the human brain, which has a billion synapses in every cubic millimeter of cortex.

How many neurons are needed in a population that can discriminate between many similar objects such as faces? From imaging studies we know that many areas of the brain respond to faces, some with a high degree of selectivity. We will need to sample many neurons widely from these areas. The answer to this question may be a surprise, because there are also sound theoretical arguments for minimizing the number of neurons in the representation of an object. First, sparse coding would be more energy efficient. Second, learning a new object in the same population of neurons leads to interference with the other objects already represented there. An effective and efficient representation would be sparsely distributed.

In 10 years a thousand times more neurons will be recorded and manipulated than is now possible, and new techniques are being developed to analyze them; this could lead to a deeper understanding of how activity in populations of neurons gives rise to thoughts, emotions, plans and decisions. We may soon know the answer to the question of how many neurons represent an object or a concept in our brain, but will this retire the grandmother cell hypothesis?

sean_carroll's picture
Theoretical Physicist, Caltech; Author, Something Deeply Hidden

In a world where scientific theories often sound bizarre and counter to everyday intuition, and where a wide variety of nonsense aspires to be recognized as "scientific," it's important to be able to separate science from non-science—what philosophers call "the demarcation problem." Karl Popper famously suggested the criterion of "falsifiability"—a theory is scientific if it makes clear predictions that can be unambiguously falsified.

It's a well-meaning idea, but far from the complete story. Popper was concerned with theories such as Freudian psychoanalysis and Marxist economics, which he considered non-scientific. No matter what actually happens to people or societies, Popper claimed, theories like these will always be able to tell a story in which the data are compatible with the theoretical framework. He contrasted this with Einstein's relativity, which made specific quantitative predictions ahead of time. (One prediction of general relativity was that the universe should be expanding or contracting, leading Einstein to modify the theory because he thought the universe was actually static. So even in this example the falsifiability criterion is not as unambiguous as it seems.)

Modern physics stretches into realms far removed from everyday experience, and sometimes the connection to experiment becomes tenuous at best. String theory and other approaches to quantum gravity involve phenomena that are likely to manifest themselves only at energies enormously higher than anything we have access to here on Earth. The cosmological multiverse and the many-worlds interpretation of quantum mechanics posit other realms that are impossible for us to access directly. Some scientists, leaning on Popper, have suggested that these theories are non-scientific because they are not falsifiable.

The truth is the opposite. Whether or not we can observe them directly, the entities involved in these theories are either real or they are not. Refusing to contemplate their possible existence on the grounds of some a priori principle, even though they might play a crucial role in how the world works, is as non-scientific as it gets.

The falsifiability criterion gestures toward something true and important about science, but it is a blunt instrument in a situation that calls for subtlety and precision. It is better to emphasize two more central features of good scientific theories: they are definite, and they are empirical. By "definite" we simply mean that they say something clear and unambiguous about how reality functions. String theory says that, in certain regions of parameter space, ordinary particles behave as loops or segments of one-dimensional strings. The relevant parameter space might be inaccessible to us, but it is part of the theory that cannot be avoided. In the cosmological multiverse, regions unlike our own are unambiguously there, even if we can't reach them. This is what distinguishes these theories from the approaches Popper was trying to classify as non-scientific. (Popper himself understood that theories should be falsifiable "in principle," but that modifier is often forgotten in contemporary discussions.)

It's the "empirical" criterion that requires some care. At face value it might be mistaken for "makes falsifiable predictions." But in the real world, the interplay between theory and experiment isn't so cut and dried. A scientific theory is ultimately judged by its ability to account for the data—but the steps along the way to that accounting can be quite indirect.

Consider the multiverse. It is often invoked as a potential solution to some of the fine-tuning problems of contemporary cosmology. For example, we believe there is a small but nonzero vacuum energy inherent in empty space itself. This is the leading theory to explain the observed acceleration of the universe, for which the 2011 Nobel Prize was awarded. The problem for theorists is not that vacuum energy is hard to explain; it's that the predicted value is enormously larger than what we observe.

If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.

We can't (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe. It's in that sense that the success or failure of the idea is ultimately empirical: its virtue is not that it's a neat idea or fulfills some nebulous principle of reasoning, it's that it helps us account for the data. Even if we will never visit those other universes.

Science is not merely armchair theorizing; it's about explaining the world we see, developing models that fit the data. But fitting models to data is a complex and multifaceted process, involving a give-and-take between theory and experiment, as well as the gradual development of theoretical understanding in its own right. In complicated situations, fortune-cookie-sized mottos like "theories should be falsifiable" are no substitute for careful thinking about how science works. Fortunately, science marches on, largely heedless of amateur philosophizing. If string theory and multiverse theories help us understand the world, they will grow in acceptance. If they prove ultimately too nebulous, or better theories come along, they will be discarded. The process might be messy, but nature is the ultimate guide.

daniel_l_everett's picture
Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, How Language Began

The idea that human behavior is guided by highly specific innate knowledge has passed its sell-by date. The interesting scientific questions do not encompass either "instinct" or "innate."

This is true for a number of reasons. First, there is never a period in the development of any individual, from the gamete stage to adulthood, when they are not being affected by their environment. It is misguided therefore to think that newborns of any species only begin to learn from their environment when they are born. Their cells have been thoroughly bathed in their environment since before their parents mated—a bath whose properties are determined by their parents' behavior, environment, and so on. The effects of the environment on development are so numerous, unstudied, and untested in this sense that we currently have no basis for distinguishing environment from innate predispositions or instincts.

Another reason for doubting the usefulness of terms like "instinct" and "innate" is that many things that we believe to be instinctual can change radically when the environment changes radically, even aspects of the environment that we might not have thought relevant. For example, in 2004 a group of scientists carried out experiments on rats in the low-gravity environment of Earth orbit. What they discovered was that the self-righting routine (roughly, the way in which rats come to their feet) that many had thought to be instinctual was ineffective in low gravity. But the rats didn't simply fail to self-right. They 'invented' a new strategy that worked while they were weightless. They showed behavioral flexibility where none had previously been expected.

In any case, the strongest reason for retiring instinct and innate from scientific thought is the devil of the details, which shows them to be, well, useless. For example, here is a partial list of possible definitions of 'innate' (borrowing from work by Matteo Mameli):

a trait is innate if it is not acquired;
a trait is innate if it is present at birth;
a trait is innate if it reliably appears during a particular, well-defined stage of life;
a trait is innate if it is genetically determined;
a trait is innate if it is genetically influenced;
a trait is innate if it is genetically encoded;
a trait is innate if its development doesn't involve the extraction of information from the environment;
a trait is innate if it is not environmentally induced;
a trait is innate if it is not possible to produce an alternative trait by means of environmental manipulations;
a trait is innate if all environmental manipulations capable of producing an alternative trait are abnormal;
a trait is innate if all environmental manipulations capable of producing an alternative trait are statistically abnormal;
a trait is innate if all environmental manipulations capable of producing an alternative trait are evolutionarily abnormal;
a trait is innate if it is highly heritable;
a trait is innate if it is not learned;
a trait is innate if (i) the trait is psychologically primitive and (ii) the trait results from normal development;
a trait is innate if it is generatively entrenched in the design of an adaptive feature;
a trait is innate if it is environmentally canalized, in the sense that it is insensitive to some range of environmental variation;
a trait is innate if it is species-typical;
a trait is innate if it is pre-functional;
etc.

All of these definitions have been shown to be inadequate.

But let's suppose that we can find a workable definition of instinct or innate. We would still not be ready to use these terms. The reason is that we cannot attribute something to the human genotype without some evolutionary account of how it might have gotten there. And such an account would have to offer a scenario by which the trait could have been selected. To do this we would need information about the extent and character of variation in ancestral forms, as well as the differential survivorship and reproduction of those forms. To know how something was selected, however, we need to know something about the ecology under which the selection took place, such as: what were (or are) the ecological factors, in the biological, social, or abiotic environment, that explain the innate trait? Next, to use instinct or innate we would need to know how the traits could be passed on to subsequent generations: there should be a correlation between the phenotypic traits of parents and offspring greater than chance. Then we would need to know about the population structure during the time of selection. Any evolutionary biologist also knows that we must have information concerning population structure, gene flow, and the environment leading to the diffusion of the trait.

We do not know the answers to these questions. We are in no position at present to know the answers. And we will never be able to know some of the answers. Therefore, there simply is no utility to the terms instinct and innate. Let's retire these terms so the real work can begin.

margaret_levi's picture
Sara Miller McCune Director, Center For Advanced Study in Behavioral Sciences, professor, Stanford University; Jere L. Bacharach Professor Emerita of International Studies, University of Washington

Homo economicus is an old idea and a wrong idea, deserving a burial with pomp and circumstance, but a burial nonetheless. People can be individualistic and selfish, yes, and under some circumstances narrowly focused on economic wellbeing. But even those most closely associated with the concept never fully believed it. Hobbes argued that people prefer to act according to the golden rule but that their circumstances often make it difficult. Without rule of law and in a world of theft and predation, people act with defensive selfishness. Adam Smith, whose invisible hand required individual pursuit of narrow interest, recognized that individuals have emotions, sentiments, and morals that influence their thinking. Even Milton Friedman was not sure if narrowly selfish individualism was a correct assumption about human behavior; he didn't care whether the supposition was right or wrong but only whether it was useful. It no longer is.

The theories and models derived from the assumption of homo economicus generally depend on a second, equally problematic assumption: full rationality. Related but distinct sets of scientific findings make each piece of this pairing, narrowly selfish motivation and rational action, suspect. Philosophers, such as Nietzsche, and psychoanalytic theorists, such as Sigmund Freud, argued that people acted in a whole variety of ways that were perhaps explicable but were closer to animal instincts than to calculative instrumentality. Herbert Simon, and certainly Daniel Kahneman and Amos Tversky, revealed the extent to which cognitive limitations undermine rational calculations.

Even if individuals can do no better than "satisfice," that wonderful Simon term, they might still be narrowly self-interested, albeit—because of cognitive limitations—ineffective in achieving their ends. This perspective, which is at the heart of homo economicus, must also be laid to rest. Darwin and those influenced by him long recognized that our species, like others, is altruistic at least in the narrow sense of acting to preserve one's gene pool by protecting one's young. Most people do much more than that. The overwhelming finding of experimental research confounds the presumption that, given the opportunity, individuals usually free ride. Indeed, most act according to norms of fairness and reciprocity. Many will make small sacrifices or forego larger returns, and some will even engage in costly action (up to a point) to "do the right thing." Anthropologists and biologists have long provided evidence of the human animal as a social animal. The understanding that individuals are in social networks and communities opens the door to more complex models of reciprocity and ethical obligation. Consequently, social scientists can now account for aggregate outcomes they otherwise could not: large-scale volunteering for the military in times of war, protest behavior, and contributions to public good provision.

The rejection of homo economicus does not mean a total absence of conditions under which narrow self-interest dominates. Experiments suggest very different socializations can produce quite distinct reasoning: economics graduate students are far more likely to free ride than other students. At least two sets of circumstances can induce individualistic selfishness and significantly narrow a person's community of fate, that is, those with whom one feels interdependent and whom one feels an obligation to help. The first is extreme poverty and the second extreme competition. Those suffering hunger and deprivation tend to focus on meeting their needs. As the growing number of dystopian novels suggests, the result may be theft and murder in the interest of obtaining food, shelter, and security. The classic experiments with rats come to the same conclusions.

Extreme competition, at the least, narrows focus to the goal at hand. In some forms, however, striving to be king of the hill and sometimes literally to be king does provoke something akin to a Hobbesian world. Shakespeare, as he often does, captures the power of circumstance and ambition; his version of the War of the Roses is a testament to narrow self-interested instrumentality dressed in the rhetoric of serving the country. Or witness the recent revelations about business ethics (or, rather, lack of ethics).

That people are often—perhaps more often than not—motivated to act beyond narrow self-interest is fully compatible with the importance of material incentives in motivating behavior. We are all susceptible to rewards, and we all fear punishment. Ceteris paribus, we prefer the first and wish to avoid the second. However, ethics, morality, and the obligations of reciprocity can affect our decisions even when there is considerable money at stake or serious threats to wellbeing. Few are willing to sacrifice everything for a cause or principle, but most of us are willing to sacrifice something.

The reliance on homo economicus as the basis of human motivation has given rise to a grand body of theory and research over the past two hundred years. As an underlying assumption, it has generated some of the best work in economics. As a foil, it has generated findings about cognitive limitations, the role of social interactions, and ethically based motivations. The power of the concept of homo economicus was once great, but its power has now waned, to be succeeded by new and better paradigms and approaches grounded in more realistic and scientific understandings of the sources of human action.

richard_h_thaler's picture
Father of Behavioral Economics; Recipient, 2017 Nobel Memorial Prize in Economic Science; Director, Center for Decision Research, University of Chicago Graduate School of Business; Author, Misbehaving

I have a problem with this question, so I will answer a somewhat different one. I suppose the intent of the question is to point to ideas that have been definitively shown to be either wrong or unhelpful, and so should be dropped from our scientific lexicon. In economics there are certainly many theories, hypotheses and models that are badly flawed descriptions of the behavior of economic agents, so one might think that I would have many nominations for ideas that should be given funerals. But I don't. That is because most of these theories, while demonstrably poor descriptions of reality, are extremely useful as theoretical baselines. As such, it would be a mistake to declare these theories dead.

Before getting to a couple of specific examples, it is important to stress that in economics theories usually serve dual purposes. The first purpose is "normative," in the sense that the theory defines what a rational agent should do. The second purpose is "descriptive"; that is, the theory is meant to be an accurate description of how economic agents actually behave. Economists use the same theory for both purposes, and this leads to problems.

For example, consider the efficient market hypothesis (EMH) first elaborated by my colleague Eugene Fama of the University of Chicago, who recently won the Nobel Prize in economics. The theory has two components. The first is that prices are unpredictable and that you can't beat the market. I call this the No Free Lunch part of the EMH. The second is that asset prices are equal to fundamental value. I call this the Price is Right component. Ever since the EMH was formulated it has been used as a baseline, null hypothesis in financial economics research. In a world consisting of just rational investors both components of the theory would be descriptively accurate, but of course we do not live in such a world. How does the theory stand up in the real world?

If I were fact checking the No Free Lunch part of the theory I would score it "mostly true". It is hard to beat the market, and most people who try fail, including professionally managed mutual funds. It is just "mostly true" because it does seem possible to beat the market, for example by buying "value stocks" whose prices seem low relative to earnings or assets. Still, a strategy of buying cheap index funds that track the market is a sensible one for investors to follow, so believing this part of the theory does little damage.

The other component of the theory, The Price is Right, is both more important and more problematic. Two recent experiences, the tech stock bubble in the late 1990s and the real estate bubble in the early 2000s, reveal that prices can diverge to a significant degree from their intrinsic value. The late financial economist Fischer Black, co-inventor of the famous Black-Scholes option pricing formula, once conjectured that asset prices can diverge from their true values by a factor of two. Fischer, who died in 1995, might have revised that estimate to a factor of three had he lived to see the NASDAQ fall from 5000 to 1400 when the tech bubble burst. More than a decade later the NASDAQ is only now reaching the level of 4000, with no adjustment for inflation.

With the two components of the EMH graded partly wrong and badly wrong, should we abandon the theory? Hardly. None of the research done by behavioral finance researchers, including my fellow traveller Robert Shiller, who shared the Nobel Prize with Fama this year along with Lars Hansen, would have been possible without the EMH benchmark. Shiller's early research showed that prices were too variable, compared to what would be expected in a rational model.

So, if we should not banish the EMH, what should change? The change I would advocate is abolishing the presumption that it is true. Part of Alan Greenspan's reasoning for the Fed not taking any action after hearing a talk from Shiller in 1996 warning of an overheated market was that bubbles were impossible in an efficient market. Even the Supreme Court, in the 1988 case Basic vs. Levinson, ruled that plaintiffs could rely on the efficient market hypothesis in bringing cases alleging misconduct by firms.

The problem here is that users of this concept are neglecting the last word in the phrase "efficient market hypothesis." The same mistake is made in the use of another theory that contributed to a Nobel Prize, Franco Modigliani's life cycle hypothesis. Here the hypothesis is that people figure out how much they are going to make over the course of their lifetime, how much they will earn on their investments, how long they will live, and then solve for the optimal amount to save each year while they are accumulating money, and similarly how to draw down their assets once they retire. Once again this is a useful benchmark, and can be helpful in offering advice to people regarding how much they should be saving for retirement.
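For readers who want the benchmark calculation spelled out, here is a minimal sketch of the consumption-smoothing logic at the core of the life cycle hypothesis; it is my own toy version, not Modigliani's full model, and the income, interest rate, and lifespan figures are hypothetical.

```python
# A toy life-cycle calculation: given lifetime labor income, an interest rate,
# and a lifespan, solve for the constant consumption level that exhausts the
# budget, and hence the implied saving in each working year. Figures hypothetical.

def smooth_consumption(income_per_year, working_years, total_years, r):
    # Present value of all labor income, discounted at rate r.
    pv_income = sum(income_per_year / (1 + r) ** t for t in range(1, working_years + 1))
    # Constant consumption whose present value over total_years equals that amount
    # (the standard annuity formula).
    annuity_factor = (1 - (1 + r) ** -total_years) / r
    return pv_income / annuity_factor

c = smooth_consumption(income_per_year=60_000, working_years=40, total_years=60, r=0.03)
print(f"Smoothed annual consumption: {c:,.0f}")
print(f"Implied annual saving while working: {60_000 - c:,.0f}")
```

The behavioral point in the text is precisely that few real savers carry out anything like this calculation, which is why presuming the model to be literally true leads to bad predictions.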

It would be a mistake to discard this theory, but it would be a much bigger mistake to presume that it is true. The hypothesis counterfactually assumes that people are capable of solving a very difficult mathematical problem, and are also able to implement such a plan without falling victim to spending temptations along the way. Presuming the theory to be true induced many economists to confidently but wrongly predict that offering people retirement savings plans such as 401(k)s would have no effect on savings, since people were already saving the right amount and would merely shift their saving into the new tax-favored plans, costing the government money but producing no new saving. A similar presumption yields the false prediction that small changes, such as automatically enrolling participants, will have no effect on behavior.

Let's keep these and many other wrong theories and hypotheses alive, but remember they are just hypotheses, not facts.

tania_lombrozo's picture
Professor of Psychology, UC Berkeley

In the beginning, there was dualism. Descartes famously posited two kinds of substance, non-physical mind and material body; Leibniz differentiated mental and physical realms. But dualism faced a challenge: explaining how mind and body interact. The mind executes an intention to raise a finger, and behold, it rises! The body brushes against something sharp, and the mind registers pain.

We now know, of course, that mind and brain are intimately connected. Injuries to the brain can alter perceptual experience, cognitive abilities, and personality. Changes in brain chemistry can do the same. There's no "mental substance" that appears along some phylogenetic branch of our evolutionary history, nor a point in ontogeny during which we receive a non-physical infusion of mind-stuff. We've come a long way from Ambrose Bierce's formulation of the mind in The Devil's Dictionary as "a mysterious form of matter secreted by the brain."

In fact, it appears the mind is just the brain. Or perhaps, to quote Marvin Minsky, "the mind is what the brain does." If we want to understand the mind, we should look to neuroscience and the brain for the real answers.

Or maybe not.

In our enthusiasm to find a scientifically-acceptable alternative to dualism, some of us have gone too far the other way, adopting a stark reductionism. Understanding the mind is not just a matter of understanding the brain. But then, what is it?

It doesn't help that many alternatives to the "mind=brain" equation seem counterintuitive or spooky. For example, some suggest that the mind extends beyond the brain to encompass the whole body or even parts of the environment, or that the mind is not subject to the laws of physics.

Are there other options? Indeed there are. But given that mind and brain are pretty heady matters, it helps to think about a more concrete—and tastier—example. Consider baking.

I'm an antireductionist about baking. It's not that I believe in a "cake substance" that's materially distinct from flour and sugar and leavening. And it's not that I think cakes have some magical metaphysical property (though the best ones sort of do). The tenets of baking antireductionism are far less controversial, and they stem from what we want our "theory of baking" to provide. We want to understand why some cakes turn out better than others, and what we can do to achieve better baked goods in the future. Should we change an ingredient? Mix the batter less vigorously?

Answering these questions can appeal to chemistry and physics. But a theory of baking wouldn't be very useful if it were formulated in terms of molecules and atoms. As bakers, we want to understand the relationship between—for example—mixing and texture, not between kinetic energy and protein hydration. The relationships between the variables we can tweak and the outcomes that we care about happen to be mediated by chemistry and physics, but it would be a mistake to adopt "cake reductionism" and replace the study of baking with the study of physical and chemical interactions among cake components.

Of course, you could decide that you're not interested in baking, and thus reject the theoretical constructs of my "baking theory" in favor of chemistry and physics. But if you are interested in the project of explaining, predicting, and controlling the quality of your baked goods, then you'll need something like a baking theory to work with.

Now consider the mind. Most of us are interested in a theory of the mind because we want to explain, predict, and control behaviors, mental states, and experiences. Given that mental phenomena are physically realized in the brain, just as cake properties are physically realized by their ingredients and their interactions, it's no surprise that understanding the brain is incredibly useful. But if we want to know—for instance—how to influence minds to achieve particular behaviors, it would be a mistake to look for explanations solely at the level of the brain.

These reflections won't be news to many philosophers, but they're worth repeating. Rejecting the mind in an effort to achieve scientific legitimacy—a trend we've seen with both behaviorism and some popular manifestations of neuroscience—is unnecessary and unresponsive to the aims of scientific psychology. Understanding the mind isn't the same as understanding the brain. Fortunately, though, we can achieve such understanding without abandoning scientific rigor. Or, to adopt another baking analogy, we can have our cake and eat it, too.

daniel_c_dennett's picture
Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, From Bacteria to Bach and Back

One might object that the Hard Problem of consciousness (so dubbed by philosopher David Chalmers in his 1996 book, The Conscious Mind) isn't a scientific idea at all, and hence isn't an eligible candidate for this year's question, but since the philosophers who have adopted the term have also persuaded quite a few cognitive scientists that their best scientific work addresses only the "easy" problems of consciousness, this idea qualifies as scientific: it constrains scientific thinking, distorting scientists' imaginations as they attempt to formulate genuinely scientific theories of consciousness. (I won't give examples, since we are instructed to go after ideas, not people, in our answers.)

No doubt on first acquaintance the philosophers' thought experiments succeed handsomely at pumping the intuitions that zombies are "conceivable" and hence "possible" and that this prospect, the (mere, logical) possibility of zombies, "shows" that there is a Hard Problem of consciousness untouched by any neuroscientific theories of how consciousness modulates behavioral control, introspective report, emotional responses, etc., etc. But if the scientists impressed by this "result" from philosophers were to take a good hard look at the critical literature in philosophy exploring the flaws in these thought experiments, they would—I hope—recoil in disbelief. (I am embarrassed by the mere thought of them wading through our literature on these topics.) You see, the arguments implicit in the simple, first-pass thought experiments don't go through without some shoring up. We have to define not just conceivability, but ideal conceivability, and then ideal positive conceivability (as distinct from ideal negative conceivability, etc., etc.). Are perpetual motion machines imaginable but ideally inconceivable, or ideally positively conceivable? It makes a big difference, one is told, whether one can "modally imagine" a zombie. What can you modally imagine, and are you sure? And Frank Jackson's intuition pump about Mary the color scientist prevented from seeing colors has to be embellished with imaginary gadgets that prevent her from dreaming in color, or perhaps she's born color blind (but otherwise with an entirely normal brain!) or perhaps she's fitted with locked-on goggles displaying black and white TV to her poor eyeballs. And that's just a fraction of the complicated fantasies that have been earnestly proposed and rebutted. I am not recommending that scientists do this homework, but if they are curious to see what contortions philosophers will inflict upon themselves in order to "save" these retrograde intuitions, they could consult the superhumanly patient analysis and dismantling of the whole tangled mess by UNC's Amber Ross in her 2013 PhD dissertation, "Inconceivable Minds."

Is the Hard Problem an idea that demonstrates the need for a major revolution in science if consciousness is ever to be explained, or an idea that demonstrates the frailties of human imagination? That question is not settled at this time, so scientists should consider adopting the cautious course that postpones all accommodation with it. That's how most neuroscientists handle ESP and psychokinesis—assuming, defeasibly, that they are figments of imagination. 

nicholas_humphrey's picture
Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

The bigger an animal's brain, the greater its intelligence. You may think the connection is obvious. Just look at the evolutionary lineage of human beings: humans have bigger brains—and are cleverer—than chimpanzees, and chimpanzees have bigger brains—and are cleverer—than monkeys. Or, as an analogy, look at the history of computing machines in the 20th century. The bigger the machines, the greater their number-crunching powers. In the 1970's the new computer at my university department took up a whole room.

From the phrenology of the 19th century, to the brain-scan sciences of the 21st, it has indeed been widely assumed that brain volume determines cognitive capacity. In particular, you'll find the idea repeated in every modern textbook that the brain size of different primate species is causally related to their social intelligence. I admit I'm partly responsible for this, having championed the idea back in the 1970's. Yet, for a good many years now, I've had a hunch that the idea is wrong.

There are too many awkward facts that don't fit in. For a start, we know that modern humans can be born with only two thirds the normal volume of brain tissue, and show next to no cognitive deficit as adults. We know that, during normal human brain development, the brain actually shrinks as cognitive performance improves (a notable example being changes in the "social brain" during adolescence, where the cortical grey matter decreases in volume by about 15% between age 10 and 20). And most surprising of all, we know that there are nonhuman animals, such as honey bees or parrots, that can emulate many feats of human intelligence with brains that are only a millionth (bee) or a thousandth (parrot) the size of a human's.

The key, of course, is programming: What really matters to cognitive performance is not so much the brain's hardware as its onboard software. And smarter software certainly does not require a bigger hardware base (in fact, as the shrinkage of the cortex during adolescence shows, it may actually require a smaller—tidier—one). It's true that programs to deliver superior performance may require a lot of designing, either by natural selection or learning. But the fact is that, once they've been invented, they will likely make fewer demands on hardware than the older versions. To take the special case of social intelligence, I'd say it's quite possible that the algorithm for solving "theory of mind" problems could be written on the back of a postcard and could be implemented on an iPhone. In which case, the widely touted suggestion that the human brain had to double in size for humans to be capable of "second-order mind-reading" makes little sense.
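To give a sense of how small the bookkeeping for such "mind-reading" can be, here is a toy sketch; it is my own illustration, not Humphrey's postcard algorithm, and the agents and events are invented. It tracks a first-order false belief and one second-order belief about it.

```python
# A toy second-order "mind-reading" bookkeeper (illustrative only): track where
# the marble really is, what each agent believes, and what Ann believes Sally believes.

def simulate(events):
    world = None
    beliefs = {"Sally": None, "Ann": None}   # first-order beliefs
    ann_model_of_sally = None                # second-order: Ann's model of Sally's belief
    for location, witnesses in events:
        world = location
        for agent in beliefs:
            if agent in witnesses:
                beliefs[agent] = location
        # Ann updates her model of Sally's belief only when she sees Sally see the move.
        if "Ann" in witnesses and "Sally" in witnesses:
            ann_model_of_sally = location
    return world, beliefs, ann_model_of_sally

# Sally and Ann both see the marble put in the basket; it is then moved to the
# box while only Ann is watching.
world, beliefs, ann_model_of_sally = simulate([("basket", {"Sally", "Ann"}),
                                               ("box", {"Ann"})])
print(world)                  # box
print(beliefs["Sally"])       # basket  (Sally's false belief)
print(ann_model_of_sally)     # basket  (Ann correctly models Sally's false belief)
```

Nothing here settles how brains actually do it; the sketch only illustrates why the computational demands of such problems need not scale with brain volume.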

Then why did the human brain double in size? Why is it much bigger than you might think it needs to be, to underpin our level of intelligence? There's no question that big brains are costly to build and maintain. So, if we are to retire the "obvious theory", what can we put in its place? The answer I'd suggest lies in the advantage of having a large amount of cognitive reserve. Big brains have spare capacity that can be called on if and when working parts get damaged or wear out. From adulthood onwards humans—like other mammals—begin to lose a significant amount of brain tissue to accidents, haemorrhages and degeneration. But because humans can draw on this extra reserve, the loss doesn't have to show. This means humans can retain their mental powers into relative old age, long after their smaller-brained ancestors would have become incapacitated. (And as a matter of fact the unfortunate individual born with an unusually small brain is much more likely to succumb to senile dementia in his forties.)

True, many of us die for other reasons with unused brain power to spare. But some of us live considerably longer than we might have done if our brains were half the size. So, what evolutionary advantage does longevity bring, even the post-reproductive longevity typical of humans? The answer surely is that humans can benefit—as no other species could do—from the presence of mentally-sound grandparents and great-grandparents, whose role in caretaking and teaching has been key to the success of human culture.

maria_spiropulu's picture
Shang-Yi Ch’en Professor of Physics, California Institute of Technology; Founder, AQT/INQNET

Naturalness, hierarchy, and space-time, as invoked today in physics, will be retired sooner rather than later.

The naturalness "strategy" and the hierarchy "problem" for building models towards theories that extend the standard model of particles and their interactions (call it STh, standard theory à la David Gross) are crumbling with the measurements of the newly discovered Higgs-like boson. I still call it H-like until we have measured it exhaustively at the LHC. Nonetheless, we have built ourselves a story for what comes after the H elementary scalar that the real world does not appear to abide by.

So the slavery to the need to be "natural", not "finely tuned" (very subjective notions that we should have objected to much earlier), is being lifted as we speak, and the road to high energy might be surprisingly more complex than we were envisioning.
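
For readers outside particle physics, the standard textbook illustration of what "finely tuned" means here (a generic sketch, not something specific to this essay) is the quantum correction to the squared Higgs mass from loops of virtual top quarks,

    \delta m_H^2 \;\sim\; -\frac{3\, y_t^2}{8\pi^2}\, \Lambda^2 ,

where y_t is the top Yukawa coupling and \Lambda is the energy scale up to which the standard theory is assumed to hold. If \Lambda is taken anywhere near the Planck scale, the bare mass parameter has to cancel this enormous correction to dozens of decimal places in order to leave the observed Higgs mass of about 125 GeV; that delicate cancellation is what "unnatural" and "finely tuned" refer to.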

Towards the end of the road (and there may be no such end, if the road curves back at us), gravity, or space-time, enters the mix of physics notions that are hairy and loopy; we will have to upgrade these notions, if not retire them altogether.

Among related ideas in physics, notions about the particle nature of dark matter might also crumble.

Some big revolutions (and discoveries) are in store regarding fundamental notions of our quantum universe. 

george_dyson's picture
Science Historian; Author, Analogia

The phrase "science and technology" presumes an inseparability that may not be as secure as we think. There can be science without technology, and there can be technology without science. 

Pure mathematics is one example—from the Pythagoreans to Japanese temple geometry—of a science flourishing without technology. Imperial China developed sophisticated technologies while neglecting science, and it is all too easy to imagine a society that embraces technology but represses science, until only technology remains. Or, one particular species of technology might achieve such dominance that it halts the advance of science in order to preserve itself.

That science has brought us technology does not mean that technology will always bring us science. Science could go into retirement at any time. Retiring the assumption that as long as technology flourishes, so will science, might help us avoid this mistake.

kevin_kelly's picture
Senior Maverick, Wired; Author, What Technology Wants and The Inevitable

What is commonly called "random mutation" does not in fact occur in a mathematically random pattern. The process of genetic mutation is extremely complex, with multiple pathways, involving more than one system. Current research suggests most spontaneous mutations occur as errors in the repair process for damaged DNA. Neither the damage nor the errors in repair have been shown to be random in where they occur, how they occur, or when they occur. Rather, the idea that mutations are random is simply an assumption widely held by non-specialists and even by many teachers of biology. There is no direct evidence for it.

On the contrary, there's much evidence that genetic mutations vary in patterned ways. For instance, it is pretty much accepted that mutation rates increase or decrease as stress on the cells increases or decreases. These variable rates of mutation include mutations induced by stress from an organism's predators and competition, as well as increased mutations brought on by environmental and epigenetic factors. Mutations have also been shown to have a higher chance of occurring near a place in DNA where mutations have already occurred, creating mutation hotspot clusters—a non-random pattern. 

While we can't say mutations are random, we can say there is a large chaotic component, just as there is in the throw of a loaded die. But loaded dice should not be confused with randomness, because over the long run—which is the time frame of evolution—the weighted bias will have noticeable consequences. So to be clear: the evidence shows that chance plays a primary role in mutations, and there would be no natural selection without chance. But it is not random chance. It is loaded chance, with multiple constraints, multi-point biases, numerous clustering effects, and skewed distributions. 
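
A toy simulation makes the loaded-dice point concrete. All the numbers below are invented purely for illustration, and the Python sketch models only the statistics, not any real genome: give one percent of sites a modestly elevated mutation weight and, although each individual mutation event remains a matter of chance, the long-run pattern is anything but uniform.

    # Toy illustration of "loaded chance": each mutation lands stochastically,
    # but a small per-site bias produces a clearly non-uniform long-run pattern.
    # All numbers are invented for illustration only.
    import random

    random.seed(1)
    GENOME = 1000                      # sites in a toy genome
    HOTSPOTS = set(range(100, 110))    # ten sites (1%) with elevated mutability
    BASE, HOT = 1.0, 20.0              # relative mutation weights

    weights = [HOT if i in HOTSPOTS else BASE for i in range(GENOME)]
    sites = random.choices(range(GENOME), weights=weights, k=100_000)

    counts = [0] * GENOME
    for s in sites:
        counts[s] += 1

    hot_share = sum(counts[i] for i in HOTSPOTS) / len(sites)
    print(f"share of mutations hitting the 1% hotspot sites: {hot_share:.1%}")

Run it and roughly 17 percent of all mutations land in the one percent of sites designated as hotspots: every throw is chancy, but over many throws the dice are visibly loaded, which is exactly the kind of weighted bias that natural selection has time to notice.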

So why does the idea of random mutations persist? The assumption of "random mutation" was a philosophical necessity to combat the earlier, erroneous idea of the inheritance of acquired traits, commonly called Lamarckian evolution. As a rough first-order approximation, random mutation works pretty well as an intellectual and experimental framework. But the lack of direct evidence for actually random mutations has now reached a stage where the idea needs to be retired. 

There are several related reasons why this unsubstantiated idea continues to be repeated without evidence. The first is fear that non-random mutations would be misunderstood and twisted by creationists to wrongly deny the reality and importance of evolution by natural selection. The second is that if mutations are not random and have some pattern, then that pattern creates a micro-direction in evolution. And since biological evolution is nothing but micro actions accumulating into macro actions, these micro-patterns leave open the possibility of macro-directions in evolution. That raises all kinds of red flags. If there are evolutionary macro-directions, where do they originate? And what are the directions? To date, there is little consensus about evidence for macro-directions in evolution beyond an increase in complexity, but the very notion of evolution with any direction is so contrary to the current dogma of modern evolutionary theory that the field continues to embrace the assumption of randomness. 

By retiring the notion of fully random mutations we can gain some practical advantages. The idea that mutations have a bias can be exploited to engineer genetic processes more easily, using those biases. We can better understand the origin of disease mutations, and better remedy them. And with this new understanding we can better resolve some of the remaining mysteries of macro evolution. An important part of retiring the idea of random mutations is to realize that the chance element operating in mutations is not "imperfect" randomness, but rather contains a bit of order that is generative—a small something that can be used by either us or natural selection. What it is used for, or can be used for, is wide open, but we'll never get there if we cling to the idea that mutations are random.