2011

WHAT SCIENTIFIC CONCEPT WOULD IMPROVE EVERYBODY'S COGNITIVE TOOLKIT?

ANDY CLARK
Philosopher and Cognitive Scientist, University of Edinburgh; Author, Supersizing the Mind: Embodiment, Action, and Cognitive Extension

Predictive Coding

The idea that the brain is basically an engine of prediction is one that will, I believe, turn out to be very valuable not just within its current home (computational cognitive neuroscience) but across the board: for the arts, for the humanities, and for our own personal understanding of what it is to be a human being in contact with the world.

The term 'predictive coding' is currently used in many ways, across a variety of disciplines. The usage I recommend for the Everyday Cognitive Toolkit is, however, more restricted in scope. It concerns the way the brain exploits prediction and anticipation in making sense of incoming signals and using them to guide perception, thought, and action. Used in this way, 'predictive coding' names a technically rich body of computational and neuroscientific research (key theorists include Dana Ballard, Tobias Egner, Paul Fletcher, Karl Friston, David Mumford, and Rajesh Rao). This corpus of research uses mathematical principles and models that explore in detail the ways that this form of coding might underlie perception, and inform belief, choice, and reasoning.

The basic idea is simple. It is that to perceive the world is to successfully predict our own sensory states. The brain uses stored knowledge about the structure of the world and the probabilities of one state or event following another to generate a prediction of what the current state is likely to be, given the previous one and this body of knowledge. Mismatches between the prediction and the received signal generate error signals that nuance the prediction or (in more extreme cases) drive learning and plasticity.
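A minimal sketch of that core loop, in code of my own devising rather than anything drawn from the technical literature cited above (the learning rate and the sensory stream are invented for illustration):

```python
# Toy predictive-coding loop (an illustration, not a model from the cited
# literature): the system predicts its next sensory sample from an internal
# estimate, and the mismatch (prediction error) nudges that estimate.

def update_estimate(estimate, sensory_sample, learning_rate=0.1):
    """Move the internal estimate toward the incoming signal by a
    fraction of the prediction error."""
    prediction = estimate                # the system's best guess
    error = sensory_sample - prediction  # mismatch drives change
    return estimate + learning_rate * error

estimate = 0.0
for sample in [1.0, 1.0, 1.2, 0.9, 1.1]:   # a stream of sensory samples
    estimate = update_estimate(estimate, sample)
    print(f"sample={sample:.1f}  new estimate={estimate:.3f}")
```

Small mismatches nuance the estimate; a persistently large mismatch is the kind of error signal that, on this account, drives learning and plasticity.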

We may contrast this with older models in which perception is a 'bottom-up' process, in which incoming information is progressively built (via some kind of evidence accumulation process, starting with simple features and working up) into a high-level model of the world. According to the predictive coding alternative, the reverse is the case. For the most part, we determine the low-level features by applying a cascade of predictions that begins at the very top, with our most general expectations about the nature and state of the world providing constraints on our successively more detailed (fine-grained) predictions.

This inversion has some quite profound implications.

First, the notion of good ('veridical') sensory contact with the world becomes a matter of applying the right expectations to the incoming signal. Subtract such expectations and the best we can hope for are prediction errors that elicit plasticity and learning. This means, in effect, that all perception is some form of 'expert perception', and that the idea of accessing some kind of unvarnished sensory truth is untenable (unless that merely names another kind of trained, expert perception!).

Second, the time course of perception becomes critical. Predictive coding models suggest that what emerges first is the general gist (including the general affective feel) of the scene, with the details becoming progressively filled in as the brain uses that larger context — time and task allowing — to generate finer and finer predictions of detail. There is a very real sense in which we properly perceive the forest before the trees.

Third, the line between perception and cognition becomes blurred. What we perceive (or think we perceive) is heavily determined by what we know, and what we know (or think we know) is constantly conditioned on what we perceive (or think we perceive). This turns out to offer a powerful window on various pathologies of thought and action, explaining the way hallucinations and false beliefs go hand-in-hand in schizophrenia, as well as other more familiar states such as 'confirmation bias' (our tendency to 'spot' confirming evidence more readily than disconfirming evidence).

Fourth, if we now consider that prediction errors can be suppressed not just by changing predictions but by changing the things predicted, we have a simple and powerful explanation for behavior and the way we manipulate and sample our environment. In this view, action exists to make predictions come true, and this provides a nice account of phenomena that range from homeostasis to the maintenance of our emotional and interpersonal status quo.

Understanding perception as prediction thus offers, it seems to me, a powerful tool for appreciating both the power and the potential pitfalls of our primary way of being in contact with the world. Our primary contact with the world, all this suggests, is via our expectations about what we are about to see or experience. The notion of predictive coding, by offering a concise and technically rich way of gesturing at this fact, provides a cognitive tool that will more than earn its keep in science, law, ethics, and the understanding of our own daily experience.


CLAY SHIRKY
Social & Technology Network Topology Researcher; Adjunct Professor, NYU Graduate School of Interactive Telecommunications Program (ITP); Author, Cognitive Surplus

Pareto Principle

You see the pattern everywhere: the top 1% of the population control 35% of the wealth. On Twitter, the top 2% of users send 60% of the messages. In the health care system, the treatment for the most expensive fifth of patients creates four-fifths of the overall cost. These figures are always reported as shocking, as if the normal order of things has been disrupted, as if the appearance of anything other than a completely linear distribution of money, or messages, or effort, is a surprise of the highest order.

It's not. Or rather, it shouldn't be.

The Italian economist Vilfredo Pareto undertook a study of market economies a century ago, and discovered that no matter what the country, the richest quintile of the population controlled most of the wealth. The effects of this Pareto Distribution go by many names — the 80/20 Rule, Zipf's Law, the Power Law distribution, Winner-Take-All — but the basic shape of the underlying distribution is always the same: the richest or busiest or most connected participants in a system will account for much, much more wealth, or activity, or connectedness than average.

Furthermore, this pattern is recursive. Within the top 20% of a system that exhibits a Pareto distribution, the top 20% of that slice will also account for disproportionately more of whatever is being measured, and so on. The most highly ranked element of such a system will be much more highly weighted than even the #2 item in the same chart. (The word "the" is not only the commonest word in English; it appears twice as often as the second most common, "of".)

This pattern was so common that Pareto called it a "predictable imbalance"; despite this bit of century-old optimism, however, we are still failing to predict it, even though it is everywhere.

Part of our failure to expect the expected is that we have been taught that the paradigmatic distribution of large systems is the Gaussian distribution, commonly known as the bell curve. In a bell curve distribution like height, say, the average and the median (the middle point in the system) are the same — the average height of a hundred American women selected at random will be about 5'4", and the height of the 50th woman, ranked in height order, will also be 5'4".

Pareto distributions are nothing like that — the recursive 80/20 weighting means that the average is far from the middle. This in turn means that in such systems most people (or whatever is being measured) are below average, a pattern encapsulated in the old economics joke: "Bill Gates walks into a bar and makes everybody a millionaire, on average."
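A short simulation (a sketch of my own, with an arbitrarily chosen shape parameter) makes both points concrete: in a Pareto-distributed sample the mean sits far above the median, and the top 20% account for most of the total.

```python
import random

# Sketch: sample "incomes" from a Pareto distribution (shape parameter chosen
# arbitrarily for illustration) and compare mean, median, and the share held
# by the top 20% of the sample.
random.seed(0)
incomes = sorted(random.paretovariate(1.2) for _ in range(100_000))

mean = sum(incomes) / len(incomes)
median = incomes[len(incomes) // 2]
top_20_share = sum(incomes[int(0.8 * len(incomes)):]) / sum(incomes)

print(f"mean   = {mean:.2f}")    # far above the median
print(f"median = {median:.2f}")  # most observations fall below the mean
print(f"top 20% hold {top_20_share:.0%} of the total")
```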

The Pareto distribution shows up in a remarkably wide array of complex systems. Together, "the" and "of" account for 10% of all words used in English. The most volatile day in the history of a stock market will typically be twice as volatile as the second-most volatile day, and ten times as volatile as the tenth. Tag frequency on Flickr photos obeys a Pareto distribution, as does the magnitude of earthquakes, the popularity of books, the size of asteroids, and the social connectedness of your friends. The Pareto Principle is so basic to the sciences that special graph paper that shows Pareto distributions as straight lines rather than as steep curves is manufactured by the ream.

And yet, despite a century of scientific familiarity, samples drawn from Pareto distributions are routinely presented to the public as anomalies, which prevents us from thinking clearly about the world. We should stop thinking that average family income and the income of the median family have anything to do with one another, or that enthusiastic and normal users of communications tools are doing similar things, or that extroverts should be only moderately more connected than normal people. We should stop thinking that the largest future earthquake or market panic will be as large as the largest historical one; the longer a system persists, the likelier it is that an event twice as large as all previous ones is coming.

This doesn't mean that such distributions are beyond our ability to affect. A Pareto curve's decline from head to tail can be more or less dramatic, and in some cases political or social intervention can affect that slope — tax policy can raise or lower the share of income of the top 1% of a population, just as there are ways to constrain the overall volatility of markets, or to reduce the band in which health care costs can fluctuate.

However, until we assume such systems are Pareto distributions, and will remain so even after any such intervention, we haven't even started thinking about them in the right way; in all likelihood, we're trying to put a Pareto peg in a Gaussian hole. A hundred years after the discovery of this predictable imbalance, we should finish the job and actually start expecting it.


KEVIN KELLY
Editor-At-Large, Wired; Author, What Technology Wants

The Virtues of Negative Results

We can learn nearly as much from an experiment that does not work as from one that does. Failure is not something to be avoided but rather something to be cultivated. That's a lesson from science that benefits not only laboratory research, but design, sport, engineering, art, entrepreneurship, and even daily life itself. All creative avenues yield the maximum when failures are embraced. A great graphic designer will generate lots of ideas knowing that most will be aborted. A great dancer realizes most new moves will not succeed. Ditto for any architect, electrical engineer, sculptor, marathoner, startup maven, or microbiologist. What is science, after all, but a way to learn from things that don't work rather than just those that do? What this tool suggests is that you should aim for success while being prepared to learn from a series of failures. More so, you should carefully but deliberately press your successful investigations or accomplishments to the point that they break, flop, stall, crash, or fail.

Failure was not always so noble. In fact, in much of the world today failure is still not embraced as a virtue. It is a sign of weakness, and often a stigma that prohibits second chances. Children in many parts of the world are taught that failure brings disgrace, and that one should do everything in one's power to succeed without failure. The rise of the West is in many respects due to the rise in tolerating failure. Indeed, many immigrants trained in a failure-intolerant culture may blossom out of stagnancy once they move into a failure-tolerant culture. Failure liberates success.

The chief innovation that science brought to the state of defeat is a way to manage mishaps. Blunders are kept small, manageable, constant, and trackable. Flops are not quite deliberate, but they are channeled so that something is learned each time things fail. It becomes a matter of failing forward.

Science itself is learning how to better exploit negative results. Due to the problems of costly distribution, most negative results have not been shared, thus limiting their potential to speed learning for others. But increasingly, published negative results (which include experiments that succeed in showing no effects) are becoming another essential tool in the scientific method.

Wrapped up in the idea of embracing failure is the related notion of breaking things to make them better, particularly complex things. Often the only way to improve a complex system is to probe its limits by forcing it to fail in various ways. Software, among the most complex things we make, is usually tested for quality by employing engineers to systematically find ways to crash it. Similarly, one way to troubleshoot a complicated device that is broken is to deliberately force negative results (temporary breaks) in its multiple functions in order to locate the actual dysfunction. Great engineers have a respect for breaking things that sometimes surprises non-engineers, just as scientists have a patience with failures that often perplexes outsiders. But the habit of embracing negative results is one of the most essential tricks to gaining success.
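A minimal sketch of that testing habit in code (the routine under test and its inputs are hypothetical): instead of checking only the inputs we expect to work, we throw randomized, often malformed input at a function specifically to find what breaks it.

```python
import random

def parse_fraction(text):
    """A hypothetical routine under test: parse 'a/b' into a float."""
    numerator, denominator = text.split("/")
    return float(numerator) / float(denominator)

# Deliberately hunt for failures with random, frequently malformed inputs.
random.seed(1)
alphabet = "0123456789/ .x-"
for _ in range(10_000):
    candidate = "".join(random.choice(alphabet) for _ in range(random.randint(0, 8)))
    try:
        parse_fraction(candidate)
    except Exception as exc:  # each crash is a negative result worth recording
        print(f"breaks on {candidate!r}: {exc.__class__.__name__}")
        break
```

Each recorded crash is a small, manageable, trackable failure of exactly the kind described above.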


ALISON GOPNIK
Psychologist, UC Berkeley; Author, The Philosophical Baby

The Rational Unconscious

One of the greatest scientific insights of the twentieth century was that most psychological processes are not conscious. But the "unconscious" that made it into the popular imagination was Freud's irrational unconscious — the unconscious as a roiling, passionate id, barely held in check by conscious reason and reflection. This picture is still widespread even though Freud has been largely discredited scientifically.

The "unconscious" that has actually led to the greatest scientific and technological advances might be called Turing's rational unconscious .If the vision of the "unconscious" you see in movies like Inception was scientifically accurate, it would include phalanxes of nerds with slide rules, instead of women in negligees wielding revolvers amid Daliesque landscapes.. At least that might lead the audience to develop a more useful view of the mind if not, admittedly, to buy more tickets.

Earlier thinkers like Locke and Hume anticipated many of the discoveries of psychological science but thought that the fundamental building blocks of the mind were conscious "ideas". Alan Turing, the father of the modern computer, began by thinking about the highly conscious and deliberate step-by-step calculations performed by human "computers" like the women decoding German ciphers at Bletchley Park. His first great insight was that the same processes could be instantiated in an entirely unconscious machine with the same results. A machine could rationally decode the German ciphers using the same steps that the conscious "computers" went through. And the unconscious relay and vacuum tube computers could get to the right answers in the same way that the flesh and blood ones could.

Turing's second great insight was that we could understand much of the human mind and brain as an unconscious computer too. The women at Bletchley Park brilliantly performed conscious computations in their day jobs, but they were unconsciously performing equally powerful and accurate computations every time they spoke a word or looked across the room. Discovering the hidden messages about three-dimensional objects in the confusing mess of retinal images is just as difficult and important as discovering the hidden messages about submarines in the incomprehensible Nazi telegrams, and the mind turns out to solve both mysteries in a similar way.

More recently, cognitive scientists have added the idea of probability into the mix, so that we can describe an unconscious mind, and design a computer, that can perform feats of inductive as well as deductive inference. Using this sort of probabilistic logic, a system can accurately learn about the world in a gradual, probabilistic way, raising the probability of some hypotheses and lowering that of others, and revising hypotheses in the light of new evidence. This work relies on a kind of reverse engineering. First, work out how any rational system could best infer the truth from the evidence it has. Often enough, it will turn out that the unconscious human mind does just that.
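A minimal sketch of that kind of gradual, probabilistic updating (the hypotheses and numbers are invented for illustration): each new piece of evidence raises the probability of some hypotheses and lowers that of others.

```python
# Sketch of gradual belief revision (Bayesian updating) over two invented
# hypotheses about a coin, given a run of observed flips.

beliefs = {"fair coin": 0.5, "two-headed coin": 0.5}     # prior probabilities
likelihood_heads = {"fair coin": 0.5, "two-headed coin": 1.0}

for flip in ["heads", "heads", "heads"]:                  # observed evidence
    for h in beliefs:                                     # weight by likelihood
        beliefs[h] *= likelihood_heads[h] if flip == "heads" else 1 - likelihood_heads[h]
    total = sum(beliefs.values())
    beliefs = {h: p / total for h, p in beliefs.items()}  # renormalize
    print(flip, {h: round(p, 3) for h, p in beliefs.items()})
```

After three heads the "two-headed coin" hypothesis dominates, but never with certainty — exactly the gradual, revisable character of the inference described above.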

Some of the greatest advances in cognitive science have been the result of this strategy. But they've been largely invisible in popular culture, which has been understandably preoccupied with the sex and violence of much evolutionary psychology (like Freud, it makes for a better movie). Vision science studies how we are able to transform the chaos of stimulation at our retinas into a coherent and accurate perception of the outside world. It is, arguably, the most scientifically successful branch of both cognitive science and neuroscience. It takes off from the idea that our visual system is, entirely unconsciously, making rational inferences from retinal data to figure out what objects are like. Vision scientists began by figuring out the best way to solve the problem of vision, and then discovered, in detail, just how the brain performs those computations.

The idea of the rational unconscious has also transformed our scientific understanding of creatures who have traditionally been denied rationality, such as young children and animals. It should transform our everyday understanding too. The Freudian picture identifies infants with that fantasizing, irrational unconscious, and even on the classic Piagetian view young children are profoundly illogical. But contemporary research shows the enormous gap between what young children say, and presumably what they experience, and their spectacularly accurate if unconscious feats of learning, induction and reasoning. The rational unconscious gives us a way of understanding how babies can learn so much when they consciously seem to understand so little.

Another way the rational unconscious could inform everyday thinking is by acting as a bridge between conscious experience and the few pounds of grey goo in our skulls. The gap between our experience and our brains is so great that people ping-pong between amazement and incredulity at every study that shows that knowledge or love or goodness is "really in the brain" (though where else would it be?). There is important work linking the rational unconscious to both conscious experience and neurology.

Intuitively, we feel that we know our own minds — that our conscious experience is a direct reflection of what goes on underneath. But much of the most interesting work in social and cognitive psychology demonstrates the gulf between our rationally unconscious minds and our conscious experience. Our conscious understanding of probability, for example, is truly awful, in spite of the fact that we unconsciously make subtle probabilistic judgments all the time. The scientific study of consciousness has made us realize just how complex, unpredictable and subtle the relation is between our minds and our experience.

At the same time, to be genuinely explanatory neuroscience has to go beyond "the new phrenology" of simply locating psychological functions in particular brain regions. The rational unconscious lets us understand the how and why of the brain and not just the where. Again, vision science has led the way, with elegant empirical studies showing just how specific networks of neurons can act as computers rationally solving the problem of vision.

Of course, the rational unconscious has its limits. Visual illusions demonstrate that our brilliantly accurate visual system does sometimes get it wrong. Conscious reflection may be misleading sometimes, but it can also provide cognitive prostheses, the intellectual equivalent of glasses with corrective lenses, to help compensate for the limitations of the rational unconscious. The institutions of science do just that.

The greatest advantage of understanding the rational unconscious would be to demonstrate that rational discovery isn't a specialized abstruse privilege of the few we call "scientists", but is instead the evolutionary birthright of us all. Really tapping into our inner vision and inner child might not make us happier or more well-adjusted, but it might make us appreciate just how smart we really are.


NICHOLAS A. CHRISTAKIS
Physician and Social Scientist, Harvard University; Coauthor, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

Holism

Some people like to build sand castles, and some like to tear them apart. There can be much joy in the latter. But it is the former that interests me. You can take a bunch of minute silica crystals, pounded for thousands of years by the waves, use your hands, and make an ornate tower. Tiny physical forces govern how each particle interacts with its neighbors, keeping the castle together, at least until the force majeure of a foot appears.

But, having built the castle, this is the part that I like the most: you step back and look at it. Across the expanse of beach, here is something new, something not present before among the endless sand grains, something risen from the ground, something that reflects the scientific principle of holism.

Holism is colloquially summarized as "the whole is greater than the sum of its parts." What is interesting to me, however, are not the artificial instantiations of this principle — when we deliberately form sand into ornate castles or metal into airborne planes or ourselves into corporations — but rather the natural instantiations. The examples are widespread and stunning. Perhaps the most impressive one is that carbon, hydrogen, oxygen, nitrogen, sulfur, phosphorus, iron, and a few other elements, when mixed in just the right way, yield life. And life has emergent properties not present in — nor predictable from — these constituent parts. There is a kind of awesome synergy between the parts.

Hence, I think that the scientific concept that would improve everyone's cognitive toolkit is holism: the abiding recognition that wholes have properties not present in the parts and not reducible to the study of the parts.

For example, carbon atoms have particular, knowable physical and chemical properties. But the atoms can be combined in different ways to make, say, graphite or diamond. The properties of those substances — properties such as darkness and softness and clearness and hardness — are not properties of the carbon atoms, but rather properties of the collection of carbon atoms. Moreover, which particular properties the collection of atoms has depends entirely on how they are assembled — into sheets or pyramids. The properties arise because of the connections between the parts. I think grasping this insight is crucial for a proper scientific perspective on the world. You could know everything about isolated neurons and not be able to say how memory works, or where desire originates.

It is also the case that the whole has a complexity that rises faster than the number of its parts. Consider social networks as a simple illustration. If we have ten people in a group, there are a maximum of 10x9/2=45 possible connections between them. If we increase the number of people to 1,000, the number of possible ties increases to 1,000x999/2=499,500. So, while the number of people has increased by 100-fold (from 10 to 1,000), the number of possible ties (and hence, this one measure of the complexity of the system), has increased by over 10,000-fold.
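The arithmetic is just the formula for the number of pairs, n(n-1)/2; a one-line function reproduces the essay's figures:

```python
def possible_ties(n):
    """Maximum number of pairwise connections among n people: n(n-1)/2."""
    return n * (n - 1) // 2

print(possible_ties(10))                          # 45
print(possible_ties(1_000))                       # 499500
print(possible_ties(1_000) / possible_ties(10))   # ~11100: over 10,000-fold
```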

Holism does not come naturally. It is an appreciation not of the simple, but of the complex, or at least of the simplicity and coherence in complex things. Moreover, unlike curiosity or empiricism, say, holism takes a while to acquire and to appreciate. It is a very grown-up disposition. Indeed, for the last few centuries, the Cartesian project in science has been to break matter down into ever smaller bits, in the pursuit of understanding. And this works, to some extent. We can understand matter by breaking it down to atoms, then protons and electrons and neutrons, then quarks, then gluons, and so on. We can understand organisms by breaking them down into organs, then tissues, then cells, then organelles, then proteins, then DNA, and so on.

But putting things back together in order to understand them is harder, and typically comes later in the development of a scientist or in the development of science. Think of the difficulties in understanding how all the cells in our bodies work together, as compared with the study of the cells themselves. Whole new fields of neuroscience and systems biology and network science are arising to accomplish just this. And these fields are arising just now, after centuries of stomping on castles in order to figure them out.


WILLIAM CALVIN
Neuroscientist, Professor Emeritus, University of Washington in Seattle; Author, Global Fever: How to Treat Climate Change

Find That Frame

An automatic stage of "Compare and contrast" would improve most cognitive functions, not just the grade on an essay. You set up a comparison — say, that the interwoven melodies of Rock 'n' Roll are like the way you must twist when dancing on a boat whose bow is rocking up and down in a different rhythm than the deck is rolling from side to side.

Comparison is an important part of trying ideas on for size, for finding related memories, and exercising constructive skepticism. Without it, you can become trapped in someone else's framing of a problem. You often need to know where someone is coming from — and while Compare 'n' Contrast is your best friend, you may also need to search for the cognitive framing. What has been cropped out of the frame can lead the unwary to an incorrect inference, as when they assume that what is left out is unimportant. For example, "We should reach a 2°C (3.6°F) fever in the year 2049" always makes me want to interject "Unless another abrupt climate shift gets us there next year."

Global warming's ramp up in temperature is the aspect of climate change that climate scientists can currently calculate — that's where they are coming from. And while this can produce really important insights — even big emission reductions only delay the 2°C fever for 19 years — it leaves out all of those abrupt climate shifts observed since 1976, as when the world's drought acreage doubled in 1982 and jumped from double to triple in 1997, then back to double in 2005. That's like stairs, not a ramp.

Even if we thoroughly understood the mechanism for an abrupt climate shift — likely a rearrangement of the winds that produce Deluge 'n' Drought by delivering ocean moisture elsewhere, though burning down the Amazon rain forest should also trigger a big one — chaos theory's "butterfly effect" says we still could not predict when a big shift will occur or what size it would be. That makes a climate surprise like a heart attack. You can't predict when. You can't say whether it will be minor or catastrophic. But you can often prevent it — in the case of climate, by cleaning up the excess CO2.

Drawing down the CO2 is also typically excluded from the current climate framing. Mere emissions reduction now resembles locking the barn door after the horse is gone — worthwhile, but not exactly recovery either. Politicians usually love locking barn doors as it gives the appearance of taking action cheaply. Emissions reduction only slows the rate at which things get worse, as the CO2 accumulation still keeps growing. (People confuse annual emissions with the accumulation that causes the trouble.) On the other hand, cleaning up the CO2 actually cools things, reverses ocean acidification, and even reverses the thermal expansion portion of rising sea level.

Recently I heard a biologist complaining about models for insect social behavior: "All of the difficult stuff is not mentioned. Only the easy stuff is calculated." Scientists first do what they already know how to do. But their quantitative results are no substitute for a full qualitative account. When something is left out because it is computationally intractable (sudden shifts) or would just be a guess (cleanup), they often don't bother to mention it at all. "Everybody [in our field] knows that" just won't do when people outside the field are hanging on your every word.

So find that frame and ask about what was left out. Like abrupt climate shifts or a CO2 cleanup, it may be the most important consideration of all.


LAWRENCE KRAUSS
Physicist, Foundation Professor & Director, Origins Project, Arizona State University; Author, A Universe from Nothing; Quantum Man: Richard Feynman's Life in Science

Uncertainty

The notion of uncertainty is perhaps the least well understood concept in science. In the public parlance, uncertainty is a bad thing, implying a lack of rigor and predictability. The fact that global warming estimates are uncertain, for example, has been used by many to argue against any action at the present time.

In fact, however, uncertainty is a central component of what makes science successful. Being able to quantify uncertainty, and incorporate it into models, is what makes science quantitative, rather than qualitative. Indeed, no number, no measurement, no observable in science is exact. Quoting numbers without attaching an uncertainty to them implies they have, in essence, no meaning.

One of the things that makes uncertainty difficult for members of the public to appreciate is that the significance of uncertainty is relative. Take, for example, the distance between the earth and sun, 1.49597 × 10⁸ km. This seems relatively precise; after all, using six significant digits means I know the distance to an accuracy of one part in a million or so. However, if the next digit is uncertain, that means the uncertainty in knowing the precise earth-sun distance is larger than the distance between New York and Chicago!
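A back-of-the-envelope sketch (my own illustration, not the author's calculation) of what such relative uncertainties mean in absolute kilometres:

```python
# Sketch: converting relative uncertainty in the Earth-Sun distance into
# absolute kilometres. The uncertainty levels shown are assumptions chosen
# for illustration.

distance_km = 1.49597e8                     # quoted to six significant figures

one_part_per_million = distance_km * 1e-6   # roughly 150 km
one_unit_last_digit = 0.00001e8             # a change of 1 in the sixth figure: 1,000 km

print(f"one part in a million of the distance: ~{one_part_per_million:.0f} km")
print(f"one unit in the last quoted digit:     ~{one_unit_last_digit:.0f} km")
# Whether hundreds of kilometres of uncertainty matters depends entirely on
# the task: negligible for predicting sunrise, serious for navigating a probe.
```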

Whether or not the quoted number is 'precise' therefore depends upon what I am intending to do with it. If I only care about what minute the Sun will rise tomorrow then the number quoted above is fine. If I want to send a satellite to orbit just above the Sun, however, then I would need to know distances more accurately.

This is why uncertainty is so important. Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So too in the public sphere. Public policy made in the absence of quantified uncertainties, or without an appreciation of how difficult reliable estimates of those uncertainties are to obtain, usually means bad public policy.


THOMAS METZINGER
Philosopher, Johannes Gutenberg-Universität Mainz and Frankfurt Institute for Advanced Studies; Author, The Ego Tunnel

Phenomenally Transparent Self-Model

A self-model is the inner representation some information-processing systems have of themselves as a whole. A representation is phenomenally transparent if it (a) is conscious and (b) cannot be experienced as a representation. Therefore, transparent representations create the phenomenology of naïve realism, the robust and irrevocable sense that you are directly and immediately perceiving something which must be real. Now apply the second concept to the first: a "transparent self-model" necessarily creates the realistic conscious experience of selfhood, of being directly and immediately in touch with oneself as a whole.

This concept is important, because it shows how, in a certain class of information-processing systems, the robust phenomenology of being a self would inevitably appear — although they never were, or had, anything like a self. It is empirically plausible that we might just be such systems.


LEE SMOLIN
Physicist, Perimeter Institute; Author, The Trouble With Physics

Thinking In Time Versus Thinking Outside Of Time

One very old and pervasive habit of thought is to imagine that the true answer to whatever question we are wondering about lies out there in some eternal domain of "timeless truths." The aim of re-search is then to "discover" the answer or solution in that already existing timeless domain. For example, physicists often speak as if the final theory of everything already exists in a vast timeless Platonic space of mathematical objects. This is thinking outside of time.

Scientists are thinking in time when we conceive of our task as the invention of genuinely novel ideas to describe newly discovered phenomena, and novel mathematical structures to express them. If we think outside of time, we believe these ideas somehow "existed" before we invented them. If we think in time we see no reason to presume that.

The contrast between thinking in time and thinking outside of time can be seen in many domains of human thought and action. We are thinking outside of time when, faced with a technological or social problem to solve, we assume the possible approaches are already determined by a set of absolute pre-existing categories. We are thinking in time when we understand that progress in technology, society and science happens by the invention of genuinely novel ideas, strategies, and novel forms of social organization.

The idea that truth is timeless and resides outside the universe was the essence of Plato's philosophy, exemplified in the parable of the slave boy that was meant to argue that discovery is merely remembering. This is reflected in the philosophy of mathematics called Platonism, which is the belief that there are two ways of existing. Regular physical things exist in the universe and are subject to time and change, while mathematical objects exist in a timeless realm. The division of the world into a time-drenched Earthly realm of life, death, change and decay, surrounded by a heavenly sphere of perfect eternal truth, framed both ancient science and Christian religion.

If we imagine that the task of physics is the discovery of a timeless mathematical object that is isomorphic to the history of the world, then we imagine that the truth to the universe lies outside the universe. This is such a familiar habit of thought that we fail to see its absurdity: if the universe is all that exists then how can something exist outside of it for it to be isomorphic to?

On the other hand, if we take the reality of time as evident, then there can be no mathematical object that is perfectly isomorphic to the world, because one property of the real world that is not shared by any mathematical object is that it is always some moment. Indeed, as Charles Sanders Peirce first observed, the hypothesis that the laws of physics evolved through the history of the world is necessary if we are to have a rational understanding of why one particular set of laws holds, rather than others.

Thinking outside of time often implies the existence of an imagined realm outside the universe where the truth lies. This is a religious idea, because it means that explanations and justifications ultimately refer to something outside of the world we experience ourselves to be a part of. If we insist there is nothing outside the universe, not even abstract ideas or mathematical objects, we are forced to find the causes of phenomena entirely within our universe. So thinking in time is also thinking within the one universe of phenomena our observations show us to inhabit.

Among contemporary cosmologists and physicists, proponents of eternal inflation and timeless quantum cosmology are thinking outside of time. Proponents of evolutionary and cyclic cosmological scenarios are thinking in time. If you think in time you worry about time ending at space-time singularities. If you think outside of time this is an ignorable problem because you believe reality is the whole history of the world at once.

Darwinian evolutionary biology is the prototype for thinking in time because at its heart is the realization that natural processes developing in time can lead to the creation of genuinely novel structures. Even novel laws can emerge when the structures to which they apply come to exist. Evolutionary dynamics has no need of abstract and vast spaces like all the possible viable animals, DNA sequences, sets of proteins, or biological laws. Exaptations are too unpredictable and too dependent on the whole suite of living creatures to be analyzed and coded into properties of DNA sequences. Better, as Stuart Kauffman proposes, to think of evolutionary dynamics as the exploration, in time, by the biosphere, of the adjacent possible.

The same goes for the evolution of technologies, economies and societies. The poverty of the conception that economic markets tend to unique equilibria, independent of their histories, shows the danger of thinking outside of time. Meanwhile the path dependence that Brian Arthur and others show is necessary to understand real markets illustrates the kind of insights that are gotten by thinking in time.

Thinking in time is not relativism; it is a form of relationalism. Truth can be both time-bound and objective, when it is about objects that only exist once they are invented, by evolution or human thought.

When we think in time we recognize the human capacity to invent genuinely novel constructions and solutions to problems. When we think about the organizations and societies we live and work in outside of time we unquestioningly accept their strictures, and seek to manipulate the levers of bureaucracy as if they had an absolute reason to be there. When we think about organizations in time we recognize that every feature of them is a result of their history and everything about them is negotiable and subject to improvement by the invention of novel ways of doing things.


RICHARD FOREMAN
Playwright & Director; Founder, The Ontological-Hysteric Theater

Negative Capability Is A Profound Therapy

Mistakes, errors, false starts — accept them all. The basis of creativity.

My reference point (as a playwright, not a scientist) was Keats's notion of negative capability (from his letters). Being able to exist with lucidity and calm amidst uncertainty, mystery and doubt, without "irritable (and always premature) reaching out" after fact and reason.

This toolkit notion of negative capability is a profound therapy for all manner of ills — intellectual, psychological, spiritual and political. I reflect it (amplify it) with Emerson's notion that "Art (any intellectual activity?) is (best thought of as but) the path of the creator to his work."

Bumpy, twisting roads. (New York City is about to repave my cobblestone street with smooth asphalt. Evil bureaucrats and tunnel-visioned 'scientists' — fast cars and more tacky up-scale stores in Soho.)

Wow! I'll bet my contribution is shorter than anyone else's. Is this my inadequacy or an important toolkit item heretofore overlooked?

