



2008

"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"

LINDA STONE
Former VP, Microsoft; Co-Founder and Director, Microsoft's Virtual Worlds Group/Social Computing Group

Breathtaking New Technologies

In the past few years, I have been thinking and writing about "attention", and specifically, "continuous partial attention". The impetus came from my years of working at Apple and then Microsoft, where I thought a lot about user interface as well as our relationship to the tools we create.

I believe that attention is the most powerful tool of the human spirit and that we can enhance or augment our attention with practices like meditation and exercise, diffuse it with technologies like email and Blackberries, or alter it with pharmaceuticals. 

But lately I have observed that the way in which many of us interact with our personal technologies makes it impossible to use this extraordinary tool of attention to our advantage.

In observing others — in their offices, their homes, at cafes — I have found that the vast majority of people hold their breath, especially when they first begin responding to email. On cell phones, especially when talking and walking, people tend to hyperventilate or over-breathe. Either of these breathing patterns disturbs the oxygen and CO2 balance.

Research conducted by two NIH scientists, Margaret Chesney and David Anderson, demonstrates that breath holding can contribute significantly to stress-related diseases. The body becomes acidic, the kidneys begin to re-absorb sodium, and as the oxygen and CO2 balance is undermined, our biochemistry is thrown off.

Around this same time, I became very interested in the vagus nerve and the role it plays. The vagus nerve is one of the major cranial nerves; it wanders from the head to the neck, chest, and abdomen. Its primary job is to mediate the autonomic nervous system, which comprises the sympathetic ("fight or flight") and parasympathetic ("rest and digest") nervous systems.

The parasympathetic nervous system governs our sense of hunger and satiety, the flow of saliva and digestive enzymes, the relaxation response, and many aspects of healthy organ function. Focusing on diaphragmatic breathing enables us to down-regulate the sympathetic nervous system, which then allows the parasympathetic nervous system to become dominant. Shallow breathing, breath holding, and hyperventilating trigger the sympathetic nervous system in a "fight or flight" response.

The activated sympathetic nervous system causes the liver to dump glucose and cholesterol into our blood, our heart rate increases, we lose our sense of satiety, and our bodies anticipate and marshal resources for the physical activity that, historically, accompanied a fight or flight response. Meanwhile, when the only physical activity is sitting and responding to email, we're sort of "all dressed up with nowhere to go."

Some breathing patterns favor our body's move toward parasympathetic functions, and other breathing patterns favor a sympathetic nervous system response. Buteyko breathing (techniques developed by a Russian M.D.), Andrew Weil's breathing exercises, diaphragmatic breathing, and certain yoga breathing techniques all have the potential to soothe us, and to help our bodies differentiate between when fight or flight is really necessary and when we can rest and digest.

I've changed my mind about how much attention to pay to my breathing patterns and how important it is to remember to breathe when I'm using a computer, PDA or cell phone. 

I've discovered that the more consistently I tune in to healthy breathing patterns, the clearer it is to me when I'm hungry or not, the more easily I fall asleep and rest peacefully at night, and the more my outlook is consistently positive. 

I've come to believe that, within the next 5-7 years, breathing exercises will be a significant part of any fitness regime.   


STANISLAS DEHAENE
Cognitive Neuropsychology Researcher, Institut National de la Santé, Paris; Author, The Number Sense

The Brain's Schrödinger Equation

What made me change my mind isn't a new fact, but a new theory.

Although a large part of my work is dedicated to modelling the brain, I always thought that this enterprise would remain rather limited in scope. Unlike physics, neuroscience would never create a single, major, simple yet encompassing theory of how the brain works. There would never be a single "Schrödinger's equation for the brain".

The vast majority of neuroscientists, I believe, share this pessimistic view. The reason is simple: the brain is the outcome of five hundred million years of tinkering. It consists of millions of distinct pieces, each evolved to solve a distinct problem important to our survival. Its overall properties result from an unlikely combination of thousands of receptor types, ad-hoc molecular mechanisms, a great variety of categories of neurons and, above all, a million billion connections criss-crossing the white matter in all directions. How could such a jumble be captured by a single mathematical law?

Well, I wouldn't claim that anyone has achieved that yet… but I have changed my mind about the very possibility that such a law might exist.

For many theoretical neuroscientists, it all started twenty-five years ago, when John Hopfield made us realize that a network of neurons could operate as an attractor network, driven to optimize an overall energy function which could be designed to accomplish object recognition or memory completion. Then came Geoff Hinton's Boltzmann machine — again, the brain was seen as an optimizing machine that could solve complex probabilistic inferences. Yet both proposals were frameworks rather than laws. Each individual network realization still required the set-up of thousands of ad-hoc connection weights.

Very recently, however, Karl Friston, from UCL in London, has published two extraordinarily ambitious and demanding papers in which he presents "a theory of cortical responses". Friston's theory rests on a single, amazingly compact premise: the brain optimizes a free-energy function. This function measures how closely the brain's internal representation of the world approximates the true state of the real world. From this simple postulate, Friston spins off an enormous variety of predictions: the multiple layers of cortex, the hierarchical organization of cortical areas, their reciprocal connection with distinct feedforward and feedback properties, the existence of adaptation and repetition suppression… even the type of learning rule — Hebb's rule, or the more sophisticated spike-timing dependent plasticity — can be deduced, no longer postulated, from this single overarching law.

The theory fits easily within what has become a major area of research — the Bayesian Brain, or the extent to which brains perform optimal inferences and take optimal decisions based on the rules of probabilistic logic. Alex Pouget, for instance, recently showed how neurons might encode probability distributions of parameters of the outside world, a mechanism that could be usefully harnessed by Fristonian optimization. And the physiologist Mike Shadlen has discovered that some neurons closely approximate the log-likelihood ratio in favor of a motor decision, a key element of Bayesian decision making. My colleagues and I have shown that the resulting random-walk decision process nicely accounts for the duration of a central decision stage, present in all human cognitive tasks, which might correspond to the slow, serial phase in which we consciously commit to a single decision. During non-conscious processing, my proposal is that we also perform Bayesian accumulation of evidence, but without attaining the final commitment stage. Thus, Bayesian theory is bringing us increasingly closer to the holy grail of neuroscience — a theory of consciousness.
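
To see the flavor of that random-walk decision process, here is a minimal sketch in Python. It illustrates sequential accumulation of log-likelihood evidence in general; the binary observations, probabilities, and bound are invented for the example, and it is not a reproduction of Shadlen's data or of any specific published model:

    import math
    import random

    def random_walk_decision(p_signal=0.6, bound=3.0, seed=1):
        """Accumulate the log-likelihood ratio for hypothesis A over B,
        one noisy observation at a time, until a decision bound is crossed."""
        rng = random.Random(seed)
        # Under A, each binary observation is 1 with probability p_signal;
        # under B, with probability 1 - p_signal. The world here is really A.
        step = math.log(p_signal / (1.0 - p_signal))
        evidence, n_steps = 0.0, 0
        while abs(evidence) < bound:
            obs = 1 if rng.random() < p_signal else 0
            evidence += step if obs == 1 else -step
            n_steps += 1
        return ("A" if evidence > 0 else "B"), n_steps

    choice, duration = random_walk_decision()
    print(choice, duration)  # the committed decision and how long it took

The number of steps needed to reach the bound behaves like the duration of that slow, serial decision stage: weaker evidence or a higher bound for commitment makes the walk, and hence the decision, take longer.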

Another reason why I am excited about Friston's law is, paradoxically, that it isn't simple. It seems to have just the right level of distance from the raw facts. Much as Schrödinger's equation cannot easily be turned into specific predictions, even for an object as simple as a single hydrogen atom, Friston's theory requires heavy mathematical derivations before it ultimately provides useful outcomes. Not that it is inapplicable. On the contrary, it readily applies to motion perception, audio-visual integration, mirror neurons, and thousands of other domains — but in each case, a rather involved calculation is needed.

It will take us years to decide whether Friston's theory is the true inheritor of Helmholtz's view of "perception as inference". What is certain, however, is that neuroscience now has a wealth of beautiful theories that should attract the attention of top-notch mathematicians — we will need them!


MARY CATHERINE BATESON
Cultural Anthropologist; President, Institute for Intercultural Studies; Author, Willing to Learn: Passages of Personal Discovery

Making and Changing Minds

We do not so much change our minds about facts, although we necessarily correct and rearrange them in changing contexts. But we do change our minds about the significance of those facts.

I can remember, as a young woman, first grasping the danger of environmental destruction at a conference in 1968. The context was the intricate interconnection within all living systems, a concept that applied to ecosystems like forests and tide pools and equally well to human communities and to the planet as a whole, the sense of an extraordinary interweaving of life, beautiful and fragile, and threatened by human hubris. It was at that conference also that I first heard of the greenhouse effect, the mechanism that underlies global warming. A few years later, however, I heard of the Gaia Hypothesis (put forward by James Lovelock in 1970), which proposed that the same systemic interconnectivity gives the planet its resilience and a capacity for self-correction that might survive human tampering. Some environmentalists welcomed the Gaia hypothesis, while others warned that it might lead to complacency in the face of real and present danger. With each passing year, our knowledge of how things are connected is enriched, but the significance of these observations is still debated.

J.B.S. Haldane was asked once what the natural world suggested about the mind of its Creator, and he replied "an inordinate fondness for beetles."  This observation also plays differently for different listeners — a delight in diversity, perhaps, as if the Creator might have spent the first sabbath afternoon resting from his work by playfully exploring the possible ramifications of a single idea (beetles make up roughly one fifth of all known species on the planet, some 350,000 of them) — or a humbling (or humiliating?) lack of preoccupation with our own unique kind, which might prove to be a temporary afterthought, survived only by cockroaches.

These two ways of looking at what we observe seem to recur, like the glass half full and the glass half empty.  The more we know of the detail of living systems, the more we seem torn between anxiety and denial on the one hand and wonder and delight on the other as we try to understand the significance of our knowledge.  Science has radically altered our awareness of the scale and age of the universe, but this changing awareness seems to stimulate humility in some — our planet a tiny speck dominated by flea-like bipeds — and a sort of megalomania in others who see all of this as directed toward us, our species, as its predestined masters.  Similarly, the exploration of human diversity in the twentieth century expanded for some the sense of plasticity and variability and for others reinforced the sense of human unity.  Even within these divergent emphases, for some the recognition of human unity includes a capacity for mutual recognition and adaptation while for others it suggests innate tendencies toward violence and xenophobia.   As we have slowly explored the mechanisms of memory and learning, we have seen examples of (fragile) human communities demoralized by exposure to other cultures and (resilient) examples of extraordinary adaptability. At one moment humans are depicted as potential stewards of the biosphere, at another as a cancer or a dangerous infestation.  The growing awareness of a shared and interconnected destiny has a shadow side, the version of globalization that looks primarily for profit.

We are having much the same sort of debate at present between those who see religion primarily as a source of conflict between groups and others who see the world's religions as potentially convergent systems that have knit peoples together and laid the groundwork for contemporary ideas of human rights and civil society.  Some believers feel called to treasure and respect the creation, including the many human cultures that have grown within it, while others regard differences of belief as sinful and the world we know as transitory or illusory.  Each of the great religions, with different language and different emphases, offers the basis for environmental responsibility and for peaceful coexistence and compassion, but believers differ in what they choose to emphasize, all too many choosing the apocalyptic over the ethical texts.  Nevertheless, major shifts have been occurring in the interpretation of information about climate change, most recently within the evangelical Christian community.   

My guess is that many people have tilted first one way and then the other over the past fifty years, as we have become increasingly aware of diverse understandings — surprised by accounts of human creativity and adaptation on the one hand, and distressed at the resurgence of ancient quarrels and loss of tolerance and mutual respect.  Some people are growing away from irresponsible consumerism while others are having their first taste of affluence.  Responses are probably partly based on temperament — generalized optimism vs. pessimism — so the tension will not be resolved by scientific findings.  But these responses are also based on the decisions we make, on making up our minds about which interpretations we choose to believe.  The world's historic religions deal in different ways with loss and the need for sacrifice, but the materials are there for working together, just as they are there for stoking conflict and competition.  We are most likely to survive this century if we decide to approach the choices and potential losses ahead with an awareness of the risks we face but at the same time with an awareness of the natural wonders around us and a determination to deal with each other with respect and faith in the possibility of cooperation and responsibility. 

WILLIAM CALVIN
Professor, The University of Washington School of Medicine; Author, A Brain For All Seasons

Greenland changed my mind

Back in 1968, when I first heard about global warming while visiting the Scripps Institution of Oceanography, almost everyone thought that serious problems were several centuries in the future. That's because no one realized how ravenous the world's appetite for coal and oil would become during a mere 40 years. They also thought that problems would develop slowly. Wrong again.

I tuned into abrupt climate change around 1984, when the Greenland ice cores showed big jumps in temperature and snowfall, stepping up and down in a mere decade but lasting centuries. I worried about global warming setting off another flip, but I still didn't revise my notions about a slow time scale for the present greenhouse warming.

Greenland changed my mind. Around 2004, the speedup of the Greenland glaciers made a lot of climate scientists revise their notions about how fast things were changing. When the summer earthquakes associated with glacial movement doubled and then redoubled in a mere ten years, it made me feel as if I were standing on shaky ground, that bigger things could happen at any time.

Then I saw the data on major floods and fires, steep increases every decade since 1950 and on all continents. That's not trouble moving around. It is called global climate change. It may not be abrupt, but it's been fast.

For drought, which had been averaging about 15 percent of the world's land surface at any one time, there was a step up to a new baseline of 25 percent which occurred with the 1982 El Niño. That's not gradual change but an abrupt shift to a new global climate.

But the most sobering realization came when I was going through the Amazon drought data on the big El Niños of 1972, 1982, and 1997. Ten years ago, we nearly lost two of the world's three major tropical rain forests to fires. If that mega Niño had lasted two years instead of one, we could have seen the atmosphere's excess CO2 rise 40 percent over a few years — and likely an even bigger increase in our climate troubles. Furthermore, with all of those green leaves no longer removing CO2 from the air, the annual bump up of CO2 concentration would have become half again as large. That's like the movie shifting into fast forward.

And we're not even back-paddling as fast as we can, just drifting toward the falls. If I were a student or young professional, seeing my future being trashed, I'd be mad as hell. And hell is a pretty good metaphor for where we are heading if we don't get our act together. Quickly.


CAROLYN PORCO
Planetary Scientist; Cassini Imaging Science Team Leader; Director, CICLOPS, Boulder, CO; Adjunct Professor, University of Colorado

I've changed my mind about the manner in which our future on this planet might evolve.

I used to think that the power of science to dissect, inform, illuminate and clarify, its venerable record in improving the human condition, and its role in enabling the technological progress of the modern world were all so glaringly obvious that no one could reasonably question its hallowed position in human culture as the pre-eminent device for separating truth from falsehood.

I used to think that the edifice of knowledge constructed from thousands of years of scientific thought by various cultures all over the globe, and in particular the insights earned over the last 400 years from modern scientific methods, were so universally revered that we could feel comfortably assured of having permanently left our philistine days behind us.

And while I've always appreciated the need for care and perseverance in guiding public evaluation of the complexities of scientific discourse and its findings, I never expected that we would, at this stage in our development, have to justify and defend the scientific process itself.

Yet, that appears to be the case today. And now, I'm no longer sure that scientific inquiry and the cultural value it places on verifiable truth can survive without constant protection, and its ebb and flow over the course of human history affirms this. We have been beset in the past by dark ages, when scientific truths and the ideas that logically spring from them were systematically destroyed or made otherwise unavailable, when the practitioners of science were discredited, imprisoned, and even murdered. Periods of human enlightenment have been the exception throughout time, not the rule, and our language has acknowledged this: 'Two steps forward, one step back' neatly outlines the nonmonotonic stagger inherent in any reading of human history.

And, if we're not mindful, we could stagger again. When the truth becomes problematic, when intellectual honesty clashes with political expediency, when voices of reason are silenced to a mere whisper, when fear alloys with ignorance to promote might over intelligence, integrity, and wisdom, the very practice of science can find itself imperiled. At that point, can darkness be far behind?

To avoid so dangerous a tipping point requires us, first and foremost, to recognize the distasteful possibility that it could happen again, at any time. I now suspect the danger will be forever present, the need for vigilance forever great.


BRIAN GOODWIN
Biologist, Schumacher College, Devon, UK; Author, How The Leopard Changed Its Spots

The Mechanical Worldview

I have changed my mind about the general validity of the mechanical worldview that underlies the modern scientific understanding of natural processes. Trained in biology and mathematics, I have used the scientific approach to the explanation of natural phenomena during most of my career. The basic assumption is that whatever properties and behaviours have emerged naturally during cosmic evolution can all be understood in terms of the motions and interactions of inanimate entities such as elementary particles, atoms, molecules, membranes and organelles, cells, organs, organisms, and so on.

Modelling natural processes on the basis of these assumptions has provided explanations for myriad natural phenomena, ranging from planetary motion and electromagnetic phenomena to the properties and behaviour of nerve cells and the dynamic patterns that emerge in ant colonies or flocks of birds. There appeared to be no limit to the power of this explanatory procedure, which enchanted me and kept me busy throughout most of my scientific career in biology.

However, I have now come to the conclusion that this method of explaining natural phenomena has serious limitations, and that these come from the basic assumptions on which it is based. The crunch came for me with the "explanation" of qualitative experience in humans and other organisms. By this I mean the experience of pain or pleasure or wellbeing, or any other of the qualities that are very familiar to us.

These are described as "subjective", that is, experienced by a living organism, because they cannot be isolated from the subject experiencing them and measured quantitatively. What is often suggested as an explanation of this is evolutionary complexity: when an organism has a nervous system of sufficient complexity, subjective experience and feelings can arise. This implies that something totally new and qualitatively different can emerge from the interaction of "dead", unfeeling components such as cell membranes, molecules and electrical currents.

But this implies getting something from nothing, which violates what I have learned about emergent properties: there is always a precursor property for any phenomenon, and you cannot just introduce a new dimension into the phase space of your model to explain the result. Qualities are different from quantities and cannot be reduced to them.

So what is the precursor of the subjective experience that evolves in organisms? There must be some property of the neurones, membranes, or charged ions producing the electrical activity that is associated with the experience of feeling that emerges in the organism.

One possibility is to acknowledge that the world isn't what modern science assumes it to be, mechanical and "dead", but that everything has some basic properties relating to experience or feeling. Philosophers and scientists have been down this route before, and have called this pan-sentience or pan-psychism: the world is impregnated with some form of feeling in every one of its constituents. This makes it possible for complex organised beings such as organisms to evolve feelings, and for qualities to be as real as quantities.

Pan-sentience shifts science into radically new territory. Science can now be about qualities as well as quantities, helping us to recover quality of life, to heal our relationship to the natural world, and to undo the damage we are causing to the earth's capacity to continue its evolution with us. It could help us to recover our place as participants in a world that is not ours to control, but is ours to contribute to creatively, along with all the other diverse members of our living, feeling, planetary society.

LISA RANDALL
Physicist, Harvard University; Author, Warped Passages

When I first heard about the solar neutrino puzzle, I had a little trouble taking it seriously. We know that the sun is powered by a chain of nuclear reactions and that, in addition to emitting energy, these reactions lead to the emission of neutrinos (uncharged fundamental particles that interact only via the weak nuclear force). The original solar neutrino puzzle was that when physicists performed experiments to find these neutrinos, none were detected. But by the time I learned about the puzzle, physicists had in fact observed solar neutrinos — it was just that the amount they found was only about 1/3 to 1/2 of the amount that other physicists had predicted. But I was skeptical that this deficit was really a problem — how could we make such an accurate prediction about the sun, an object 93 million miles away about which we can measure only so much? To give one example, the prediction for the neutrino flux was strongly temperature-dependent. Did we really know the temperature sufficiently accurately? Were we sure we understood heat transport inside the sun well enough to trust this prediction?

But I ended up changing my mind (along with many other initially skeptical physicists). The solar neutrino puzzle turned out to be a clue to some very interesting physics. It turns out that neutrinos mix. Every neutrino is labeled by the charged lepton with which it interacts via the weak nuclear force. (Charged leptons are particles like electrons — there are two heavier versions known as muons and taus.) Neutrinos have a bit of an identity crisis and can convert into each other as they travel through the sun and as they make their way to Earth. An electron neutrino can change into a tau neutrino. Since detectors were looking only for electron neutrinos, they missed the ones that had converted. And that was the very elegant solution to the solar neutrino puzzle. The predictions based on what we knew about the Standard Model of particle physics (which tells us what the fundamental particles and forces are) had been correct — hence change of mind #1. But the prediction had been inaccurate because no one had yet measured the masses and mixing angles of neutrinos. Subsequent experiments have searched for all types of neutrinos — not just electron neutrinos — and found the different neutrino types, thereby confirming the mixing.
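
For the curious, the mechanism can be written down compactly. In the simplified two-flavor vacuum case (a standard textbook formula, not something quoted in the essay), the probability that a neutrino created as an electron neutrino has switched flavor after travelling a distance L with energy E is

    P(\nu_e \to \nu_x) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\, L}{4E}\right)

where θ is the mixing angle and Δm² is the difference of the squared neutrino masses. With a large mixing angle, and with the rapidly oscillating term averaging out over the many cycles between sun and Earth, the surviving electron-neutrino flux can easily drop to around half, the right ballpark for the observed deficit. (The full solar story also involves matter effects inside the sun, which this vacuum formula leaves out.)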

And that leads me to a second thing I changed my mind about (along with much of the particle physics community). These neutrino mixing angles turned out to be big. That is, a significant fraction of electron neutrinos turn into muon neutrinos, and a big fraction of muon neutrinos turn into tau neutrinos (here it was neutrinos in the atmosphere that had gone missing).  Few physicists had thought these mixing angles would be big. That is because similar angles in the quark sector (quarks are particles such as the up and down quarks inside protons and neutrons that interact via the strong nuclear force) are much smaller. Everyone based their guess on what was already known. These big neutrino mixing angles were a real surprise — perhaps the biggest surprise from particle physics measurements since I started studying the field.

Why are these angles important? First of all, neutrino mixing does in fact explain the missing neutrinos from the sun and from the atmosphere. But these angles are also an important clue as to the nature of the fundamental particles of which all known matter is made. One of the chief open questions about these particles is why there are three "copies" of the known particle types — that is, heavier versions with identical charges. Another is why these different versions have different masses. And a third question is why these particles mix in the way they have been measured to do. When we understand the answers to these questions, we will have much greater insight into the fundamental nature of all known matter. We don't know yet if we'll get the right answers, but these questions pose important challenges. And when we find the answers, it is likely at this point that neutrinos will provide a clue.


NICHOLAS CARR
Author, The Big Switch

The Radiant and Infectious Web

In January of 2007, China's president, Hu Jintao, gave a speech before a group of Communist Party officials. His subject was the Internet. "Strengthening network culture construction and management," he assured the assembled bureaucrats, "will help extend the battlefront of propaganda and ideological work. It is good for increasing the radiant power and infectiousness of socialist spiritual growth."

If I had read those words a few years earlier, they would have struck me as ludicrous. It seemed so obvious that the Internet stood in opposition to the kind of centralized power symbolized by China's regime. A vast array of autonomous nodes, not just decentralized but centerless, the Net was a technology of personal liberation, a force for freedom.

I now see that I was naive. Like many others, I mistakenly interpreted a technical structure as a metaphor for human liberty. In recent years, we have seen clear signs that while the Net may be a decentralized communications system, its technical and commercial workings actually promote the centralization of power and control. Look, for instance, at the growing concentration of web traffic. During the five years from 2002 through 2006, the number of Internet sites nearly doubled, yet the concentration of traffic at the ten most popular sites nonetheless grew substantially, from 31% to 40% of all page views, according to the research firm Compete.

Or look at how Google continues to expand its hegemony over web searching. In March 2006, the company's search engine was used to process a whopping 58% of all searches in the United States, according to Hitwise. By November 2007, the figure had increased yet again, to 65%. The results of searches are also becoming more, not less, homogeneous. Do a search for any common subject, and you're almost guaranteed to find Wikipedia at or near the top of the list of results. 

It's not hard to understand how the Net promotes centralization. For one thing, its prevailing navigational aids, such as search engine algorithms, form feedback loops. By directing people to the most popular sites, they make those sites even more popular. On the web as elsewhere, people stream down the paths of least resistance.
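
The dynamic is easy to caricature in code. Below is a toy simulation in Python of that popularity feedback loop; the numbers are invented for illustration, and it is a cartoon of rich-get-richer dynamics, not a model of any real search engine:

    import random

    def top10_share(n_sites=500, n_visits=50_000, follow_popularity=0.9, seed=1):
        """Toy rich-get-richer loop: with probability follow_popularity a
        visitor is steered toward sites in proportion to their existing
        traffic (as a popularity-ranked navigational aid would steer them);
        otherwise the visitor picks a site uniformly at random."""
        rng = random.Random(seed)
        visits = [1] * n_sites  # give every site one founding visit
        for _ in range(n_visits):
            if rng.random() < follow_popularity:
                site = rng.choices(range(n_sites), weights=visits)[0]
            else:
                site = rng.randrange(n_sites)
            visits[site] += 1
        visits.sort(reverse=True)
        return sum(visits[:10]) / sum(visits)  # traffic share of the top ten

    print(top10_share())  # a handful of sites captures a large share

Even though every site starts out identical, steering visitors toward what is already popular concentrates most of the traffic on a few winners.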

The predominant means of making money on the Net — collecting small sums from small transactions — also promotes centralization. It is only by aggregating vast quantities of content, data, and traffic that businesses can turn large profits. That's why companies like Microsoft and Google have been so aggressive in buying up smaller web properties. Google, which has been acquiring companies at the rate of about one a week, has disclosed that its ultimate goal is to "store 100% of user data."

As the dominant web companies grow, they are able to gain ever larger economies of scale through massive capital investments in the "server farms" that store and process online data. That, too, promotes consolidation and centralization. Executives of Yahoo and Sun Microsystems have recently predicted that control over the net's computing infrastructure will ultimately lie in the hands of five or six organizations.

To what end will the web giants deploy their power? They will, of course, seek to further their own commercial or political interests by monitoring, analyzing, and manipulating the behavior of "users." The connection of previously untethered computers into a single programmable system has created "a new apparatus of control," to quote NYU's Alexander Galloway. Even though the Internet has no center, technically speaking, control can be wielded, through software code, from anywhere. What's different, in comparison to the physical world, is that acts of control are more difficult to detect.

So it's not Hu Jintao who is deluded in believing that the net might serve as a powerful tool for central control. It is those who assume otherwise. I used to count myself among them. But I've changed my mind.


AUBREY de GREY
Gerontologist; chairman and chief science officer of the Methuselah Foundation; author, Ending Aging

Curiosity is addictive, and this is not an entirely good thing

The words "science" and "technology," or equivalently the words "research" and "development," are used in the same breath so readily that one might easily presume that they are joined at the hip: that their goals are indistinguishable, and that those who are good at one are, if not necessarily equally good at the other, at least quite good at evaluating the quality of work in the other. I grew up with this assumption, but the longer I work at the interface between science and technology, the more I find myself having to accept that it is false — that most scientists are rather poor at the type of thinking that identifies efficient new ways to get things done, and that, likewise, technologists are mostly not terribly good at identifying efficient ways to find things out.

I've come to feel that there are several reasons underlying this divide.

A major one is the divergent approaches of scientists and technologists to the use of evidence. In basic research, it is exceptionally easy to be seduced by one's data — to see a natural interpretation of it and to overlook the existence of other, comparably economical interpretations of it that lead to dramatically different conclusions. It therefore makes sense for scientists to give the greatest weight, when evaluating the evidence for and against a given hypothesis, to the most direct observational or experimental evidence at hand.

Technologists, on the other hand, succeed best when they stand back from the task before them, thinking laterally about ways in which ostensibly irrelevant techniques might be applied to solve one or another component of the problem. The technologist's approach, when applied to science, is likely to result all too often in wasted time, as experiments are performed that contain too many departures from previous work to allow the drawing of firm conclusions either way concerning the hypothesis of interest.

Conversely, applying the scientist's methodology to technological endeavours can also result in wasted time, resulting from overly small steps away from techniques already known to be futile, like trying to fly by flapping mechanical wings.

But there's another difference between the characteristic mindsets of scientists and technologists, and I've come to view it as the most problematic. Scientists are avowedly "curiosity-driven" rather than "goal-directed" — they are spurred by the knowledge that, throughout the history of civilisation, innumerable useful technologies have become possible not through the stepwise execution of a predefined plan, but rather through the purposely undirected quest for knowledge, letting a dynamically-determined sequence of experiments lead where it may.

That logic is as true as it ever was, and any technologist who doubts it need only examine the recent history of science to change his mind. However, it can be — and, in my view, all too often is — taken too far. A curiosity-driven sequence of experiments is useful not because of the sequence, but because of the technological opportunities that emerge at the end of the sequence. The sequence is not an end in itself. And this is rather important to keep in mind. Any scientist, on completing an experiment, is spoilt for choice concerning what experiment to do next — or, more prosaically, concerning what experiment to apply for funding to do next.

The natural criterion for making this choice is the likelihood that the experiment will generate a wide range of answers to technologically important questions, thereby providing new technological opportunities. But an altogether more frequently adopted criterion, in practice, is that the experiment will generate a wide range of new questions — new reasons to do more experiments. This is only indirectly useful, and I believe that in practice it is indeed less frequently useful than programs of research designed with one eye on the potential for eventual technological utility.

Why, then, is it the norm? Simply because it is the more attractive to those who are making these decisions — the curiosity-driven scientists (whether the grant applicants or the grant reviewers) themselves. Curiosity is addictive: both emotionally and in their own enlightened self-interest, scientists want reasons to do more science, not more technology. But as a society we need science to be as useful as possible, as quickly as possible, and this addiction slows us down.


HELENA CRONIN
Philosopher, London School of Economics; director and founder, Darwin@LSE; author, The Ant and the Peacock

More dumbbells but more Nobels: Why men are at the top

What gives rise to the most salient, contested and misunderstood of sex differences… differences that see men persistently walk off with the top positions and prizes, whether influence or income, whether heads of state or CEOs… differences that infuriate feminists, preoccupy policy-makers, galvanize legislators and spawn 'diversity' committees and degrees in gender studies?

I used to think that these patterns of sex differences resulted mainly from average differences between men and women in innate talents, tastes and temperaments. After all, in talents men are on average more mathematical, more technically minded, women more verbal; in tastes, men are more interested in things, women in people; in temperaments, men are more competitive, risk-taking, single-minded, status-conscious, women far less so. And therefore, even where such differences are modest, the distribution of these 3 Ts among males will necessarily be different from that among females — and so will give rise to notable differences between the two groups. Add to this some bias and barriers — a sexist attitude here, a lack of child-care there. And the sex differences are explained. Or so I thought.

But I have now changed my mind. Talents, tastes and temperaments play fundamental roles. But they alone don't fully explain the differences. It is a fourth T that most decisively shapes the distinctive structure of male-female differences. That T is Tails — the tails of these statistical distributions. Females are much of a muchness, clustering round the mean. But, among males, the variance — the difference between the most and the least, the best and the worst — can be vast. So males are almost bound to be over-represented both at the bottom and at the top. I think of this as 'more dumbbells but more Nobels'.

Consider the mathematics sections in the USA's National Academy of Sciences: 95% male. Which contributes most to this predominance — higher means or larger variance? One calculation yields the following answer. If the sex difference between the means was obliterated but the variance was left intact, male membership would drop modestly to 91%. But if the means were left intact but the difference in the variance was obliterated, male membership would plummet to 64%. The overwhelming male predominance stems largely from greater variance.
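
The arithmetic behind that kind of estimate is easy to reproduce. Below is an illustrative sketch in Python; the means, variances, and cutoff are invented for the example (they are not the actual figures behind the NAS calculation), but they show the same pattern, with the variance difference doing most of the work at an extreme cutoff:

    import math

    def upper_tail(mean, sd, cutoff):
        """P(X > cutoff) for a normal distribution, computed via the
        complementary error function."""
        return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2.0)))

    def pct_male(cutoff, m_mean, m_sd, f_mean=0.0, f_sd=1.0):
        """Percentage of those above the cutoff who are male, assuming
        equal numbers of men and women in the population."""
        m = upper_tail(m_mean, m_sd, cutoff)
        f = upper_tail(f_mean, f_sd, cutoff)
        return 100.0 * m / (m + f)

    CUT = 4.0  # an elite cutoff, four female standard deviations up
    print(pct_male(CUT, 0.3, 1.15))  # mean and variance both differ -> ~95% male
    print(pct_male(CUT, 0.0, 1.15))  # variance difference only      -> ~89% male
    print(pct_male(CUT, 0.3, 1.0))   # mean difference only          -> ~77% male

Removing the variance difference costs far more of the male predominance at this cutoff than removing the mean difference does, which is the shape of the result reported above.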

Similarly, consider the most intellectually gifted of the USA population, an elite 1%. The difference between their bottom and top quartiles is so wide that it encompasses one-third of the entire ability range in the American population, from IQs above 137 to IQs beyond 200. And who's overwhelmingly in the top quartile? Males. Look, for instance, at the boy:girl ratios among adolescents for scores in mathematical-reasoning tests: scores of at least 500, 2:1; scores of at least 600, 4:1; scores of at least 700, 13:1.

Admittedly, those examples are writ large — exceptionally high aptitude and a talent that strongly favours males and with a notably long right-hand tail. Nevertheless, the same combined causes — the forces of natural selection and the facts of statistical distribution — ensure that this is the default template for male-female differences.

Let's look at those causes. The legacy of natural selection is twofold: mean differences in the 3 Ts and males generally being more variable; these two features hold for most sex differences in our species and, as Darwin noted, greater male variance is ubiquitous across the entire animal kingdom. As to the facts of statistical distribution, they are three-fold … and watch what happens at the end of the right tail: first, for overlapping bell-curves, even with only a small difference in the means, the ratios become more inflated as one goes further out along the tail; second, where there's greater variance, there's likely to be a dumbbells-and-Nobels effect; and third, when one group has both greater mean and greater variance, that group becomes even more over-represented at the far end of the right tail.

The upshot? When we're dealing with evolved sex differences, we should expect that the further out we go along the right tail, the more we will find men predominating. So there we are: whether or not there are more male dumbbells, there will certainly be — both figuratively and actually — more male Nobels.

Unfortunately, however, this is not the prevailing perspective in current debates, particularly where policy is concerned. On the contrary, discussions standardly zoom in on the means and blithely ignore the tails. So sex differences are judged to be small. And thus it seems that there's a gaping discrepancy: if women are as good on average as men, why are men overwhelmingly at the top? The answer must be systematic unfairness — bias and barriers. Therefore, so the argument runs, it is to bias and barriers that policy should be directed. And so the results of straightforward facts of statistical distribution get treated as political problems — as 'evidence' of bias and barriers that keep women back and sweep men to the top. (Though how this explains the men at the bottom is an unacknowledged mystery.)

But science has given us biological insights, statistical rules and empirical findings … surely sufficient reason to change one's mind about men at the top.





John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

Copyright © 2008 by Edge Foundation, Inc. All Rights Reserved.