



2008

"WHAT HAVE YOU CHANGED YOUR MIND ABOUT?"

LEWIS WOLPERT
Professor of Biology, University College; Author, Six Impossible Things To Do Before Breakfast


On Pattern Formation

I have for many years worked on pattern formation in the developing embryo, which is the development of spatial organization as seen, for example, in the arm and hand. My main model for pattern formation is based on cells acquiring a positional value. The model proposes that cells have their position specified as in a co-ordinate system, and this determines, depending on their developmental history and their genetic constitution, what they do.

The development of the chick limb illustrates some of the problems. As the wing grows out from the flank, there is a thickened ridge of the covering sheet of cells that secretes special proteins which we think, and this is controversial, specify a region in the cells beneath the ridge that we call the progress zone. At the posterior margin of the limb is the polarising region, which secretes a protein, Sonic Hedgehog. This is a signaling molecule used again and again in the development of the embryo. The normal pattern of digits in the chick wing is 2, 3, and 4. If another polarising region is grafted to the anterior margin, the pattern of digits is 4, 3, 2, 2, 3, 4.

The interpretation is that Sonic Hedgehog sets up a gradient which specifies position, and that with the graft there is a mirror-image gradient. Crick suggested that such a gradient could be set up by diffusion of a molecule like Sonic Hedgehog. We have worked hard to show that this model is correct.
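
To make the gradient idea concrete, here is a minimal numerical sketch of the kind of diffusion-gradient model described above: a morphogen produced by the polarising region decays as it spreads, and cells read thresholds of its steady-state concentration as positional values. The diffusion constant, decay rate and thresholds are invented for illustration only; they are not Wolpert's numbers.

```python
import math

# A toy, one-dimensional sketch of a source-diffusion-degradation gradient.
# All parameters and thresholds below are invented for illustration.
D, k, source = 1.0, 0.5, 10.0        # diffusion constant, decay rate, source strength
length_scale = math.sqrt(D / k)      # steady state is exponential: C(x) = C0 * exp(-x / lambda)

def concentration(x):
    """Steady-state morphogen concentration at distance x from the polarising region."""
    return source * math.exp(-x / length_scale)

def digit_identity(c):
    """Hypothetical threshold readout of positional value (illustrative only)."""
    if c > 4.0:
        return "digit 4"
    if c > 2.0:
        return "digit 3"
    return "digit 2"

for x in [0.5, 1.5, 2.5]:            # positions moving anteriorly away from the source
    c = concentration(x)
    print(f"x = {x:.1f}  concentration = {c:.2f}  ->  {digit_identity(c)}")
```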

The best evidence that it may be a gradient is that if just a small amount of Sonic Hedgehog is placed at the anterior margin, you get just an extra digit 2. If one increases the amount a little more, you get a 3, 2. But is there really a diffusible gradient of Sonic Hedgehog specifying position? The situation is much more complex.

We now think that the model is wrong, as diffusion of a molecule is far too unreliable a mechanism for accurately specifying positional values. The reason we think diffusion cannot work is that there is now good evidence that a diffusing molecule has to pass between and even into cells, and interact with extracellular molecules, making it totally unreliable. A more attractive model might be based on interactions at cell contacts, as in the polarity models proposed by others. Position would be specified by cells talking to each other.

This is a serious change in my thinking.


RAY KURZWEIL
Inventor and Technologist; Author, The Singularity Is Near: When Humans Transcend Biology

SETI

I've come to reject the common "SETI" (search for extraterrestrial intelligence) wisdom that there must be millions of technology-capable civilizations within our "light sphere" (the region of the Universe accessible to us by electromagnetic communication). The Drake formula provides a means to estimate the number of intelligent civilizations in a galaxy or in the universe. Essentially, the likelihood of a planet evolving biological life that has created sophisticated technology is tiny, but there are so many star systems that there should still be many millions of such civilizations. Carl Sagan's analysis of the Drake formula concluded that there should be around a million civilizations with advanced technology in our galaxy, while Frank Drake estimated around 10,000. And there are many billions of galaxies. Yet we don't notice any of these intelligent civilizations, hence the paradox that Fermi described in his famous comment. So where is everyone?
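
For readers who want to see where such widely differing estimates come from, here is a minimal sketch of the Drake formula itself. The parameter values are illustrative assumptions only; plugging in more or less optimistic guesses is precisely what separates Drake's roughly 10,000 from Sagan's roughly one million.

```python
# The Drake formula: N = R* x fp x ne x fl x fi x fc x L.
# Every value below is an illustrative assumption, not a figure from the essay.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=10,   # new stars formed per year in the Milky Way
    f_p=0.5,     # fraction of stars with planets
    n_e=2,       # habitable planets per planetary system
    f_l=0.5,     # fraction of habitable planets on which life arises
    f_i=0.1,     # fraction of those that evolve intelligence
    f_c=0.1,     # fraction of those that develop detectable technology
    L=10_000,    # years such a civilization remains detectable
)
print(f"N = {N:,.0f} detectable civilizations")   # 500 with these particular guesses
```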

We can readily explain why any one of these civilizations might be quiet. Perhaps it destroyed itself. Perhaps it is following the Star Trek ethical guideline to avoid interference with primitive civilizations (such as ours). These explanations make sense for any one civilization, but it is not credible, in my view, that every one of the billions of technology-capable civilizations that should exist has destroyed itself or decided to remain quiet.

The SETI project is sometimes described as trying to find a needle (evidence of a technical civilization) in a haystack (all the natural signals in the universe). But actually, any technologically sophisticated civilization would be generating trillions of trillions of needles (noticeably intelligent signals). Even if they have switched away from electromagnetic transmissions as a primary form of communication, there would still be vast artifacts of electromagnetic phenomena generated by all of the many computational and communication processes that such a civilization would need to engage in.

Now let's factor in what I call the "law of accelerating returns" (the inherent exponential growth of information technology). The common wisdom (based on what I call the intuitive linear perspective) is that it would take many thousands, if not millions of years, for an early technological civilization to become capable of technology that spanned a solar system. But because of the explosive nature of exponential growth, it will only take a quarter of a millennium (in our own case) to go from sending messages on horseback to saturating the matter and energy in our solar system with sublimely intelligent processes.

The price-performance of computation went from 10^-5 to 10^8 cps per thousand dollars in the 20th century. We also went from about a million dollars to a trillion dollars in the amount of capital devoted to computation, so overall progress in nonbiological intelligence went from 10^-2 to 10^17 cps in the 20th century, which is still short of the human biological figure of 10^26 cps. By my calculations, however, we will achieve around 10^69 cps by the end of the 21st century, thereby greatly multiplying the intellectual capability of our human-machine civilization. Even if we find communication methods superior to electromagnetic transmissions we will nonetheless be generating an enormous number of intelligent electromagnetic signals.
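
As a rough check on how explosive that growth is, the price-performance figures quoted above imply an average doubling time of a little over two years, assuming smooth exponential growth over the century (an idealization, of course). A short sketch of the arithmetic:

```python
import math

# Back-of-envelope arithmetic using the figures quoted above: price-performance
# grew from 10^-5 to 10^8 cps per $1,000 over roughly 100 years. Assuming smooth
# exponential growth (an idealization), compute the implied doubling time.
start, end, years = 1e-5, 1e8, 100.0

growth_factor = end / start              # 10^13 overall
doublings = math.log2(growth_factor)     # about 43 doublings
doubling_time = years / doublings        # about 2.3 years per doubling

print(f"{doublings:.1f} doublings in {years:.0f} years, "
      f"i.e. one doubling roughly every {doubling_time:.1f} years")
```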

According to most analyses of the Drake equation, there should be billions of civilizations, and a substantial fraction of these should be ahead of us by millions of years. That's enough time for many of them to be capable of vast galaxy-wide technologies. So how can it be that we haven't noticed any of the trillions of trillions of "needles" that each of these billions of advanced civilizations should be creating?

My own conclusion is that they don't exist. If it seems unlikely that we would be in the lead in the universe, here on the third planet of a humble star in an otherwise undistinguished galaxy, it's no more perplexing than the existence of our universe with its ever so precisely tuned formulas to allow life to evolve in the first place.


MARK HENDERSON
Science Editor, The Times, London

Consulting the public about science isn't always a waste of time — but consulting bioethicists often is

I used to take the view that public consultations about science policy were pointless. While the idea of asking ordinary people's opinions about controversial research sounds quite reasonable, it is astonishingly difficult to do well.

When governments canvass about issues such as biotechnology or embryo research, what usually happens is that the whole exercise gets captured by special interests.

A vocal minority with strong opinions that are already widely known and impervious to argument — think Greenpeace and the embryo-rights lobby — get their responses in early and often. The much larger proportion of people who consider themselves neutral, open to persuasion, uninformed or uninterested rarely bother to take part. Public opinion is then deemed to have spoken, without reflecting true public opinion at all. Wouldn't it be better, I thought, to let scientists get on with their research, subject to occasional oversight by specialist panels with appropriate ethical expertise?

Well, to a point. Public consultations can indeed be worse than useless, particularly when the British Government has done the consulting: its exercises on GM crops and embryo research laws were especially ill-judged. As Sir David King said recently, they have taught us what not to do.

Their failure, though, has stimulated some interesting thinking that has convinced me that it is possible to engage ordinary people in quite complex scientific issues, without letting the usual suspects shout everybody else down.

The Human Fertilisation and Embryology Authority's recent work on cytoplasmic hybrid embryos is a case in point. The traditional part of the exercise had familiar results: pro-lifers and anti-genetic engineering groups mobilised, so 494 of the 810 written submissions were hostile. Careful questioning, however, established that almost all these came from people who oppose all embryo research in all circumstances.

A more scientific poll found 61 per cent backing for interspecies embryos, if these were to be used for medical research. Detailed deliberative workshops revealed that once the rationale for the experiments was properly explained, large majorities overcame "instinctive repulsion" and supported the work.

If consultations are properly run in this way, there is a lot to be said for them. They can actually build public understanding of potentially controversial research, and shoot the fox of science's shrillest critics.

In many ways, they are rather more helpful than seeking advice from bioethicists, whose importance to ethical research I've increasingly come to doubt. It's not that philosophy of science is not a worthwhile academic discipline — it can be stimulating and thought-provoking. The problem is that a bioethicist can almost always be found to support any position.

Leon Kass and John Harris are both eminent bioethicists, yet the counsel you would expect them to give on embryo research laws is going to be rather different. Politicians — or scientists — can and do deliberately appoint ethicists according to their pre-existing world views, then trumpet their advice as somehow independent and authoritative, as if their subject were physics.

If specialist bioethics has a role to play in regulation of science, it is in framing the questions that researchers and the public at large should consider. It can't just be a fig leaf for decisions people were always going to make anyway.


DAVID GOODHART
Founder & Editor, Prospect Magazine

The nation state is too big for the local things, too small for the international things and the root of most of the world's ills

This was a central part of liberal baby-boomer common sense when I was growing up, especially if you came from a (still) dominant country like Britain. Moreover to show any sense of national feeling — apart from contempt for your national traditions — was a sign that you lacked political sophistication.

I now believe this is mainly nonsense. Nationalism can, of course, be a destructive force and we were growing up in the shadow of its 19th and 20th century excesses. In reaction to that most of the civilized world had, by the mid 20th century, signed up to a liberal universalism (as embodied in the UN charter) that stressed the moral equality of all humans. I am happy to sign up to that too, of course, but I now no longer see that commitment as necessarily conflicting with belief in the nation state. Indeed I think many anti-national liberals make a sort of category error — belief in the moral equality of all humans does not mean that we have the same obligations to all humans. Membership of the political community of a modern nation state places quite onerous duties on us to obey laws and pay taxes, but also grants us many rights and freedoms — and they make our fellow citizens politically "special" to us in a way that citizens of other countries are not. This "specialness" of national citizenship is most vividly illustrated in the factoid that every year in Britain we spend 25 times more on the National Health Service than we do on development aid.

Moreover if the nation state can be a destructive force it is also at the root of what many liberals hold dear: representative democracy, accountability, the welfare state, redistribution of wealth and the very idea of equal citizenship. None of these things have worked to any significant extent beyond the confines of the nation state, which is not to say that they couldn't at some point in the future (indeed they already do so to a small extent in the EU). If you look around at the daily news — contested elections in Kenya, death in Pakistan — most of the bad news these days comes from too little nation state not too much. And why was rapid economic development possible in the Asian tigers but not in Africa? Surely the existence of well functioning nation states and a strong sense of national solidarity in the tigers had something to do with it.

And in rich western countries, as other forms of human solidarity — social class, religion, ethnicity and so on — have been replaced by individualism and narrower group identities, holding on to some sense of national solidarity remains more important than ever to the good society. A feeling of empathy towards strangers who are fellow citizens (and with whom one shares history, institutions and social and political obligations) underpins successful modern states, but this need not be a feeling that stands in the way of empathy towards all humans. It just remains true that charity begins at home.


W. DANIEL HILLIS
Physicist, Computer Scientist; Chairman, Applied Minds, Inc.; Author, The Pattern on the Stone

Try the Experiment Yourself

As a child, I was told that hot water freezes faster than cold water. This was easy to refute in principle, so I did not believe it.

Many years later I learned that Aristotle had described the effect in his Meteorologica,

"The fact that the water has previously been warmed contributes to its freezing quickly: for so it cools sooner. Hence many people, when they want to cool hot water quickly, begin by putting it in the sun. So the inhabitants of Pontus when they encamp on the ice to fish (they cut a hole in the ice and then fish) pour warm water round their reeds that it may freeze the quicker, for they use the ice like lead to fix the reeds. " (E. W. Webster translation)

I was impressed as always by Aristotle's clarity, confidence and specificity. Of course, I do not expect you to be convinced that it is true simply because Aristotle said so, especially since his explanation is that "warm and cold react upon one another by recoil." (Aristotle, like us, was very good at making up explanations to justify his beliefs.) Instead, I hope that you will have the pleasure of being convinced, as I was, by trying the experiment yourself.


NASSIM TALEB
Epistemologist of Randomness and Applied Statistician; Author, The Black Swan

The Irrelevance of "Probability"

I spent a long time believing in the centrality of probability in life and advocating that we should express everything in terms of degrees of credence, with unitary probabilities as a special case for total certainties, and null for total implausibility. Critical thinking, knowledge, beliefs, everything needed to be probabilized. Until I came to realize, twelve years ago, that I was wrong in this notion that the calculus of probability could be a guide to life and help society. Indeed, it is only in very rare circumstances that probability (by itself) is a guide to decision making. It is a clumsy academic construction, extremely artificial, and nonobservable. Probability is backed out of decisions; it is not a construct to be handled in a standalone way in real-life decision-making. It has caused harm in many fields.

Consider the following statement. "I think that this book is going to be a flop. But I would be very happy to publish it." Is the statement incoherent? Of course not: even if the book was very likely to be a flop, it may make economic sense to publish it (for someone with deep pockets and the right appetite) since one cannot ignore the small possibility of a handsome windfall, or the even smaller possibility of a huge windfall. We can easily see that when it comes to small odds, decision making no longer depends on the probability alone. It is the pair probability times payoff (or a series of payoffs), the expectation, that matters. On occasion, the potential payoff can be so vast that it dwarfs the probability — and these are usually real world situations in which probability is not computable.
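
A toy calculation makes the publishing example concrete. The probabilities and payoffs below are invented purely for illustration; the point is only that a book that will very probably flop can still have a positive expectation once payoffs enter the picture.

```python
# Invented outcome table for the publishing example (illustrative numbers only).
outcomes = [
    ("flop",          0.90,    -50_000),   # very likely modest loss
    ("modest seller", 0.09,    200_000),   # small chance of a decent profit
    ("huge windfall", 0.01,  5_000_000),   # tiny chance of a very large payoff
]

expected_value = sum(p * payoff for _, p, payoff in outcomes)
print(f"P(flop) = 0.90, yet expected value = ${expected_value:,.0f}")
# With these numbers the expectation is about +$23,000, so "I think it will
# be a flop, but I would be very happy to publish it" is perfectly coherent.
```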

Consequently, there is a difference between knowledge and action. You cannot naively rely on scientific statistical knowledge (as they define it) or what the epistemologists call "justified true belief" for non-textbook decisions. Statistically oriented modern science is typically based on Right/Wrong with a set confidence level, stripped of consequences. Would you take a headache pill if it was deemed effective at a 95% confidence level? Most certainly. But would you take the pill if it is established that it is "not lethal" at a 95% confidence level? I hope not.

When I discuss the impact of the highly improbable ("black swans"), people make the automatic mistake of thinking that the message is that these "black swans" are necessarily more probable than assumed by conventional methods. They are mostly less probable. Consider that, in a winner-take-all environment such as the one in the arts, the odds of success are low, since there are fewer successful people, but the payoff is disproportionately high. So, in a fat tailed environment, what I call "Extremistan", rare events are less frequent (their probability is lower), but they are so effective that their contribution to the total pie is more substantial.

[Technical note: the distinction is, simply, between raw probability, P[x>K], i.e. the probability of exceeding K, and E[x|x>K], the expectation of x conditional on x>K. It is the difference between the zeroth moment and the first moment. The latter is what usually matters for decisions. And it is the (conditional) first moment that needs to be the core of decision making. What I saw in 1995 was that an out-of-the-money option value increases when the probability of the event decreases, making me feel that everything I thought until then was wrong.]
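
The distinction in the technical note is easy to see numerically. The sketch below estimates both quantities by simulation for a Gaussian and for a fat-tailed Pareto distribution; the particular distributions, threshold and tail index are assumptions chosen only to show that P[x>K] and E[x|x>K] answer different questions.

```python
import random

random.seed(0)
N, K = 1_000_000, 3.0        # sample size and threshold (illustrative choices)

def tail_stats(samples, K):
    """Estimate (P[x > K], E[x | x > K]) from a list of samples."""
    exceed = [x for x in samples if x > K]
    p = len(exceed) / len(samples)
    cond_mean = sum(exceed) / len(exceed) if exceed else float("nan")
    return p, cond_mean

gaussian = [random.gauss(0, 1) for _ in range(N)]
pareto = [random.paretovariate(2.5) for _ in range(N)]   # fat-tailed, tail index 2.5

for name, xs in [("Gaussian(0,1)", gaussian), ("Pareto(alpha=2.5)", pareto)]:
    p, m = tail_stats(xs, K)
    print(f"{name:17s}  P[x>{K:.0f}] = {p:.4f}   E[x | x>{K:.0f}] = {m:.2f}")

# For the Gaussian, exceedances barely clear the threshold (conditional mean
# near 3.3); for the Pareto they average about 5. The exceedance probability
# alone says nothing about how bad things are once the threshold is crossed.
```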

What causes severe mistakes is that, outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5% probability of an earthquake of magnitude 3 or higher, a 2% probability of one of 4 or higher, etc. The same with wars: you have a risk of different levels of damage, each with a different probability. "What is the probability of war?" is a meaningless question for risk assessment.

So it is wrong to just look at a single probability of a single event in cases of richer possibilities (like focusing on such questions as "what is the probability of losing a million dollars?" while ignoring that, conditional on losing more than a million dollars, you may have an expected loss of twenty million, one hundred million, or just one million). Once again, real life is not a casino with simple bets. This is the error that helps the banking system go bust with an astonishing regularity — I've shown that institutions that are exposed to negative black swans, like banks and some classes of insurance ventures, have almost never been profitable over long periods. The problem with the current subprime mess, to take an illustrative case, is not so much that the "quants" and other pseudo-experts in bank risk management were wrong about the probabilities (they were), but that they were severely wrong about the different layers of depth of potential negative outcomes. For instance, Morgan Stanley lost about ten billion dollars (so far) while allegedly having foreseen a subprime crisis and executed hedges against it — they just did not realize how deep it would go and had open exposure to the big tail risks. This is routine: a friend who went bust during the crash of 1987 told me: "I was betting that it would happen but I did not know it would go that far".
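
As a concrete (and entirely invented) illustration of those layers, a single headline probability of loss hides what happens conditional on the loss actually occurring:

```python
# Invented loss table (all probabilities and magnitudes are illustrative only).
loss_layers = [            # (loss in dollars, probability)
    (1_000_000,   0.040),
    (20_000_000,  0.009),
    (100_000_000, 0.001),
]

p_exceed = sum(p for _, p in loss_layers)                          # P[loss >= $1M] = 5%
cond_loss = sum(loss * p for loss, p in loss_layers) / p_exceed    # E[loss | loss >= $1M]

print(f"P[loss >= $1M] = {p_exceed:.1%}")
print(f"E[loss | loss >= $1M] = ${cond_loss:,.0f}")
# The headline "5% chance of losing a million or more" says little: conditional
# on crossing the threshold, the expected loss here is about $6.4 million.
```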

The point is mathematically simple but does not register easily. I've enjoyed giving math students the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is ~16%. What are the odds of exceeding it under a distribution of fatter tails (with the same mean and variance)? The right answer: lower, not higher — the number of deviations drops, but the few that take place matter more. It was entertaining to see that most of the graduate students got it wrong. Those who are untrained in the calculus of probability have a far better intuition of these matters.
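
A quick simulation bears the quiz out. Below, the fat-tailed alternative is a Student-t with three degrees of freedom rescaled to unit variance so that its mean and variance match the Gaussian's; the choice of three degrees of freedom is an assumption made just for illustration.

```python
import math
import random

random.seed(1)
N, NU = 1_000_000, 3      # sample size; degrees of freedom for the fat-tailed case

def scaled_t():
    """Student-t(NU) draw rescaled to variance 1, so mean and variance match the Gaussian."""
    z = random.gauss(0, 1)
    chi2 = random.gammavariate(NU / 2.0, 2.0)          # chi-square with NU degrees of freedom
    return (z / math.sqrt(chi2 / NU)) * math.sqrt((NU - 2) / NU)

gauss_exceed = sum(random.gauss(0, 1) > 1.0 for _ in range(N)) / N
fat_exceed = sum(scaled_t() > 1.0 for _ in range(N)) / N

print(f"P[X > 1 sd], Gaussian       : {gauss_exceed:.3f}")   # about 0.159
print(f"P[X > 1 sd], fat-tailed t(3): {fat_exceed:.3f}")     # about 0.091, i.e. lower
```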

Another complication is that just as probability and payoff are inseparable, so one cannot extract another complicated component, utility, from the decision-making equation. Fortunately, the ancients, with all their tricks and accumulated wisdom in decision-making, knew a lot of that, at least better than modern-day probability theorists. Let us stop systematically treating them as if they were idiots. Most texts blame the ancients for their ignorance of the calculus of probability — the Babylonians, Egyptians, and Romans, in spite of their engineering sophistication, and the Arabs, in spite of their taste for mathematics, are blamed for not having produced a calculus of probability (the latter being, incidentally, a myth, since Umayyad scholars used relative word frequencies to determine the authorship of holy texts and decrypt messages). The reason was foolishly attributed to theology, lack of sophistication, lack of something people call the "scientific method", or belief in fate. The ancients just made decisions in a more ecologically sophisticated manner than modern epistemology-minded people. They integrated skeptical Pyrrhonian empiricism into decision making. As I said, consider that belief (i.e., epistemology) and action (i.e., decision-making), the way they are practiced, are largely not consistent with one another.

Let us apply the point to the current debate on carbon emissions and climate change. Correspondents keep asking me whether the climate worriers are basing their claims on shoddy science, and whether, owing to nonlinearities, their forecasts are marred by such possible error that we should ignore them. Now, even if I agreed that it were shoddy science; even if I agreed with the statement that the climate folks were most probably wrong, I would still opt for the most ecologically conservative stance — leave planet Earth the way we found it. Consider the consequences of the very remote possibility that they may be right, or, worse, the even more remote possibility that they may be extremely right.


DANIEL KAHNEMAN
Psychologist, Princeton; Recipient, 2002 Nobel Prize in Economic Sciences

The sad tale of the aspiration treadmill

The central question for students of well-being is the extent to which people adapt to circumstances.  Ten years ago the generally accepted position was that there is considerable hedonic adaptation to life conditions. The effects of circumstances on life satisfaction appeared surprisingly small: the rich were only slightly more satisfied with their lives than the poor, the married were happier than the unmarried but not by much, and neither age nor moderately poor health diminished life satisfaction.  Evidence that people adapt — though not completely — to becoming paraplegic or winning the lottery supported the idea of a "hedonic treadmill": we move but we remain in place.  The famous "Easterlin paradox" seemed to nail it down:  Self-reported life satisfaction has changed very little in prosperous countries over the last fifty years, in spite of large increases in the standard of living.

Hedonic adaptation is a troubling concept, regardless of where you stand on the political spectrum.  If you believe that economic growth is the key to increased well-being, the Easterlin paradox is bad news.  If you are a compassionate liberal, the finding that the sick and the poor are not very miserable takes wind from your sails.   And if you hope to use a measure of well-being to guide social policy you need an index that will pick up permanent effects of good policies on the happiness of the population. 

About ten years ago I had an idea that seemed to solve these difficulties: perhaps people's satisfaction with their life is not the right measure of well-being.  The idea took shape in discussions with my wife Anne Treisman, who was (and remains) convinced that people are happier in California (or at least Northern California) than in most other places.  The evidence showed that Californians are not particularly satisfied with their life, but Anne was unimpressed.  She argued that Californians are accustomed to a pleasant life and come to expect more pleasure than the unfortunate residents of other states.  Because they have a high standard for what life should be, Californians are not more satisfied than others, although they are actually happier.  This idea included a treadmill, but it was not hedonic – it was an aspiration treadmill: happy people have high aspirations.  

The aspiration treadmill offered an appealing solution to the puzzles of adaptation: it suggested that measures of life satisfaction underestimate the well-being benefits of life circumstances such as income, marital status or living in California. The hope was that measures of experienced happiness would be more sensitive. I eventually assembled an interdisciplinary team to develop a measure of experienced happiness (Kahneman, Krueger, Schkade, Stone and Schwarz, 2004) and we set out to demonstrate the aspiration treadmill. Over several years we asked substantial samples of women to reconstruct a day of their life in detail. They indicated the feelings they had experienced during each episode, and we computed a measure of experienced happiness: the average quality of affective experience during the day. Our hypothesis was that differences in life circumstances would have more impact on this measure than on life satisfaction. We were so convinced that when we got our first batch of data, comparing teachers in top-rated schools to teachers in inferior schools, we actually misread the results as confirming our hypothesis. In fact, they showed the opposite: the groups of teachers differed more in their work satisfaction than in their affective experience at work. This was the first of many such findings: income, marital status and education all influence experienced happiness less than satisfaction, and we could show that the difference is not a statistical artifact. Measuring experienced happiness turned out to be interesting and useful, but not in the way we had expected. We had simply been wrong.

Experienced happiness, we learned, depends mainly on personality and on the hedonic value of the activities to which people allocate their time.  Life circumstances influence the allocation of time, and the hedonic outcome is often mixed: high-income women have more enjoyable activities than the poor, but they also spend more time engaged in work that they do not enjoy; married women spend less time alone, but more time doing tedious chores.  Conditions that make people satisfied with their life do not necessarily make them happy. 

Social scientists rarely change their minds, although they often adjust their position to accommodate inconvenient facts. But it is rare for a hypothesis to be so thoroughly falsified.  Merely adjusting my position would not do; although I still find the idea of an aspiration treadmill attractive, I had to give it up.

To compound the irony, recent findings from the Gallup World Poll raise doubts about the puzzle itself.  The most dramatic result is that when the entire range of human living standards is considered, the effects of income on a measure of life satisfaction (the "ladder of life") are not small at all.  We had thought income effects are small because we were looking within countries.  The GDP differences between countries are enormous, and highly predictive of differences in life satisfaction.  In a sample of over 130,000 people from 126 countries, the correlation between the life satisfaction of individuals and the GDP of the country in which they live was over .40 – an exceptionally high value in social science.  Humans everywhere, from Norway to Sierra Leone, apparently evaluate their life by a common standard of material prosperity, which changes as GDP increases. The implied conclusion, that citizens of different countries do not adapt to their level of prosperity, flies against everything we thought we knew ten years ago.  We have been wrong and now we know it.  I suppose this means that there is a science of well-being, even if we are not doing it very well.





John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher

contact: [email protected]
Copyright © 2008 by
Edge Foundation, Inc
All Rights Reserved.