Edge 113 — April 2, 2003

(8,000 words)


What interests me is the question of how humans learn to live with uncertainty. Before the scientific revolution determinism was a strong ideal. Religion brought about a denial of uncertainty, and many people knew that their kin or their race was exactly the one that God had favored. They also thought they were entitled to get rid of competing ideas and the people that propagated them. How does a society change from this condition into one in which we understand that there is this fundamental uncertainty? How do we avoid the illusion of certainty to produce the understanding that everything, whether it be a medical test or deciding on the best cure for a particular kind of cancer, has a fundamental element of uncertainty?



On February 27, 2003, Edge Foundation, Inc. celebrated the 6th anniversary of Edge at the "Edge Science Dinner" (formerly known as "The Billionaires' Dinner") at Cibo's Restaurant, in Monterey, California.






"Isn’t more information always better?" asks Gerd Gigerenzer. "Why else would bestsellers on how to make good decisions tell us to consider all pieces of information, weigh them carefully, and compute the optimal choice, preferably with the aid of a fancy statistical software package? In economics, Nobel prizes are regularly awarded for work that assumes that people make decisions as if they had perfect information and could compute the optimal solution for the problem at hand. But how do real people make good decisions under the usual conditions of little time and scarce information? Consider how players catch a ball—in baseball, cricket, or soccer. It may seem that they would have to solve complex differential equations in their heads to predict the trajectory of the ball. In fact, players use a simple heuristic. When a ball comes in high, the player fixates the ball and starts running. The heuristic is to adjust the running speed so that the angle of gaze remains constant —that is, the angle between the eye and the ball. The player can ignore all the information necessary to compute the trajectory, such as the ball’s initial velocity, distance, and angle, and just focus on one piece of information, the angle of gaze."

Gigerenzer provides an alternative to the view of the mind as a cognitive optimizer, and also to its mirror image, the mind as a cognitive miser. The fact that people ignore information has often been mistaken for a form of irrationality, and shelves are filled with books that explain how people routinely commit cognitive fallacies. In seven years of research, he and his research team at the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin have worked out what he believes is a viable alternative: the study of fast and frugal decision-making, that is, the study of the smart heuristics people actually use to make good decisions. In order to make good decisions in an uncertain world, one sometimes has to ignore information. The art is knowing what one doesn't have to know.

Gigerenzer's work is of importance to people interested in how the human mind actually solves problems. In this regard his work is influential among psychologists, economists, philosophers, and animal biologists, among others. It is also of interest to people who design smart systems to solve problems; he provides illustrations of how one can construct fast and frugal strategies for coronary care unit decisions, personnel selection, and stock picking.

"My work will, I hope, change the way people think about human rationality," he says. "Human rationality cannot be understood, I argue, by the ideals of omniscience and optimization. In an uncertain world, there is no known optimal solution for most interesting and urgent problems. When human behavior fails to meet these Olympian expectations, many psychologists conclude that the mind is doomed to irrationality. These are the two dominant views today, and neither the extreme of hyper-rationality nor that of irrationality captures the essence of human reasoning. My aim is not so much to criticize the status quo as to provide a viable alternative."


GERD GIGERENZER is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences. He is the author of Calculated Risks: How To Know When Numbers Deceive You, the German translation of which won the Scientific Book of the Year Prize in 2002. He has also published two academic books on heuristics, Simple Heuristics That Make Us Smart (with Peter Todd & The ABC Research Group) and Bounded Rationality: The Adaptive Toolbox (with Reinhard Selten, a Nobel laureate in economics).

Gerd Gigerenzer's Edge Bio Page


At the beginning of the 20th century the father of modern science fiction, Herbert George Wells, said in his writings on politics, "If we want to have an educated citizenship in a modern technological society, we need to teach them three things: reading, writing, and statistical thinking." At the beginning of the 21st century, how far have we gotten with this program? In our society, we teach most citizens reading and writing from the time they are children, but not statistical thinking. John Allen Paulos has called this phenomenon innumeracy.

There are many stories documenting this problem. For instance, there was the weather forecaster who announced on American TV that if the probability that it will rain on Saturday is 50 percent and the probability that it will rain on Sunday is 50 percent, the probability that it will rain over the weekend is 100 percent. In another recent case, reported by New Scientist, an inspector from the Food and Drug Administration visited a restaurant in Salt Lake City famous for its quiches made from four fresh eggs. She told the owner that according to FDA research every fourth egg has salmonella bacteria, so the restaurant should use only three eggs in a quiche. We can laugh about these examples because we easily understand the mistakes involved, but there are more serious issues. When it comes to medical and legal questions, we need exactly the kind of education that H. G. Wells was asking for, and we haven't gotten it.
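
The forecaster's mistake can be checked in a few lines. As a minimal sketch, assuming (purely for illustration) that rain on the two days is independent, the probabilities combine as a complement, not a sum:

```python
# Probabilities of (assumed independent) events do not add. The chance of
# rain over the weekend is the complement of "no rain on Saturday AND
# no rain on Sunday".
p_sat, p_sun = 0.5, 0.5
p_weekend = 1 - (1 - p_sat) * (1 - p_sun)
print(p_weekend)  # 0.75, not the forecaster's 100 percent
```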

What interests me is the question of how humans learn to live with uncertainty. Before the scientific revolution determinism was a strong ideal. Religion brought about a denial of uncertainty, and many people knew that their kin or their race was exactly the one that God had favored. They also thought they were entitled to get rid of competing ideas and the people that propagated them. How does a society change from this condition into one in which we understand that there is this fundamental uncertainty? How do we avoid the illusion of certainty to produce the understanding that everything, whether it be a medical test or deciding on the best cure for a particular kind of cancer, has a fundamental element of uncertainty?

For instance, I've worked with physicians and physician-patient associations to try to teach the acceptance of uncertainty and the reasonable way to deal with it. Take HIV testing as an example. Brochures published by the Illinois Department of Health say that testing positive for HIV means that you have the virus. Thus, if you are an average person who is not in a particular risk group but test positive for HIV, this might lead you to choose to commit suicide, or move to California, or do something else quite drastic. But AIDS information in many countries is running on the illusion of certainty. The actual situation is rather like this: If you have about 10,000 people who are in no risk group, one of them will have the virus, and will test positive with practical certainty. Among the other 9,999, another one will test positive, but it's a false positive. In this case we have two who test positive, although only one of them actually has the virus. Knowing about these very simple things can prevent serious disasters, of which there is unfortunately a record.
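
The HIV arithmetic above can be written out as a natural-frequency count. This is a sketch using the illustrative round numbers from the text, not precise epidemiological rates:

```python
# Among roughly 10,000 low-risk people: one carrier, who tests positive
# with practical certainty, and about one false positive among the
# remaining 9,999.
true_positives = 1
false_positives = 1
ppv = true_positives / (true_positives + false_positives)
print(ppv)  # 0.5: a positive test here is a coin flip, not a certainty
```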

Still, medical societies, individual doctors, and individual patients either produce the illusion of certainty or want it. Everyone knows Benjamin Franklin's adage that there is nothing certain in this world except death and taxes, but the doctors I interviewed tell me something different. They say, "If I would tell my patients what we don't know, they would get very nervous, so it's better not to tell them." Thus, this is one important area in which there is a need to get people — including individual doctors or lawyers in court — to be mature citizens and to help them understand and communicate risks.

Representation of information is important. In the case of many so-called cognitive illusions, the problem results from the difficulty of working with probabilities. The problem largely disappears the moment you give the person the information in natural frequencies. You basically put the mind back in a situation where it's much easier to understand these probabilities. We can prove that natural frequencies facilitate actual computations, and we have known for a long time that representations — whether they be probabilities, frequencies or odds — have an impact on the human mind. Yet there are very few theories about how this works.

I'll give you a couple examples relating to medical care. In the U.S. and many European countries, women who are 40 years old are told to participate in mammography screening. Say that a woman takes her first mammogram and it comes out positive. She might ask the physician, "What does that mean? Do I have breast cancer? Or are my chances of having it 99%, 95%, 90%, or only 50%? What do we know at this point?" I have put the same question to radiologists who have done mammography screening for 20 or 25 years, including chiefs of departments. A third said they would tell this woman that, given a positive mammogram, her chance of having breast cancer is 90%.

However, what happens when they get additional relevant information? The chance that a woman in this age group has cancer is roughly 1%. If a woman has breast cancer, the probability that she will test positive on a mammogram is 90%. If a woman does not have breast cancer, the probability that she nevertheless tests positive is some 9%. In technical terms you have a base rate of 1%, a sensitivity or hit rate of 90%, and a false positive rate of about 9%. So, how do you answer this woman who's just tested positive? As I just said, about a third of the physicians think it's 90%, another third think the answer is somewhere between 50% and 80%, and another third think it's between 1% and 10%. Again, these are professionals with many years of experience. It's hard to imagine a larger variability in physicians' judgments — between 1% and 90% — and if patients knew about this variability, they would not be very happy. This situation is typical of what we know from laboratory experiments: namely, that when people encounter probabilities — which are technically conditional probabilities — their minds are clouded when they try to make an inference.

What we do is to teach these physicians tools that change the representation so that they can see through the problem. We don't send them to a statistics course, since they wouldn't have the time to go in the first place, and most likely they wouldn't understand it because they would be taught probabilities again. But how can we help them to understand the situation?

Let's change the representation using natural frequencies, as if the physician had observed these patients him- or herself. One can communicate the same information in the following, much simpler way. Think of 100 women. One of them has breast cancer; that was the 1%. She will likely test positive; that's the 90%. Of the 99 who do not have breast cancer, another 9 or 10 will nevertheless test positive. So about 10 or 11 women test positive in all. How many of them actually have cancer? One out of ten. That's not 90%, that's not 50%, that's one out of ten.
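
For readers who want to check the arithmetic, here is a sketch that computes the same answer both ways: Bayes' rule on the conditional probabilities, and the natural-frequency count. The figures are the rough numbers from the text:

```python
# Two routes to the same answer for the mammography numbers above.
base_rate = 0.01     # P(cancer) for women in this age group
sensitivity = 0.90   # P(positive | cancer)
false_pos = 0.09     # P(positive | no cancer)

# Route 1: Bayes' rule on conditional probabilities.
ppv = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_pos * (1 - base_rate)
)

# Route 2: natural frequencies -- think of 1,000 women so the counts
# come out nearly whole: 9 true positives, about 89 false positives.
with_cancer_pos = 1000 * base_rate * sensitivity
without_cancer_pos = 1000 * (1 - base_rate) * false_pos
ppv_freq = with_cancer_pos / (with_cancer_pos + without_cancer_pos)

print(round(ppv, 2), round(ppv_freq, 2))  # 0.09 0.09 -- about 1 in 10
```

Note that the frequency route is just the probability route with both numerator and denominator scaled by the population size, which is why the two answers agree exactly.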

Here we have a method that enables physicians to see through the fog just by changing the representation, turning their innumeracy into insight. Many of these physicians have carried this innumeracy around for decades and have tried to hide it. When we interview them, they readily admit it, saying, "I don't know what to do with these numbers. I always confuse these things." Here we have a chance to use very simple tools to help patients and physicians understand what the risks are and to react to them in a reasonable way. If you take the perspective of a patient — believing that the test means there is a 90% chance you have cancer — you can imagine what emotions set in, emotions that do not help her to reason the right way. But informing her that only one out of ten women who test positive actually has cancer would help her to keep a cooler head and to make more reasonable decisions.

Prostate cancer is another disease for which we have good data. In the U.S. and in European countries doctors advise men aged 40 to 50 to take a PSA test. This is a very simple prostate cancer test, requiring just a bit of blood, and so many people do it. The interesting thing is that most of the men I've talked to have no idea of the benefits and costs of this test. It's an example of decision-making based on trusting your doctor or on rumors. But interestingly, if you read about the test on the Internet from independent sources such as the Cochrane Collaboration, or read the reports of the various physicians' agencies that issue screening recommendations, you find that the benefits and costs of prostate cancer screening are roughly the following: mortality reduction is the usual goal of medical testing, yet there is no proof that prostate cancer screening reduces mortality. On the other hand, there is proof that it is likely to do harm, both to men who do not have prostate cancer and to those who do. The test produces a number of false positives. If you take it often enough there's a good chance of getting a high level on the test, a so-called positive result, even though you don't have cancer. It's like a car alarm that goes off all the time.

For those who actually have cancer, surgery can result in incontinence or impotence, serious consequences that stay with you for the rest of your life. For that reason, the U.S. Preventive Services Task Force says very clearly in a report that men should not participate in PSA screening, because there is no proof of mortality reduction, only likely harm.

It is very puzzling that in a country where a 12-year-old knows baseball statistics, adults don't know the simplest statistics about tests, diseases, and consequences that may cause them serious damage. Why is this? One reason, of course, is that the cost-benefit computation for doctors is not the same as for patients. One cannot simply accuse doctors of not knowing these things or of not caring about patients, but a doctor has to face the possibility that if he or she doesn't advise someone to take the PSA test and that person gets prostate cancer, the patient may turn up at his doorstep with a lawyer. The second thing is that doctors are members of a community with professional pride, and for many of them failing to detect a cancer is something they don't want on their records. Third, there are groups of doctors who have very clear financial incentives to perform certain procedures. A good doctor would explain this situation to a patient but leave the decision to the patient. Many patients don't see the situation in which doctors find themselves, however, and most doctors will simply recommend the test.

But who knows? Autopsy studies show that one out of three or one out of four men who die a natural death have prostate cancer. Everyone has some cancer cells. If everyone underwent PSA testing and cancer were detected, then these poor guys would spend the last years or decades of their lives living with severe bodily injury. These are very simple facts.

Thus, dealing with probabilities also relates to the issue of understanding the psychology of how we make rational decisions. According to decision theory, rational decisions are made according to the so-called expected utility calculus, or some variant thereof. In economics, for instance, the idea is that if you make an important decision — whom to marry or what stock to buy, for example — you look at all the consequences of each decision, attach a probability to these consequences, attach a value, and sum them up, choosing the optimal, highest expected value or expected utility. This theory, which is very widespread, maintains that people behave in this way when they make their decisions. The problem is that we know from experimental studies that people don't behave this way.

There is a nice story that illustrates the whole conflict: A famous decision theorist who once taught at Columbia got an offer from a rival university and was struggling with the question of whether to stay where he was or accept the new post. His friend, a philosopher, took him aside and said, "What's the problem? Just do what you write about and what you teach your students. Maximize your expected utility." The decision theorist, exasperated, responded, "Come on, get serious!"

Decisions can often be modeled by what I call fast and frugal heuristics. Sometimes they're faster, and sometimes they're more frugal. Deciding which of two jobs to take, for instance, may involve consequences that are incommensurable from the point of view of the person making the decision. The new job may give you more money and prestige, but it might leave your children in tears, since they don't want to move for fear of losing their friends. Some economists may believe that you can bring everything down to a single common denominator, but many people can't do this. A person may end up making the decision for one dominant reason.

We make decisions based on bounded rationality, not the unbounded rationality of a decision maker modeled after an omniscient god. But bounded rationality is also not of one kind. There is a group of economists, for example, who look at the bounds or constraints in the environment that affect how a decision is made. This study is called "optimization under constraints," and many Nobel prizes have been awarded in this area. Using the concept of bounded rationality from this perspective, you realize that an organism has neither unlimited resources nor unlimited time. So one asks: given these constraints, what's the optimal solution?

There's a second group, which doesn't look at bounds in the environment but at bounds in the mind. These include many psychologists and behavioral economists who find that people often take in only limited information, and sometimes make decisions based on just one or two criteria. But these colleagues don't analyze the environmental influences on the task. They think that for a priori reasons people make bad choices because of a bias, an error, or a fallacy. They look at constraints in the mind.

Neither of these concepts takes advantage of what the human mind takes advantage of: that the bounds in the mind are not unrelated to the bounds in the environment. The two fit together. Herbert Simon offered a wonderful analogy of a pair of scissors, where one blade is cognition and the other is the structure of the environment, or the task. You only understand how human behavior functions if you look at both blades.

Evolutionary thinking gives us a useful framework for asking some interesting questions that are not often posed. For instance, when I look at a certain heuristic — like when people make a decision based on one good reason while ignoring all others — I must ask in what environmental structures that heuristic works, and where it does not work. This is a question about ecological rationality, about the adaptation of heuristics, and it is very different from what we see in the study of cognitive illusions in social psychology and in judgment and decision-making research, where any kind of behavior suggesting that people ignore information, or use just one or two pieces of information, is coded as a bias. That approach is non-ecological; that is, it doesn't relate the mind to its environment.

An important future direction in cognitive science is to understand that human minds are embedded in an environment. This is not the usual way that many psychologists, and of course many economists, think about it. There are many psychological theories about what's in the mind, and there may be all kinds of computations and motives in the mind, but there's very little ecological thinking about what certain cognitive strategies or emotions do for us, and what problems they solve. One of the visions I have is to understand not only how cognitive heuristics work, and in which environments it is smart to use them, but also what role emotions play in our judgment. We have gone through a kind of liberation in recent years. There are many books, by Antonio Damasio and others, that make the general claim that emotions are important for cognitive functions, and are not just there to interrupt, distract, or mislead you. Actually, emotions can do certain things that cognitive strategies can't do, but we have very little understanding of exactly how they work.

To give a simple example, imagine Homo economicus in mate search, trying to find a woman to marry. According to standard theory Homo economicus would have to find out all the possible options and all the possible consequences of marrying each one of them. He would also look at the probabilities of various consequences of marrying each of them — whether the woman would still talk to him after they're married, whether she'd take care of their children, whatever is important to him — and the utilities of each of these. Homo economicus would have to do tons of research to avoid just coming up with subjective probabilities, and after many years of research he'd probably find out that his final choice had already married another person who didn't do these computations, and actually just fell in love with her.

Herbert Simon's idea of satisficing solves that problem. A satisficer searching for a mate has an aspiration level. As long as that aspiration level is not set too high, he will find a partner who meets it, and the problem is solved. But satisficing is a purely cognitive mechanism. After you make your choice you might see someone come around the corner who looks better, and there's nothing to prevent you from dropping your wife or your husband and going off with the next one.
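
Simon's aspiration-level rule described above can be sketched in a few lines; the numeric "options" and the threshold are purely illustrative:

```python
# A bare-bones satisficer: scan options in the order they arrive and take
# the first one that meets the aspiration level. No option is ever
# compared against all the others, so later (possibly better) options
# are never even examined.
def satisfice(options, aspiration):
    for option in options:
        if option >= aspiration:
            return option
    return None  # nothing met the aspiration level

print(satisfice([3, 7, 5, 9], aspiration=6))  # 7 -- the later 9 is never seen
```

The design choice is exactly the one the text describes: the rule trades optimality for a guaranteed stopping point, which is why commitment has to come from somewhere else, such as emotion.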

Here we see one function of emotions. Love, whether romantic love or love for our children, helps most of us create the commitment necessary to stay with and take care of our spouses and families. Emotions can perform functions similar to those performed by the cognitive building blocks of heuristics. Disgust, for example, keeps you from eating lots of things and makes food choice much simpler, and other emotions do similar things. Still, we have very little understanding of how decision theory links with the theory of emotion, and of how to develop a good vocabulary of the building blocks necessary for making decisions. This is an important direction for future investigation.

Another simple example of how heuristics are useful can be seen in the following thought experiment: Assume you want to study how players catch balls that come in from a high angle — as in baseball, cricket, or soccer — because you want to build a robot that can catch them. The traditional approach, which is much like optimization under constraints, would be to try to give your robot a complete representation of its environment and the most expensive computing machinery you can afford. You might feed your robot a family of parabolas, because thrown balls have parabolic trajectories, with the idea that the robot needs to find the right parabola in order to catch the ball. Or you feed it measurement instruments that can measure the initial distance, the initial velocity, and the initial angle at which the ball was thrown or kicked. You're still not done, because in the real world balls do not fly in parabolas, so you also need instruments that can measure the direction and speed of the wind at each point of the ball's flight in order to calculate its final trajectory and spin. It's a very hard problem, but this is one way to look at it.

A very different way to approach this is to ask whether there is a heuristic that a player could actually use to solve this problem without making any of these calculations, or only very few. Experimental studies have shown that actual players use a quite simple heuristic that I call the gaze heuristic. When a ball comes in high, a player starts running and fixates his eyes on the ball. The heuristic is to adjust your running speed so that the angle of gaze — the angle between the eye and the ball — remains constant. If you keep the angle constant, the ball will come down to you; it will catch you, or at least it will hit you. This heuristic pays attention to only one variable, the angle of gaze. It can ignore all the other causally relevant variables and achieve the same goal much faster, more frugally, and with fewer chances of error.
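
A small geometric sketch, with made-up velocities and an idealized drag-free parabola, shows why keeping the gaze angle constant works: if the fielder always stands where his gaze angle to the ball equals a fixed angle alpha, his position is x(t) - y(t)/tan(alpha), and when the ball lands (y = 0) that position is exactly the landing point:

```python
import math

# Illustrative numbers only: a ball launched with velocity (vx, vy).
# The fielder never computes the trajectory; we just trace where a
# constant gaze angle alpha would place him at each moment.
g = 9.8
vx, vy = 12.0, 20.0
alpha = math.radians(60)          # any constant gaze angle will do

t_land = 2 * vy / g               # time until the ball returns to the ground
path = []
for step in range(11):            # sample the fielder's implied path
    t = t_land * step / 10
    x = vx * t
    y = vy * t - 0.5 * g * t * t
    path.append(x - y / math.tan(alpha))  # position with gaze angle alpha

landing_x = vx * t_land
print(round(path[-1], 2), round(landing_x, 2))  # both 48.98: they coincide
```

Note how the choice of alpha changes the path the fielder runs but not where he ends up, which matches the observation that players need only hold the angle constant, not pick a particular one.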

This illustrates that we can do the science of cognition by always looking at what the mind does — the heuristics and the structures of environments — and at how minds change the structures of environments. In this case the relationship between the ball and oneself is turned into a simple linear relationship on which the player acts. This is an example of a smart heuristic, part of the adaptive toolbox that has evolved in humans. Many of these heuristics are also present in animals. For instance, a recent study showed that dogs catching frisbees use the same gaze heuristic.

Heuristics are also useful in very important practical ways relating to economics. To illustrate, I'll tell a short story about our research on a heuristic concerning the stock market. One very smart and simple heuristic is called the recognition heuristic. Here is a demonstration: Which of the following two cities has more inhabitants — Hanover or Bielefeld? I pick these two German cities assuming that you don't know very much about Germany. Most people will say Hanover, because they have never heard of Bielefeld, and they're right. However, if I pose the same question to Germans, they are unsure and don't know which to choose. They've heard of both cities and try to recall information about them. The same thing can be done in reverse. In studies with Daniel Gray Goldstein we asked Americans which city has more inhabitants — San Diego or San Antonio? About two-thirds of my former undergraduates at the University of Chicago got the right answer: San Diego. Then we asked German students — who know much less about San Diego and many of whom had never even heard of San Antonio — the same question. What proportion of the German students do you think got the answer right? In our study, a hundred percent. They hadn't heard of San Antonio, so they picked San Diego. This is an interesting case of a smart heuristic, where people with less knowledge can do better than people with more. The reason it works is that in the real world there is a correlation between name recognition and things like population size. You have heard of a city because something is happening there. Recognition is not a certain indicator, but it is a good cue.
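
The decision rule itself is tiny. Here is a minimal sketch; the knowledge set for the hypothetical German student is just the example from the text:

```python
# The recognition heuristic: if exactly one of two objects is recognized,
# infer that it has the higher criterion value (e.g. population).
# If both or neither are recognized, the heuristic does not apply and
# the decision maker must fall back on other knowledge or guess.
def recognition_heuristic(a, b, recognized):
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # heuristic not applicable

german_student_knows = {"San Diego"}  # has never heard of San Antonio
print(recognition_heuristic("San Diego", "San Antonio", german_student_knows))
# San Diego -- less knowledge, but the right answer
```

This also shows why the fully informed Chicago students did worse: for them both names are recognized, the rule returns no answer, and they are thrown back on fallible recall.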

In my group at the Max Planck Institute for Human Development I work alongside a spectrum of researchers, several of whom are economists, who work on the same topics but ask a different kind of question. They say, "That's all fine that you can demonstrate that you can get away with less knowledge, but can the recognition heuristic make money?" In order to answer this question we did a large study with the American and German stock markets, involving both lay people and students of business and finance in both countries. We went to downtown Chicago and interviewed several hundred pedestrians. We gave them a list of stocks and asked them one question: Have you ever heard of this stock, yes or no? Then we took the ten percent of the stocks with the highest recognition — all of them stocks in the Standard & Poor's index — put them in a portfolio, and let it run for half a year. As a control, we did the same thing with the same American pedestrians using German stocks; in this case they had heard of very few of them. As a third control we had German pedestrians in downtown Munich perform the same recognition ratings for German and American stocks. The question in this experiment is not how much money the portfolio makes, but whether it makes more money than certain standards, of which we had four. One consisted of randomly picked stocks, which is a tough standard. A second contained the least-recognized stocks, which according to the theory is an important control and should not do as well. The third was blue chip funds, like Fidelity II. And the last was the market itself — the Dow and its German equivalent. After six months, the portfolios containing the stocks most highly recognized by ordinary people had outperformed the randomly picked stocks, the low-recognition stocks, and, in six out of eight cases, the market and the mutual funds.

Although this was an interesting study, one should of course be cautious, because unlike in other experimental and real-world studies, we are dealing with a variable and very random environment. But what this study at least showed is that the recognition of ordinary citizens can actually beat the performance of the market and other important benchmarks. The empirical background, of course, is consumer behavior. In many situations, when people in a supermarket choose between products they go with the item whose name they recognize. Advertising by companies like Benetton exploits the recognition heuristic: it gives us no information about the product, but only increases name recognition. It has been a very successful strategy for the firm.

Of course the reaction to this study, which is published in our book Simple Heuristics That Make Us Smart, has split the experts into two camps. One group said it can't be true, that it's all wrong, or that it could never be replicated. Among them were financial advisers, who certainly didn't like the results. Another group said, "This is no surprise. I knew it all along. The stock market is all rumor, recognition, and psychology." Meanwhile, we have replicated these studies several times and found the same advantage of recognition — in bull and bear markets — and also found that recognition among those who knew less did best of all in our studies.

I would like to share these ideas with many others: to use psychological research, and what we know about facilitating people's understanding of uncertainty, to help promote the old dream of an educated citizenship that can deal with uncertainties rather than denying their existence. Understanding the mind as a tool that tries to live in an uncertain world is an important challenge.

The Edge Annual Science Dinner [2.27.03]

[click here for slide show — or — click on individual thumbnail photos for full-size image]

John Brockman
Kelly Bovino

Sergey Brin, Google; Ronna Tannenbaum, Alexa; Mackenzie & Jeff Bezos, Amazon
Sarah Kellen
Max Brockman

On February 27, 2003, Edge Foundation, Inc. celebrated the 6th anniversary of Edge at the "Edge Annual Science Dinner" (formerly known as "The Billionaires' Dinner") at Cibo's Restaurant, in Monterey, California.


Marvin Minsky
Sunny Bates

Chee Pearlman, N.Y. Times
Rodney Brooks

Among the world-class scientists attending the dinner (who are also Edge contributors) were biologist Jared Diamond (Guns, Germs, and Steel); psychologist Steven Pinker (The Blank Slate); cognitive scientist Daniel C. Dennett (Darwin's Dangerous Idea); computer scientists Marvin Minsky (The Society of Mind), Rodney Brooks (Flesh and Machines), and W. Daniel Hillis (The Pattern on the Stone); physicists Freeman Dyson (Disturbing the Universe) and Lee Smolin (The Life of the Cosmos); and atmospheric scientist Stephen Schneider (Laboratory Earth).

Katie Hafner, NY Times
George Dyson
Terry Root
Stephen Schneider

Frank Sulloway
Katie Hafner, NY Times

Dina Graser
Lee Smolin
Patti Hillis
Stewart Brand, Long Now
Mitch Kapor

Megan Smith, Planet Out
Dean Kamen, Deka
Linda Stone

Eric Schmidt, Google
Kelly Bovino
Jaron Lanier
Marney Morris, Anamatrix
Jeff Bezos, Amazon
Steffi Czerny, Burda Media

Marco Zanini

Gloria Rudisch Minsky

Cyndi Stivers, Time Out-NY
Dan Adler
Chris Anderson, Wired
Tom Reilly, Planet Out
Ilavenil Subbiah

Paul MacCready
Larry Page, Google
Doug Rowan

Jean Pigozzi
Olga of Greece
Don Norman
Maryam Mohit, Amazon
Ramana Rao
Steve Riggio, B&N

Steve Petranek, Discover

Freada Kapor
John Markoff, NY Times
Susan Dawson, Sapling Foundation
Larry Page, Google


Adam Bly, Seed
Jane Metcalfe, Forca

Ryan Phelan, All Species
Stewart Brand, Long Now
Kim Polese, Marimba

Nick Wingfield, WSJournal
Dana Ardi, J.P. Morgan

Ronna Tannenbaum, Alexa
David Bank, WSJournal
Megan Smith, Planet Out

Eric Schmidt, Google
J.P. Schmetz, Burda Digital
Max Brockman
Sarah Kellen
Matt Jacobson, Quiksilver
Danny Kwock, Quiksilver
Beth Ferren, Fortune
Chris Taylor, Time

Jeff Bezos, Amazon
Marney Morris, Anamatrix

Sergey Brin, Google
Brad Stone, Newsweek
Susan Dawson, Sapling Foundation
Sergey Brin, Google
Chris Anderson, TED

Jean Pigozzi
Olga of Greece
Louis Rossetto, Forca

Also present: Pam Alexander, Alexander Ogilvy; Garry Betty, Earthlink; Paul Bricault, William Morris; John Doerr, Kleiner Perkins; Daniel Greenfield, Earthlink; Alan Kahn, Barnes & Noble; Vinod Khosla, Kleiner Perkins; Dennis Kneale, Forbes; Walter Mossberg, Wall Street Journal; Yossi Vardi, ICQ.


Nicholas of Cusa, who lived from 1401 to 1464, was one of the first who tried to break out of the geocentric, anthropocentric, finite, and hierarchically sequenced world of antiquity, a world bounded by the walls of the heavenly spheres. He glimpsed the dizzying potential of space and entertained a very different universe: open, unbounded, without natural subordination of any one part to any other, filled with identical laws and with essentially interchangeable components. Technically, his step is called the "infinitization of the cosmos," an idea so new then that it was ignored by Nicholas of Cusa's contemporary, Copernicus, who thought the world was contained within a sphere of about 20,000 earth radii.

Gerald Holton

What Lies Behind our Desire to Venture into Space?

The leap into Space witnessed in our time, with its triumphs and tragedies, will remain part of the permanent memory of mankind, alongside the historic memory of the great journeys of adventure and discovery that formerly found expression in epic form. People in the distant future may, in their own way and using their own media, be describing our attempts to transcend our physical dependence on the earth somewhat as we still are singing Homer's song to relive the voyage of Odysseus beyond the boundaries of the ancient world.

I venture two brief speculations. The first is that, in retrospect, the exploration of the solar system, and beyond, by means of earth-launched physical instruments, was prepared for by a series of equally daring, mental launchings into space. Science and space have in fact been Siamese twins from the start: Space has been the foremost laboratory of the scientific imagination—from the pre-Socratics who toyed with the question of the limits of space, to Aristotle and his followers for whom the cosmos was not only finite but relatively small, to Kepler who could envisage something like the law of conservation of momentum by thinking about mutually attracting and colliding bodies in far-distant space, to Galileo for whom space was not yet Euclidean but warped, and on to Newton and the modern period.

In a sense, the space age really started not with Sputnik I, but with those early explorers of the mind's own space, who, launching their imaginative conceptions, prepared the ground for launching our hardware. There are several candidates for the designation of father of the space age. My own preference is a philosopher, mathematician, cosmologist, and cardinal of the church, Nicholas of Cusa. (Appropriately, a crater on the moon has been named after him.) A good description of his work is in Alexandre Koyré's great book, From the Closed World to the Infinite Universe. Nicholas of Cusa, who lived from 1401 to 1464, was one of the first who tried to break out of the geocentric, anthropocentric, finite, and hierarchically sequenced world of antiquity, a world bounded by the walls of the heavenly spheres. He glimpsed the dizzying potential of space and entertained a very different universe: open, unbounded, without natural subordination of any one part to any other, filled with identical laws and with essentially interchangeable components. Technically, his step is called the "infinitization of the cosmos," an idea so new then that it was ignored by Nicholas of Cusa's contemporary, Copernicus, who thought the world was contained within a sphere of about 20,000 earth radii.

But Nicholas of Cusa saw the consequences of his vision: In an immeasurable universe, where there is no limiting point or center, all motion is relative, and the earth and all other bodies may be considered in motion. The earth then joins the ranks of the noble stars. He even imagined that the stars may also be endowed with life forms. Most of his readers recoiled in horror and vertigo, except Giordano Bruno, who embraced these ideas, and who, by being burned at the stake in 1600 for such heresies, became (so to speak) the first space casualty. Thereafter, however, Nicholas' ideas became more and more influential.

Nicholas of Cusa was a prominent person, but we know all too little about him. Though we happily have his book with the modest title On Learned Ignorance, which I like to think started the space age some 560 years ago, the reputedly most adventurous of his scientific-philosophical writings have been lost to history.

My first speculation, looking back, brings me to the second, looking forward. Who, in the long run, will tell our story? Who will be the future Homers to sing of our time, and where will they get their information? Will the future students of our attempts at exploring have reliable information, more reliable than we have about our predecessors? Who is now concerned with preparing accounts that can withstand the scrutiny of the ages to come? Who is saving the database, the less obvious documentation of successes and failures? Who is conducting the oral history of the pioneers? Are there interviewers able to handle the science, the technology, and the industrial and administrative components of modern space achievements?

There are a few who can: historians of science and technology. On the members of that fairly young profession we shall have to rely for the preservation of the record, and for the assessment and authentication of what has been happening during this early, heroic period. We are lucky that such people, in the United States, in the U.K., and in Europe, dedicate their lives to such scholarship. But altogether these professionals are few, and their support and the infrastructure of their professional societies are now under severe constraints. Let us not be chided by future historians for neglecting our opportunities to preserve the full record, as we ourselves might blame Nicholas of Cusa's contemporaries for not having preserved more of his pioneering thoughts.

GERALD HOLTON is Mallinckrodt Professor of Physics and Professor of the History of Science, Emeritus, at Harvard University. His recent books include Thematic Origins of Scientific Thought and Science and Anti-Science.


There's a simple story that sums up the perils of global terrorism. "Once there were two people sitting in a rowboat. One suddenly started making a hole on his side of the boat. The other screamed. The first countered and said, 'What do you care what I do on my side of the boat?'"

Todd Siler

In your search for a new Science Advisor, I strongly recommend that you select an individual who has as much common sense as he or she has accomplishments in the sciences. Equally important, this open-minded advisor needs to approach our world of interrelated problems with a systems view of things, which is something compartmentalized thinkers struggle with conceptually. This systems view is essential for effectively dealing with the web of gnarly problems that entangle nations and strain international relations.

In reviewing the list of challenging scientific issues that need your immediate attention, few strike me as being as important as fighting the war on terror. But fighting it to win in both the short and long run. As the world wrestles with how to best respond to terrorism in the wake of September 11—and as our nation grapples with the lethal threats of tyrants and their irrational actions—your advisory board needs to be as agile and open to the possibilities of a "chance discovery" as an inventor on the verge of a major breakthrough.

Sparking breakthrough thinking and accelerating innovation are two of my specialties and passions. If I were fortunate enough to serve as a member of your ad hoc committee on terrorism, I would suggest taking the following course of action:

I'd help organize a maverick group of professional thinkers (scientists, engineers, artists, educators, scholars, policy-makers, and polymaths), and invite them to delve into a pool of obvious and deep questions concerning national security.

I'd compare this exploratory work to the adventurous endeavors undertaken by the American military strategist and futurist Herman Kahn, founder of the Hudson Institute think tank and author of On Thermonuclear War. Ideally, I would hope to see the creative energies invested here parallel those of other intensely focused science-technology-civil society-oriented projects of the past; imagine a sort of Manhattan Project for Peaceful Solutions, or a small-scale Pugwash Conference (without a formal conference, whose structure can inhibit the free exchange of ideas). Our group would scope out a long-term strategic vision for securing our nation and safeguarding the world from the projected charges and potential damage of "rogue elephants."

Note that we would engage in this collaborative envisioning activity using some unconventional, yet proven, techniques of communication that involve symbolic modeling. One outcome of this work would be a set of tactical implementation plans. These practical plans could then be evaluated and contrasted with the research-based recommendations of groups such as the Rand Corporation, among other solution providers.

They could also be run through the mill of Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis, a business practice I'm quite familiar with, having facilitated many strategic planning sessions for executive officers of Fortune 500 companies.

The types of open-ended questions this ad hoc group might consider responding to could include the following:

• How do we foresee the forces of terrorism growing during this decade? How will this growth impact our collective future? Describe these forces.

• What are some non-military solutions for stemming further acts of aggression against our principles and practices of democracy?

• How will this constant presence of terrorism profoundly affect the American way of life as well as our dreams for improving the state of the world?

• How will terrorism affect our advanced warfare programs and defense policies?

• Is there any way to avoid the seemingly inevitable buildup of weapons of mass destruction in our defense without attacking civil liberties and basic human rights for all?

• How can our world community do a better job of policing renegade groups of people and organizations whose raison d'être seems to be to spread anarchy and other forms of social unrest?

• Other than U.N. disarmament resolutions, what additional agreements would we need to have securely in place in order to begin to abolish chemical and biological warfare (CBW) programs? Specifically, how would abolishing these programs help our prospects for a lasting peace?

• What benefits will the next generation of nuclear, chemical and biological weapons "offer" us which an improvement in human communication cannot?

These are merely a handful of basic questions that come to mind at the moment. Any one of them could be explored by this group of thinkers using the tools of science and common sense to solve this gravest of problems: fighting a war on terror that doesn't perpetuate the cycle of violence but rather prevents it by fostering a new understanding. The main task of this group would be to find more ingenious ways of dismantling this Gordian Knot of political, ideological and religious beliefs other than reaching into that old Pandora's Box and taking out another weapon to whack away at our worst primal fears.

Clearly we have much more scientific work to do to better understand the nature of fear and terror, and to recognize the patterns of ineffective responses to these phenomena. Whenever our brute fears overpower our rationality, trouble abounds.

Finally, we need to explore our deepest, most ambiguous questions about the roots of terrorism that have as much to do with science as they do with philosophy and religion. Naturally, your new Science Advisor needs to handle this reality with the utmost sensitivity. And the advisory board needs to value the fact that there's always more than one viable solution to any given problem, when viewed from many perspectives. Without this broader and deeper exploration, our world may remain pinned and pained by the headlock we're in.

There's a simple story that sums up the perils of global terrorism. "Once there were two people sitting in a rowboat. One suddenly started making a hole on his side of the boat. The other screamed. The first countered and said, 'What do you care what I do on my side of the boat?'" I thank you for caring about the hole in our boat. Now you need to get the rest of the world on board about caring too.

Todd Siler
Founder & Director, Psi-Phi Communications, LLC.
Former advisory board member of the Council on Art, Science, Technology at M.I.T.
Author of Think Like A Genius and Breaking the Mind Barrier

John Brockman, Editor and Publisher
Russell Weinberger, Associate Publisher
contact: [email protected]
Copyright © 2003 by
Edge Foundation, Inc
All Rights Reserved.