Question Center

Edge 319 — May 27, 2010
11,500 words


A Talk with Emanuel Derman



Comments by Rodney Brooks, PZ Myers, Richard Dawkins, George Church, Nassim N. Taleb, Daniel C. Dennett, Dimitar Sasselov, Antony Hegarty


NEWS.CHINA.COM.CN, the New York Times, the Observer


A Conversation with Emanuel Derman

Watching that interrogation of the bankers at the Senate hearings, I had the feeling that this is the way karma works in the universe. Everybody is going to do something not quite right as they act out their destiny mechanically, doing what they unthinkingly believe they have to do. The Wall Street people are going to reflexively overshoot and be too greedy. The Senate people are going to reflexively grandstand and be too uninformed and try to rein them in. There isn't going to be an elegant solution to any of this.

EMANUEL DERMAN, a former managing director and head of the Quantitative Strategies Group at Goldman Sachs & Co, is a professor in Columbia University's Industrial Engineering and Operations Research Department, as well as a partner at Prisma Capital Partners. He is the author of My Life As A Quant.


[...continued below]

"I feel sure of only one conclusion. The ability to design and create new forms of life marks a turning-point in the history of our species and our planet." — Freeman Dyson


Rodney Brooks, PZ Myers, Richard Dawkins, George Church, Nassim N. Taleb, Daniel C. Dennett, Dimitar Sasselov, Antony Hegarty

Singer-Songwriter, Composer, and Visual Artist; Lead Singer, Antony and the Johnsons.

Seven generation sustainability is an ecological concept that urges the current generation of humans to live sustainably and work for the benefit of the seventh generation into the future.

"In every deliberation, we must consider the impact on the seventh generation..." — Great Law of the Iroquois

The Seventh Generation originated with the Iroquois when they thought it was appropriate to think seven generations ahead (a couple hundred years into the future) and decide whether the decisions they make today would benefit their children seven generations into the future. (Lyons O, An Iroquois Perspective.)

Professor of Astronomy, Harvard University; Director, Harvard Origins of Life Initiative

Venter's experiment is a tour-de-force with many implications. The DNA of the synthetic cell contains segments — watermarks, one of which bears the words of the famous physicist Richard Feynman "What I cannot build, I cannot understand".

While astronomers might beg to differ (they understand our Sun very well without having built it), the big news in M. mycoides JCVI-syn1.0 is that we can build it. This act re-defines life as we know it, and tells us something about the future of a universe that took 13.7 billion years to build its own blocks and tools for life. That could be our own future.

To find out we must look at the stars — is there life on other planets? To succeed in that search we must understand life, and to understand life, we must build it. The first steps have been taken.

Philosopher; University Professor, Co-Director, Center for Cognitive Studies, Tufts University; Breaking the Spell

The achievement of Craig Venter and his team is certainly a major milestone in technology, and his forecast of the stupendous benefits that may be reaped is, if anything, understated. Now we need to ask how this new technology should be regulated. There is no doubt at all that self-replicating bacteria (and other microbes) with artificial genomes could do more harm than good if they escaped our control. They will not just replicate but evolve, mutating swiftly unless we take special steps in advance to prevent this from happening, and even then there will be a risk — not large, but not ignorable — of seeing our preventive efforts, whatever they are, being undone by mutation. Evolution is as unrelenting as gravity, an omnipresent prevailing wind untying the knots, unlocking the doors, seeking out every escape route as assiduously as the inmates of a prison.

At first glance, the problems seem straightforward and not insuperable. We need to apply the lessons learned in other novel technologies. Nuclear reactors are equipped with "fail safe" systems of considerable ingenuity and reliability (nothing is perfect). Like air brakes (in which the default position is ON, the pressure being provided by powerful springs, held in check by air pressure), the default position of the control rods in nuclear reactors is IN, maintained by gravity unless held OUT by positive forces.  We should equip all artificial life forms with similarly designed default DIE WITHOUT OFFSPRING mechanisms held in check by some positive contribution we can swiftly remove when we need to put the brakes on.  If artificially designed life forms can be kept exquisitely vulnerable, doomed to immediate extinction unless they get their supply of X, and we control the supply of X, we can keep them on a short leash (and if we, their controllers, get distracted or disabled in any way, they die). 

This is just one obvious step we must take, and it is probably not all that hard to achieve. Unlike the disease organisms and viruses that are proving so adept at evading our efforts to suppress them biochemically, laboratory-created life forms will not be cryptic but close to transparent: we will know a lot about them from the inside out, and all the troubles we have overcome in learning how to keep them alive will give us lots of insight into just what they need to stay alive.      

But of course such a "fail safe" system is not itself foolproof. We will want to have further provisions in force, and probably, as with nuclear materials, the main problem confronting us will be the possible roles of deliberate human sabotage, or just irresponsible human curiosity. With a technology of such power, the temptation to explore its powers informally will be ubiquitous. And here the parallel with the safeguards of nuclear technology is misleading; a more ominous parallel is with cyber-technology. Fortunately for us all, enriching fissionable material is still, more than sixty years after it was first done, a very expensive, high-tech process, not something a hobbyist can do clandestinely in his basement. Devising state-of-the-art cyberattack weapons, in contrast, can be done by smart high school kids in their bedrooms, at almost no cost. The result is that we are shockingly vulnerable to anyone who sets out to develop a large-scale cyberattack. The arms race favors offense over defense by a huge margin: it is orders of magnitude cheaper and easier to develop cyberoffense than to defend against it.

Once the techniques honed by Venter and his team become widely known, will it be utterly beyond the capabilities and budgets of, say, well-trained biology majors to develop their own artificial life forms? That is not at all clear. What good will it do to have international agreements about the obligations of laboratories to equip their creations with default-apoptosis machinery if there are thousands of free-lancers engaging in bio-hacking? The price we will pay for this huge amplification of our technological prowess is probably an equal and opposite vulnerability. Welcome to the fast lane, humanity.

Distinguished Professor of Risk Engineering, NYU-Polytechnic; Author,
The Black Swan

If I understand this well, to the creationists, this should be an insult to God; but, further, to the evolutionist, this is certainly an insult to evolution. And to the risk manager/probabilist, like myself & my peers, this is an insult to human Prudence, the beginning of the mother-of-all exposure to Black Swans. Let me explain.

Evolution (in complex systems) proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous and repetitive small, near-harmless mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: concave interventions, i.e., the achievement of small certain gains through exposure to massive stochastic mistakes (coming from the natural incompleteness in our understanding of systems). Our record in understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of informational uncertainty (even more than markets), producing tail risks of unheard-of proportions.
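Taleb's convex/concave distinction can be made concrete with a toy simulation. This is an illustration of the payoff asymmetry only, not his actual model; the probabilities and payoff sizes below are invented for the example.

```python
import random

def simulate(payoff, trials=100_000, seed=42):
    """Draw many independent bets and return the list of outcomes."""
    rng = random.Random(seed)
    return [payoff(rng.random()) for _ in range(trials)]

# Convex exposure (evolutionary tinkering): many small, bounded losses,
# with rare large gains.
def convex(u):
    return 2000.0 if u < 0.001 else -1.0

# Concave exposure (top-down intervention): small certain gains,
# with rare catastrophic losses -- the mirror image.
def concave(u):
    return -2000.0 if u < 0.001 else 1.0

c1 = simulate(convex)
c2 = simulate(concave)
# The convex bet's worst single outcome is bounded at -1; the concave
# bet's worst outcome is the ruinous tail event.
print(min(c1), min(c2))
```

Both exposures look similar on average over a short run; the difference is entirely in the tails, which is exactly why, on Taleb's account, retrospective track records are so misleading about concave risks.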

I have an immense respect for Craig Venter, whom I consider one of the smartest men who ever breathed, but, giving fallible humans such powers is similar to giving a small child a bunch of explosives.

Professor, Harvard University, Director, Personal Genome Project

Knowing little about genome engineering, shall I meander into more ancient precedents?

What do we remember, the first illuminated manuscript or the Gutenberg printing press? The first car or the first affordable car (Ford's model-T)? The first computer or the first popular personal computer (from Woz & Jobs)? The first 121 Edison power stations delivering direct current in 1887 or the AC electric grid from Nikola Tesla?

Do we prefer the first DNA model, Pauling's triple helix, or Watson and Crick's double helix? The first atom bomb or the last? The first authors on PCR, Kleppe in 1971, Saiki in 1985, or the innovator who brought it to practice — Kary Mullis? The first human genome in 2004 for $3 billion or the first affordable ($1500) personal genome sequence in 2009?

Returning to the topic of genome engineering, are we looking for the first construction of a tiny genome (for $40 million) or a larger genome already cranking out green chemistry? Do we applaud the first rationale for engineering whole genomes ("Because it's there" — a la George Mallory, who died climbing Everest) — or seek a more compelling and nuanced articulation — "to make virus-resistant production strains, engineering standards, safety features, new bio-polymers, mirror-chemistries, and bring the extinct back to life"?

Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, The Greatest Show on Earth

Craig Venter's Brave New World

Craig Venter's artificial bacterium debuted almost simultaneously with Svante Pääbo's publication of the greater part of the Neanderthal genome. Put the two together and ask whether we could — or should — recreate a living, breathing Neanderthal. Of the technologies that would be required, the Venter team has provided proof of principle for an important component. Dolly was cloned from an entire diploid genome of an adult sheep's udder cell, dropped into an enucleated ovum. The Venter equivalent of Ian Wilmut's achievement would be to go to the library (or in this case the Internet), take down the book labelled 'Sheep Genome Project' (or rather download the data files), and synthesize a complete set of sheep chromosomes from four bottles of chemicals labelled A, T, C and G. The synthetic genome would then be dropped into an enucleated sheep cell, as per Dolly.

While they were about it, the team might improve on the genome of any one donor sheep by substituting, say, wool-growing genes from The Champion Merino Genome Project and hardiness genes from The Soay Genome Project. Maybe some code from the Goat Genome Project to broaden the creature's preferred diet, or from the Chamois Genome Project to give it a better head for heights? Perhaps even a Cut and Paste job from the Otter Genome Project, to give the über-sheep a taste for water sports.

We'd need to do something similar to re-grow a Neanderthal from Svante Pääbo's data. Or, later, a computed intermediate between the chimpanzee and human genomes to re-create the 6-million-year-old common ancestor.  And then, might a born-again Lucy split the difference again?

The technical difficulties would be formidable, but present progress suggests that they will be overcome. I leave the speciesist ethical difficulties on one side, except to note that ethical thinking, too, has a way of progressing as the decades go by.  There is the harder problem that Pääbo's Neanderthal sequence is only 60 percent complete, and 100 percent may be unattainable. Presumably the residue would be coloured in from the H. sapiens genome, and that could create technical problems as well as compromise the authenticity of the clone as a 'true' Neanderthal.

But Neanderthal bones are tens of thousands of years old.  Should we disinter Charles Darwin's bones from Westminster Abbey with the same insouciance as the Roman Catholic Church is now displaying toward the remains of his contemporary, Cardinal Newman? Might a new identical twin brother of the great naturalist ride shotgun to Craig Venter's future twin, on a round-the-world DNA-harvesting voyage? Could Darwin Junior be mathematically enhanced by a few judicious splicings from the Albert Einstein Genome Project? Or get a head-start in molecular genetics by strategic borrowing from the Francis Crick Genome Project? The Jeremy Bentham Genome Project might suffer utilitarian doubts over whether the taxidermic curiosity in the Entrance Hall of University College, London still contains any of his authentic remains.

Of course no steps were taken to preserve the DNA of any of these great men. Today's equivalents don't need to be cryogenically preserved for the Craig Venters of the future.  Nothing so messy or expensive. Give or take some epigenetic mark-ups, a simple computer disk is all it takes: just miles and miles of A, T, C, G. 

And the J Craig Venter Genome Project is already on line ...

Biologist, University of Minnesota; blogger, Pharyngula

I have to address one narrow point that is being discussed in the popular press and here on Edge: is Venter's technological tour de force a threat to humanity, another atom bomb in the hands of children?


There is a threat, but this isn't it. If you want to worry, think about the teeming swarms of viruses, bacteria, fungi, and parasites that all want to eat you, that are aided (as we are defended) by the powers of natural selection — we are a delectable feast, and nature will inevitably lead to opportunistic dining. That is a far, far bigger threat to Homo sapiens, since they are the product of a few billion years of evolutionary refinement, not a brief tinkering probe into creation.

Nature's constant attempts to kill us are often neglected in these kinds of discussions as a kind of omnipresent background noise. Technology sometimes seems more dangerous because it moves fast and creates novelty at an amazing pace, but again, Venter's technology isn't the big worry. It's much easier and much cheaper to take an existing, ecologically successful bug and splice in a few new genes than to create a whole new creature from scratch…and unlike the de novo synthesis of life, that's a technology that's almost within the reach of garage-bound bio-hackers, and is definitely within the capacity of many foreign and domestic institutions. Frankenstein bacteria are harmless compared to the possibilities of hijacking E. coli or a flu virus to nefarious ends.

The promise and the long-term peril of the ability to synthesize new life is that it will lead to deeper understanding of basic biology. That, to me, is the real potential here: the ability to experimentally reduce the chemistry of life to a minimum, and use it as a reductionist platform to tease apart the poorly understood substrates of life. It's a poor strategy for building a bioweapon, but a great one for understanding how biochemistry and biology work. That is the grand hope that we believe will give humanity an edge in its ongoing struggle with a dangerous nature: that we can bring forethought and deliberate, directed opposition to our fellow organisms that bring harm to us, and assistance to those that benefit us. And we need greater knowledge to do that.

Of course more knowledge brings more power, and more possibility of catastrophe. But to worry over a development that is far less immediately dangerous than, say, site-directed mutagenesis, is to have misplaced priorities and to be basically recoiling from the progress of science. We either embrace the forward rush to greater knowledge, or we stand still and die. Alea iacta est; I look forward to decades of revolutionary new ideas and discoveries and technologies. May we have many more refinements of Venter's innovation, a flowering of novel life forms, and deeper analyses of the genome.

Panasonic Professor of Robotics (on leave); MIT Computer Science and Artificial Intelligence Lab; Author Flesh and Machines: How Robots Will Change Us

The work reported last week in Gibson et al. was certainly a technical tour de force. But it was not a scientific surprise in the way that Venter's decoding of the human genome using shotgun sequencing was a surprise — that just seemed too big a job for the combinatorics not to bog the process down. Nor was it as surprising as Venter's previous work where he and his team removed 100 out of 485 protein coding genes of what was already the shortest known genome of an organism capable of independent growth, and still the new genome supported continued growth and reproduction.

Though not a scientific surprise the new work seems to have awakened the press to certain realities that all molecular biologists have believed at their very cores for decades, but the fuss from both the press and ethicists does not follow logically from what has been achieved.

As the paper's title explicitly says, the team have built a line of cells, where the ancestor genome was chemically synthesized. The ancestor cells all started out "life" as cells of a different species, naturally produced. Their DNA was replaced by a string of just over a million base pairs of synthetically produced DNA. The cells then continued to reproduce and to faithfully copy that synthesized DNA.

So is this synthetic life? Yes, and no.

It is synthetic life in that the genome is synthetic. Besides being built from over a thousand separately constructed subpieces, the genome differs at 19 base pairs from the wild type. And then the researchers also substituted in four watermarks, containing codes of their names and an email address, using a total of 4,658 base pairs. But the fact that the genome works as a genome is not a surprise to molecular biologists. They have long believed that life is chemistry, and that one string of connected atoms is just as good as another having the same arrangement. They long ago discounted the idea that there is any sort of specialness imparted to a molecule by its history of production. Molecules have no souls.
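As an aside on how text can be carried in watermarks at all: each DNA base carries two bits of information, so a triplet of bases can index a 64-character alphabet. The sketch below uses a made-up mapping purely for illustration; the JCVI team's actual codon-to-character table is not reproduced here.

```python
# A made-up text-to-DNA watermark scheme: each character maps to a
# triplet of bases (4**3 = 64 codes, enough for a 64-character
# alphabet). Purely illustrative -- not the JCVI team's actual table.
BASES = "ACGT"

def text_to_dna(text):
    dna = []
    for ch in text.upper():
        code = ord(ch) - 32  # offset into printable ASCII; 0..63 assumed
        assert 0 <= code < 64, "toy scheme covers a 64-character alphabet"
        dna.append(BASES[code // 16] + BASES[(code // 4) % 4] + BASES[code % 4])
    return "".join(dna)

def dna_to_text(dna):
    chars = []
    for i in range(0, len(dna), 3):
        a, b, c = (BASES.index(base) for base in dna[i:i + 3])
        chars.append(chr(16 * a + 4 * b + c + 32))
    return "".join(chars)

watermark = text_to_dna("SYNTHETIC CELL")
assert dna_to_text(watermark) == "SYNTHETIC CELL"
print(len(watermark))  # 3 bases per character: 42
```

The point of the exercise is only that arbitrary text, names, and email addresses survive a round trip through the four-letter DNA alphabet, which is what makes the watermarks both readable and heritable.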

But the new cells are also not synthetic life in that the ancestor cell was an existing live cell. It was not built from pieces in the same way that the synthetic genome was built. That is another, perhaps harder technological challenge, but also one that there may be no imperative to try to achieve in the short term; hijacking existing cells may be all that we need to develop all sorts of new synthetic forms.

The press has both overplayed that what has been done is a surprise, and underplayed the interesting challenges that lie ahead, in that their biggest fears do not automatically follow from the current achievement.

Here are some next steps, which will require not only hard, creative technological work but also a few new scientific surprises:

• a viable synthetic genome which mixes and matches genes from many species
• a viable synthetic genome which includes genes which have been designed rather than copied from existing species
• a bacterial line where the RNAs that decode the genome are also synthetic, and which use a different encoding mapping base-pair triplets to amino acids
• a bacterial line that uses new and different amino acids for the construction of proteins
• a eukaryote line that uses a synthetic genome, and all of the above innovations

By then the ethicists will have something to worry about.

May 26, 2010

click here for Chinese language original


From: Qianjiang Evening News Comment
Jiang Jianping, compiling newspaper correspondent

Twenty American researchers at the J. Craig Venter Institute report that they synthesized a bacterial genome and implanted it in another bacterium. After several attempts, the synthetic DNA finally brought the recipient bacterium "back to life", and it began to reproduce in a laboratory culture dish. The researchers say this is the first cell controlled entirely by artificial genetic instructions, a key step toward artificial life forms.

The project leader, J. Craig Venter, named the "artificial life" "Synthia" (suggesting a "man-made child"). He said: "Synthia is the first cell synthesized from an artificial genome. Its parent is a computer, and it can replicate itself."

Many scientists gave the result a positive assessment, but there were also voices of concern. Some scholars pointed out that the achievement undermines basic beliefs about the nature of life, and that those beliefs are central to how we regard human beings and humanity's place in the universe.

Barack Obama also said that we now need to identify the appropriate technological and ethical boundaries for this kind of research, so as to keep its potential for harm to a minimum.

Artificial life has made Craig Venter, overnight, the world's most controversial figure. The study suggests that purpose-built bacteria might be engineered to perform useful functions, such as producing fossil-fuel substitutes or drugs.

The physicist Freeman Dyson, a scholar who has long followed the controversies such experiments provoke, captured the full range of academic sentiment in this dry appraisal: "This experiment is clumsy, tedious, unoriginal. From the point of view of aesthetic and intellectual elegance, it is a bad experiment. But it is nevertheless a big discovery… the ability to design and create new forms of life marks a turning point in the history of our species and our planet."

Steve Jones, professor of genetics at University College London, said: "It's very easy to mock Venter. When he first appeared, people just kind of sneered at him. But they stopped sneering when they saw his brilliance in realising that the genome was not a problem of chemistry but a problem of computer power. I don't think anybody can deny that that was a monumental achievement, and he has been doing fantastically interesting things subsequently with marine life."

Stewart Brand is an ecological visionary and the creator of The Whole Earth Catalog. He recognizes the importance of the experiment. Over the past few years he has gotten to know Venter through Edge: John Brockman convened a "master class", an effort to bring together the world's most ground-breaking minds, which presented Venter's work to an elite group of thinkers.

Brand believes that what sets Venter apart from many of his peers is that he is not only a distinguished biologist but also an outstanding organizational activist. ...

[Continue...Chinese Original...Google translation]

May 20, 2010

Nine Billion People. One Planet.

By Andrew C. Revkin

A remarkable paper published online today by the journal Science could — emphasis on could — signal the start of an energy revolution, and more generally a manufacturing revolution. By “start” I mean this could be akin to the first twitch of a runner’s leg as she positions herself for the opening pistol shot of a marathon, not a sprint. ...

...There’s a running string of reactions to the work at the Edge Web site (which also hosts Venter), including a provocative contribution from Freeman Dyson (no surprise there!):

This experiment, putting together a living bacterium from synthetic components, is clumsy, tedious, unoriginal. From the point of view of aesthetic and intellectual elegance, it is a bad experiment. But it is nevertheless a big discovery. It opens the way to the new world of synthetic biology. It proves that sequencing and synthesizing DNA give us all the tools we need to create new forms of life. After this, the tools will be improved and simplified, and synthesis of new creatures will become quicker and cheaper. Nobody can predict the new discoveries and surprises that the new technology will bring. I feel sure of only one conclusion. The ability to design and create new forms of life marks a turning-point in the history of our species and our planet.


Sunday, May 23, 2010

The Observer profile

A maverick, headline-grabbing biologist with an ego the size of a planet or a brilliant researcher who has succeeded in creating life? A bit of both, actually

By Tim Adams

...Stewart Brand, the ecological visionary and creator of the Whole Earth Catalog, is more persuaded. Brand has got to know Venter over the last couple of years through John Brockman's Edge initiative which brings together the world's pioneering minds. What differentiates Venter from many of his peers, Brand believes, is that he is not only a brilliant biologist, but also a brilliant organisational activist. "A lot of people can think big but Craig also has the ability to fund big: he doesn't wait for grants, he just gets on and finds a way to do these things. His great contribution will be to impress on people that we live in this vast biotic of microbes. What he has shown is that microbial ecology is now where everything is at."

Brand once suggested that "we are as gods and we might as well get good at it". That statement has gained greater urgency with climate change, he suggests. "Craig is one of those who is rising to the occasion, showing us how good we can be."...


A Conversation with Emanuel Derman

Watching that interrogation of the bankers at the Senate hearings, I had the feeling that this is the way karma works in the universe. Everybody is going to do something not quite right as they act out their destiny mechanically, doing what they unthinkingly believe they have to do. The Wall Street people are going to reflexively overshoot and be too greedy. The Senate people are going to reflexively grandstand and be too uninformed and try to rein them in. There isn't going to be an elegant solution to any of this.


When The Reality Club (the forerunner of Edge) was launched in 1980, one of its founding members was the late Heinz Pagels, a particle physicist at Rockefeller University and president of The New York Academy of Sciences.

It was around that time that Pagels began to talk about themes that revolved around "the importance of biological organizing principles, the computational view of mathematics and physical processes, the emphasis on parallel networks, the importance of nonlinear dynamics and selective systems, the new understanding of chaos, experimental mathematics, the connectionist's ideas, neural networks, and parallel distributed processing. ..."

He understood that the computer provided "a new window on that view of nature." This led to interesting insights into how the new sciences of complexity would impact global financial markets. He had the intuition that we were on the brink of a new epistemology that would transform the scientific enterprise and the way we think about knowledge.

Pagels was having similar conversations at Rockefeller during this period with Emanuel Derman, one of his fellow particle physicists who soon after left academia for a position at Bell Labs, and from there went on to spend 17 years at Goldman Sachs where he became managing director and head of the Quantitative Strategies Group. It was Derman who brought the ideas floating around physics in the 70's and 80's to Wall Street, and in the process came to embody the word "quant."

Writing in the New York Times ("They Tried To Outsmart Wall Street", March 9, 2009), Dennis Overbye observed:

Dr. Derman, who spent 17 years at Goldman Sachs, and became managing director, was a forerunner of the many physicists and other scientists who have flooded Wall Street in recent years, moving from a world in which a discrepancy of a few percentage points in a measurement can mean a Nobel Prize or unending mockery to a world in which a few percent one way can land you in jail and a few percent the other way can win you your own private Caribbean island.

They are known as "quants" because they do quantitative finance. Seduced by a vision of mathematical elegance underlying some of the messiest of human activities, they apply skills they once hoped to use to untangle string theory or the nervous system to making money.

Derman, Overbye noted, "fell in love with a corner of finance that dealt with stock options."

"Options theory is kind of deep in some way. It was very elegant; it had the quality of physics," Derman told him.

I recently sat down with Derman to ask about his thoughts on the financial crisis, the role played by Goldman and the other big banks, and what new questions we need to ask to get our heads around the big problems which, to some, seem intractable and unsolvable.

Concerning the last point, Pagels was on to something when, in his 1988 book The Dreams of Reason: The Computer and the Rise of the Sciences of Complexity, he wrote:

Mathematicians and others are endeavoring to apply insights gleaned from the sciences of complexity to the seemingly intractable problem of understanding the world economy. I have a guess, however, that if this problem can be solved (and that is unlikely in the near future), then it will not be possible to use this knowledge to make money on financial markets. One can make money only if there is real risk based on actual uncertainty, and without uncertainty there is no risk.

John Brockman

EMANUEL DERMAN, a former managing director and head of the Quantitative Strategies Group at Goldman Sachs & Co, is a professor in Columbia University's Industrial Engineering and Operations Research Department, as well as a partner at Prisma Capital Partners. He is the author of My Life As A Quant.

Emanuel Derman's Edge Bio page



[EMANUEL DERMAN:] One of the things I've been thinking about a lot, both in relation to the financial crisis and in relation to the way people understand the world in general, is the role of models in the world. There are a variety of different approaches to trying to understand the world, in all its facets, from the physical sciences to the social sciences and even one's personal life. I've categorized them in two ways: I like to distinguish what are called "theories" from "models". Theories, in my view, really try to capture the essence of the world, as in physics in one short equation, or in other fields, in one short schema.

It seems to me you can't really act in the world without having some kind of model or theory of how the world is going to behave in the future.

Models are simpler to describe in that they are similar to metaphors or analogies: you try to understand something that is difficult to comprehend in terms of something else you already comprehend. You try to understand the brain, for example, and you say, well, the brain is a lot like a computer. Or you try to understand a computer, and you assume people understand the brain and then say a computer is a lot like a brain.

In the same way, in finance, one says stock prices behave a lot like smoke diffusing off the tip of a cigarette. These are models, or metaphorical ways of describing the world, that add insight, but you can't really rely on them very substantially in the long run. I'll give some examples in a little while.
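The smoke metaphor corresponds to the standard diffusion model of finance, in which log-prices follow a random walk (geometric Brownian motion). A minimal sketch follows, with illustrative parameters that are not calibrated to any market.

```python
import math
import random

def gbm_path(s0=100.0, mu=0.05, sigma=0.2, steps=252, dt=1 / 252, seed=7):
    """One sample path of geometric Brownian motion -- the 'diffusion'
    behind the smoke metaphor. Parameters are illustrative only."""
    rng = random.Random(seed)
    prices = [s0]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)  # one standard normal shock per step
        growth = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        prices.append(prices[-1] * math.exp(growth))
    return prices

path = gbm_path()
print(len(path))  # 253: the initial price plus one point per trading day
```

This is the model underlying Black-Scholes options theory, and it is a model in exactly Derman's sense: the analogy to diffusion yields tractable mathematics and real insight, but actual markets jump, trend, and panic in ways the smoke does not.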

The other extreme is to use theories, which are really ways of directly apprehending the way the world or the universe works: examples are Freud, Einstein and Newton. Of course, theories can be right or wrong, but theories are different from models.

Take the Dirac equation, for example, the most famous and most successful equation in physics. Dirac started out with a theory, produced a metaphor, and then turned it into a theory again, as I said, the most successful one. He began by trying to combine quantum mechanics, which was already a well established theory in late 1925, and relativity, special relativity, which had been around since 1905. The two theories weren't really compatible.

The Schrödinger equation, which explains the hydrogen atom and all of chemistry, is not relativistically invariant. Dirac, who was a big proponent of beauty, struggled very hard to unify the two elegantly and eventually wrote down one simple, literally one-inch-long equation. When he started to solve it, he discovered that it miraculously explains the fact that the electron has spin, something that had only been discovered experimentally in 1925 or 1926. The Dirac equation has four solutions. Two of them described pretty well the electron and the hydrogen atom that everybody knew at that point. But there were two more solutions that have negative energy. Weird.
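For reference, the one-inch-long equation in its modern covariant form (natural units, with ħ = c = 1):

```latex
% The Dirac equation for the electron field \psi:
(i\gamma^{\mu}\partial_{\mu} - m)\,\psi = 0
% \psi is a four-component spinor. The relativistic energy relation
% E^{2} = p^{2} + m^{2} admits both signs, E = \pm\sqrt{p^{2} + m^{2}},
% which is where the two negative-energy solutions come from.
```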

Nobody could make sense of this for several years after Dirac wrote it down in 1928. Nobody could quite understand what these negative energy electrons really meant. Dirac eventually came up with a pretty metaphorical explanation. He was reluctant to give up on the beauty of the equation and concluded after some struggles that the only way to understand it is to imagine that the entire universe and the whole world and what he calls the vacuum is actually filled up with negative energy electrons that you can't see and you don't sense. Sort of in the way when you're born you don't smell the air even though there is air around you, because it's the familiar background in which you live.

In the same way, all the negative energy electrons that you are born into form part of the vacuum and nobody ever detects them under normal conditions. But then he realized that if you were to shoot photons or light into "empty" space, empty space isn't really empty. It's full of these negative energy electrons that you don't normally perceive. You can do a photoelectric effect on the vacuum, on empty space: shoot one photon in and hit an invisible negative energy electron, give it enough energy to make its energy positive, and it becomes visible.

You will kick out what was formerly a negative energy electron and convert it into one with positive energy which everybody will observe as a normal electron, and what it will leave behind is a hole in the sea of negative energy electrons. This sea is called the Dirac sea. It is a metaphorical description of the way the vacuum works.

Dirac realized that a hole in this negative energy sea is an absence of negative charge and will behave like a positive charge and have exactly the mass of the electron but the opposite charge.

Except for Maxwell's completion of the equations for the electromagnetic field, this was the first case where somebody literally plucked an equation out of his head and ended up predicting the existence of a new particle or a new phenomenon, and that has been the template for theoretical physics ever since.

Sure enough, in 1932, though nobody had really believed Dirac, Anderson at Caltech discovered the positron, the positively charged electron. All of this is described by Dirac's equation with incredible accuracy, nowadays to about ten significant figures. That to me is a good example of a theory.

Maxwell's equations, which describe electricity and magnetism, are another example. What they illustrate about a theory is that nobody thinks light is "like" Maxwell's equations. There is an absolute identity between the way we think about light and the equations that describe it. Nobody says Maxwell's equations are a "model" for light. Maxwell's equations and light are the same thing. The same way with the electron: the Dirac equation and the electron are literally inseparable. That for me is a good example of a theory. I'll give some more soon when I talk about the social sciences.

Let me step aside and talk about the difference between theory and model. I was brought up reading the Torah and going to Hebrew school, and there is the story of Moses and the burning bush. God tells Moses to go to Pharaoh to Let My People Go. Moses doesn't like being sent to do this and he doesn't speak very well and so he runs away into the desert of Midian to evade his responsibility. Eventually he comes across a burning bush and a voice speaks to him from the burning bush and tells him to go to Pharaoh and tell him to free his people.

Moses tries to wriggle out of it and says, who shall I say sent me? The voice from the burning bush says, I am that which I am, which is a sort of pun on the Hebrew word Jehovah for God. I like to think of that as being the example of a theory: God in the story isn't saying "I'm a lot like this" or "I'm a lot like that." I'm absolutely exactly what I am and not like anything else, he says, and that is kind of true of the quality of theories. You're not comparing yourself to anything. You're saying this is the way I behave.

If I can give an example that I have been interested in from a different, more qualitative angle: I have been reading Spinoza's "Ethics". It's a lot like Freud: he's trying to explain the behavior of human emotions, and eventually he derives a theory of ethics out of all of this. It's also astoundingly similar to the theory of derivatives in finance, in that he says there are three underliers, which are desire, pain and pleasure. Spinoza is avowedly trying to do a version of Euclid, who defines points and lines and then derives theorems about triangles. So, analogously, Spinoza defines the primitives, which are pleasure, pain and desire, plus a few others like wonder and contempt, which I'll perhaps mention later. But the three I mentioned are the ones that, even though he defines them, you don't really need him to. Everybody who has lived and who speaks English understands what points and lines are, and likewise understands what pleasure, pain and desire are.

Then he starts to define, in a self-consistent way, all the emotions that people experience as derivatives of pleasure, pain and desire. I actually made a table of them which I could show; I drew a dependency chart. So, for example, he says love is pleasure associated with an external object. Hate is pain associated with an external object. Then he gets to more complex derivatives, which are like convertible bonds: envy is pain at somebody else's pleasure. He doesn't talk about Schadenfreude, but clearly Schadenfreude is pleasure at somebody else's pain, and envy is the other way around. Cruelty, for example, is a triple derivative: a desire to inflict pain on someone that you love.
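That dependency chart can be sketched in code. The encoding below is a toy of this article's own construction, not Spinoza's text: each emotion gets a root primitive plus the terms its definition references, and "cruelty" then rests on all three primitives, in line with its description as a triple derivative.

```python
# Toy encoding of Spinoza's scheme: three primitives, and emotions
# defined as (root primitive, [terms referenced in the definition]).
PRIMITIVES = {"pleasure", "pain", "desire"}

DEFINITIONS = {
    "love":          ("pleasure", []),            # pleasure, external object
    "hate":          ("pain",     []),            # pain, external object
    "envy":          ("pain",     ["pleasure"]),  # pain at another's pleasure
    "schadenfreude": ("pleasure", ["pain"]),      # pleasure at another's pain
    "cruelty":       ("desire",   ["pain", "love"]),  # desire to inflict
                                                      # pain on someone loved
}

def primitives_involved(term):
    """All primitives a term ultimately rests on (assumes no cycles)."""
    if term in PRIMITIVES:
        return {term}
    root, refs = DEFINITIONS[term]
    involved = {root}
    for ref in refs:
        involved |= primitives_involved(ref)
    return involved

# "Cruelty is a triple derivative": it rests on all three primitives.
assert primitives_involved("cruelty") == {"desire", "pain", "pleasure"}
```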

In this way he builds up a categorical description of almost every single emotion you can name in the way they relate to the primitives, desire, pleasure and pain.

I like it because: A) it leads to a theory of ethics, and B) there is no reference to anything outside of itself. He doesn't say the brain is like a computer. It's totally self-contained. In a sense it's a theory of the way things are, not a model saying this is a lot like something else.

To give the opposite example, let me distinguish everything I've just said about theories from models. Models really depend on analogies. In physics, for example, look at the liquid drop model of the nucleus, for which Aage Bohr (the son of Niels Bohr) and Mottelson got the Nobel Prize. The nucleus, which really consists of protons and neutrons (say, in uranium) all jammed together very tightly, you can instead approximately think of as a liquid drop, and if you do, then the drop can oscillate and vibrate and rotate. If you know its mass and you figure out roughly what its elasticity is, you can figure out what the normal modes of vibration are when it oscillates. Bohr and Mottelson ended up predicting other excited states of uranium and of other heavy nuclei based on this analogy of the drop.

But it really is an analogy, and a limited one. It's not saying the nucleus is a liquid drop. It's saying, in a range of energies if it doesn't break apart, it's a convenient way to think about it. That's very different from something like the Dirac equation where you're writing down an equation and saying, this is the way the world is, this is not an approximate version of the way the world is.

I've been thinking about this a lot in relation to finance because it seems to me true of financial models. The ethics in Spinoza is a theory of psychology and maybe Freud is too, in a self consistent way, but most financial models are metaphors and are based on saying you can picture stock prices as smoke diffusing or as smoke diffusing and jumping. It's not a holistic description. It's just saying that I can understand this if I think about it as much like something else I already understand in another context.

I've become a bit of a Platonist. What I like about Spinoza is that he's unlike most people I know who are monists and like to explain the world mostly in terms of matter. So, they like to say the brain is a computer or love is equivalent to a set of neural currents in your brain.

Spinoza won't give primary power to mind or matter. He believes, even if I say it somewhat clumsily, that there is a mind side to everything and a physical side to everything. The physical side doesn't cause the mental side, nor does the mental side cause the physical side; they live in parallel with each other. I like that approach; it seems to agree with my experience of the world.

It doesn't explain much about the things that matter to you as a human being to say they happen because of circuits in your brain. I'm not saying that circuits don't fire in your brain, but it doesn't give you a way of dealing with it other than a very mechanical and uninsightful sort of way.

What I like about Spinoza's theory of emotions is that he tries to deal with everything in human terms, in a self-contained sort of way. That's not to say that there isn't an electronic and chemical correlate to love or envy or that there isn't something pounding in your heart when you're anxious. But that's not the part that matters when you're trying to deal with it as a human being.

My background plays a role in my interest in these ideas. I'm originally from South Africa and when I was in high school I liked literature and writing as well as science. Then, as I was good at science and I liked it, I eventually gravitated into that and I gave up writing, but in my heart I liked philosophy and literature.

I started out in physics and graduated from Columbia in 1973. Then I was a post-doc and assistant professor doing theoretical particle physics for seven years. Then at some point the difficulty of getting jobs in cities I wanted to be in got to be too much for me and I took a job at Bell Labs, which in 1980 was the canonical way out of academic life for physicists. What Wall Street is to physicists today, Bell Labs and the Solar Energy Research Institute and places like that were for physicists - telecommunications or energy research as a result of the energy crisis of the late Seventies.

I took a job at Bell Labs not in basic research but in a Business Analysis Systems Center, which literally was a bunch of ex-rocket scientists who had worked for Bell Labs on some of the moon shots. They were running a business analysis center to try and retrain themselves. I spent five years there in the sort of middle ground between academic life and real industry. I didn't like it very much, to tell the truth, because it was never quite clear what your aim was. They wanted you to write papers but you couldn't publish them and they were always secret in some sense. Whether something was a success or failure often just depended on whether your boss said he liked it or he didn't. A lot of it didn't see the outside world. I spent five years there learning a lot, nevertheless.

What I really learned, which was most useful for me coming to Wall Street in 1985, was computer science: computer science from the point of view of doing symbolic programming and building programs that people could actually use, little languages in which to model their own needs. I built a language there which we called HEQS, which stood for hierarchical equation solver. This was before the days of spreadsheets, and analysts had to solve big financial models, but the people who created the models couldn't actually do the math to solve them; my language gave them a way of describing the model and then letting the computer provide the solution. I spent five years doing this kind of stuff. I learned a little bit about option theory but not much. I didn't like the management culture. I still wanted to be a person who worked with his hands, and everybody there, except in the research area, was aspiring to get into management.
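HEQS itself was internal Bell Labs software; the following is only a toy sketch of the idea it embodied: the user declares equations in any order, and the solver works out the dependency order and evaluates them. The syntax, the model, and every name here are hypothetical.

```python
import re

def solve(equations):
    """Toy declarative solver: each equation is 'name = expression'.
    The solver resolves dependencies recursively, so the user can
    write the equations in any order. Assumes no circular definitions."""
    exprs = {}
    for eq in equations:
        name, expr = (s.strip() for s in eq.split("=", 1))
        exprs[name] = expr
    values = {}
    def value(name):
        if name not in values:
            # Names appearing in the expression that are themselves defined
            deps = re.findall(r"[A-Za-z_]\w*", exprs[name])
            env = {d: value(d) for d in deps if d in exprs}
            values[name] = eval(exprs[name], {"__builtins__": {}}, env)
        return values[name]
    return {name: value(name) for name in exprs}

# A tiny financial model, declared out of dependency order:
model = [
    "profit = revenue - cost",
    "revenue = price * units",
    "cost = 50 + 2 * units",
    "price = 5",
    "units = 100",
]
print(solve(model)["profit"])  # -> 250
```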

Eventually Wall Street came knocking at the door, as a result I believe of rising interest rates in the late '70s. Wall Street suddenly started having a lot more trouble managing their inventory when interest rates became a risky business. They were hiring more and more computer people and applied mathematicians, physicists. I took a job at Goldman Sachs in late 1985. I wasn't quite the first of the people who went from physics to finance or the first of the quants, but I was among the early group. It was very exciting because Goldman was small in those days, maybe 5,000 people. A few years earlier it had probably only been 2,000 people. So you got to know everybody and see them in the cafeteria and it was intimate in a good way.

There was a very close linkage between people who were doing technical work and people who were trading or doing sales. There weren't a lot of barriers to dealing with different people. It was a place that valued you if you had a skill, no matter what it was, if you were a good lawyer or if you were a good computer programmer. They might treat you as a geek if you were more of a scientist than a businessman or an MBA or a lawyer. Nevertheless they needed what you had and they respected it. So I really enjoyed working there. For me it was a shot in the arm after being at Bell Labs and having felt like I had quit physics. I suddenly got excited again about doing something new.

In terms of how physics figured into Wall Street at that point, I was among the first physicists there. I don't know if I was literally the first, but I was certainly among the first few, although there had been three or four engineering people in the group I was in who had been there a few years longer.

It was kind of a natural match for physicists because first of all options and interest rates were becoming big in terms of sales and marketing and hence valuation and hedging were necessary. Most of the models that had been developed in the financial world for treating the risk of bonds or the risk of options or valuing options were all essentially diffusion models, related to diffusion of heat in classical physics. Physicists spend their life doing this kind of stuff, so even if they didn't know much finance, it was very easy. In fact, when I came, the guy I worked for said to me, read this paper by Cox, Ross and Rubinstein over the weekend and then start trying to fix this program that I wrote for valuing options which seems to have some problem for bond options rather than stock options. I literally spent the week reading this paper and learned economics out of it.
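The Cox, Ross and Rubinstein paper mentioned above describes the standard binomial-tree option model. Here is a minimal sketch for a European call under the usual CRR parameterization (the numbers in the usage line are illustrative):

```python
import math

def crr_call(s, k, r, sigma, t, n):
    """European call price via the Cox-Ross-Rubinstein binomial tree.
    s: spot, k: strike, r: risk-free rate, sigma: volatility,
    t: years to expiry, n: number of tree steps."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))   # up-move factor
    d = 1.0 / u                           # down-move factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # Payoffs at expiry, then discount backwards through the tree.
    values = [max(s * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]
    disc = math.exp(-r * dt)
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

# At-the-money one-year call; parameters are made up for illustration.
price = crr_call(s=100, k=100, r=0.05, sigma=0.2, t=1.0, n=200)
```

With enough steps this converges to the Black-Scholes value; mathematically it is the same diffusion picture, discretized.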

Now Wall Street is much more sophisticated. The hurdle is higher. You really have to know something before you start. But in those days it was enough just to be a reasonably smart person who was willing to learn. So I leapt into it. There weren't a lot of textbooks. It was very exciting to be in a field where there wasn't much traditional stuff to learn and to study.

Although it was economics, the mathematics was very similar to that of physics, and physicists are jacks of all trades: they can do modeling, they can do mathematics, they can do numerical analysis, and they had to do their own programming, pretty much. They were not like business people who needed somebody they could give the programming to.

When you build a model of options, there are a lot of little things that can go wrong. If there is a gap between the person who understands the model and the person who does the implementation, those things are incredibly hard to root out, because the person who understands the theory can't implement it and the person who understands the implementation can't see what might be wrong when a mistake appears.

So I liked being the person who spans both sides of that bridge. In the culture I worked in, everybody did their own programming. When I ran groups for the next 15 years at Goldman, it was pretty much like that too. The physicists or the computer scientists (it was mostly the physicists) knew enough programming to do their own dirty work. In my case I actually built interactive screen interfaces for traders. I had learned how to do this at Bell Labs. It was a good way to work. It was good for a physicist in that you got to span many different areas. If you were the person who actually built the model as well, it gave you a great deal of close contact with the traders. They used it and you were the guy who controlled their access to it. It was really a perfect job.

It was a very interesting non-management, non-managerial culture, unlike Bell Labs. It was in many ways much like a university research department where I worked, except it was a business. I would say the fixed income bond options group that I worked for when I started was more like a hedge fund in terms of culture. There were a few smart traders, some smart sales people, and then our group that supported them. We all spoke every day and worked pretty closely. You didn't need a lot of managerial permission to start some project. Everything was a little bit fly by the seat of your pants. You would talk to somebody and then go off and do something. That was what I liked about the culture, and what I didn't like towards the end of my time on Wall Street was that you could spend more time asking for permission to do something than it actually took to do it, when it became more managerially oriented.

In terms of what physicists brought to Wall Street — really good pragmatic modeling skills. The difference between being an economist and being a physicist is that most economists have never really seen a successful model. So they don't know what constitutes a good model and a bad model. They either denigrate models too much or they respect them too much and think they are much better than they are.

Physicists, going back to what I said earlier, know the difference between a really accurate theory and a more or less pragmatic model, and they understand where to make approximations and what not to take too seriously. That sort of understanding, of how much theory is useful but not too much, is one of the skills physicists bring. The second is a really hands-on approach to doing things yourself.

Later on I helped run risk management. There are two levels of risk management in a trading firm. One level is desk by desk, where you're working, say, as I did, on an equity trading desk between 1990 and 2000. You have thousands of positions and they move around every minute as the market changes, and you want to understand how exposed you are to volatility, to the S&P, to various market factors, to interest rates. You want to know that on a daily basis. You care what's going to happen to you every minute because you want to hedge your potential losses.

That's very engaging and very hands-on. What happened in the late '90s, maybe mid-'90s, is people started to tackle what is now called firm-wide risk, which is looking at the risk of the whole enterprise, where you want to understand not so much what will happen to you at any instant, but where the big market risks lie for the firm as a whole. It's a global approach. This is where the whole of the Value-at-Risk culture comes in. That's a much fuzzier business. The difference between "local" and "global" is one of the interesting things in life, in physics and in financial models.

It's interesting in this regard that the three biggest banks recently announced their quarterly results, and on not a single day in the last quarter did they lose money. That is totally astonishing. It's really a reflection of what the administration has done. They've made interest rates very low, so it's cheap for the banks to borrow money. They have eliminated a lot of the players, so there is much less competition for taking on risky trades, and you can do them at a price that is much more favorable to the party in control. This is a risk profile that is a result of regulation and administrative policies rather than of genuine market conditions.

In terms of styles of regulation, I'm very disillusioned by what's happened with the bailout. I don't know what the right thing to do is, but one of the worst things for society's ethical sense is to see other people take the upside of risky positions without suffering the downside. It makes me feel very uncomfortable.

What I also dislike is that firms that have made a lot of money out of this won't acknowledge that they made this money by being saved by the taxpayers and the administration.

There's a lot of talk about the role of algorithms and the change in markets. The financial world has changed a lot since I worked in it, and the biggest change is that more people are playing with more of other people's money. When most of the banks were partnerships, they had to be in it for the long run, because the partners were playing with their own capital and taking risk with their own assets. Their money was tied up for 10 or 15 years. Even if somebody retired, they still couldn't take their money out; they just got paid interest while it was being used and drawn down. So there was a certain culture of not taking extreme risks, because you didn't really have limited liability. Ultimately you could be broken completely by your company going bankrupt. With trading houses going public, they're playing with other people's money, and they're immediately liquid in terms of stock and cash payment. The culture in all of these places has changed: make money, liquid and fast. The way this crisis has been treated exacerbates that attitude: if you do badly, the government bails you out, and if you do well, you keep the profits.

I used to hear 10 years ago at Goldman from colleagues that there was going to be doom one day at Fannie Mae and Freddie Mac because they were hedge funds in disguise. To some extent the government and regulators have encouraged this and they still haven't tackled the problems at Fannie Mae and Freddie Mac and are doing with them what they accuse Wall Street banks of doing, which is treating them as off-balance sheet and not counting the money they are spending on them as real money.

In terms of algorithmic trading, that's a big change too. I'm not against it — it's inevitable from a technology point of view. You trade airline tickets with computers. You buy things off the internet. There is no way people are going to trade stocks in vast amounts by making verbal or written orders. Stocks are going to be traded electronically and eventually bonds, currencies and everything else will be traded electronically too.

It's unfair, though, to allow high-frequency traders what essentially amounts to insider trading: an early look at trades, and the chance to decide what to do, because they are allowed to put powerful computers closer to the stock exchange. That doesn't make it a level playing field.

Also, people who benefit from it tend to over-accentuate the need for efficiency. Everybody who makes money out of something to do with trading tends to say, oh, we've got to do this because it makes the market more efficient. But a lot of the people who provide this so-called liquidity and efficiency are not there when you really need it. It's only liquidity when the world is running smoothly. When the world is running roughly, they can withdraw their liquidity. There is no terrible need to be allowed to trade large amounts in fractions of a second. It's kind of a self-serving argument. Maybe a tax on trading to insert some friction isn't a bad idea, just as long-term capital gains are taxed lower than short-term gains.

Economics is a strange field. One of the things I noticed on Wall Street was that firms use the economists to talk to clients but their trading desks don't necessarily pay attention to what the economists are saying. Unexpected things happen unexpectedly and damage positions and net worths. I don't think there is a good quantitative solution to all of this. I sometimes get letters from mathematicians in Europe saying that they have come up with a better formula for capturing risk or for valuing risk or for trying to control or measure risk. You can do better than VaR but there isn't one formula, one number, that is going to save you in the end.

More important are incentives and disincentives, and making sure that people understand they are going to pay the penalties for their own mistakes and somebody isn't going to bail them out. Jim Grant, who writes a newsletter called "Grant's Interest Rate Observer" that I like, had a column recently pointing out that Brazil hasn't had a big banking crisis, and that there, anybody who runs a trading firm is personally responsible for losses. It's not company risk; it comes down to their own assets. So they are much more cautious. Those kinds of incentives are going to make a much bigger difference than finding a better mathematical formula for handling risk.

And the scale at which people get paid has become quite astonishing. There is an increasing gap in America in general between what people make at the bottom and what people make at the top.

When I decided to work on Wall Street, I interviewed in '83. The guy who interviewed me said, this is one of the few jobs where you won't have to be an accountant or a lawyer and you can make $150,000 a year eventually. Now a trader might make $20 million. If they can't make it at an investment bank, they go to a hedge fund, if they have a really good track record.

I have less of a pay problem with hedge funds — I'm not sure if I'm right — from an ethical point of view than I do with very big, too-big-to-fail companies because hedge funds are by and large putting their own money or their clients' money at risk, and it's a clearly articulated compact between them. While there is a possibility of systemic contagion, it's a cleaner business in terms of potential conflicts. They are just doing proprietary trading for their own account or their clients' account.

Whereas what is confusing about the big investment banks, if you watched the Senate hearings, is that there is a very unclear overlap between being a producer and being a market maker. Goldman, for example, always used to be pretty much a service provider in the old days. Now they and all the other big banks want to be both a producer and the marketplace on which the product trades. There are many conflicts of interest. To make an analogy, I've read articles about how Amazon wants to be not just a place that sells books but one that also publishes books. That becomes dangerous if you're a dominant player in both the conduit and the content. There is too much concentrated power. It used to be that you were either a market maker or a producer.

In terms of crises: for the last few weeks we've seen the euro drop, riots in Greece, and bailouts by the IMF and by the European Central Bank. As long as countries in Europe had separate currencies, they could use currency devaluation to try to manage their own problems if they weren't being productive or were spending too much. But once you have the same currency without the same political identity and the same political and economic interests, there is a substantial danger of things breaking apart.

In terms of trends in automated trading and market microstructure: last week there was a 1,000-point drop in 40 minutes. It's scary, but it's more of a technical event than an economic one. In 1987 the market dropped by 22 percent in a day, and it was okay in the end; it didn't have economic consequences. In contrast, over the last three years the subprime crisis started out as a localized trading problem and ended up influencing the whole economy. There is unemployment as a result of the interaction between housing prices and contagion from market to market. Last week markets got scared by Greece and some computers misbehaved, which interacted with what is now a fragmented equity trading market. But in the end it's like the 2007 quant crisis: the market dropped, but it didn't really affect the economy much, and the economy is shaky for its own reasons.

But it can be difficult to distinguish sharply between economic events and technical events because if people get scared enough it has economic consequences.

What to do? Frankly, if I were asked to become the Treasury Secretary of this administration, I don't think I would take the job. I used to think I knew what the right thing to do was. I thought there was too much back and forth between government and Wall Street. I thought it was a mistake for people who had been so close to Wall Street to be in charge of it. On the other hand, given some of the events of the last six or eight months, I no longer know what the right thing to do is. There is too much back and forth, but then markets are a complicated business.

Watching the Senate hearings the other day and seeing Carl Levin and others interrogate the people from Goldman Sachs on the Abacus deal, I realized that most of the Senate questioners didn't seem to understand too well how markets work.

They didn't understand the difference between trading as principal and trading as agent. On the one hand, war is too important to be left to the generals. On the other, it's very hard to make sensible regulations without sufficient knowledge. And it should be noted that some of the Senate questioners, who were right to tackle Wall Street, are the same people who won't do anything about Fannie Mae and Freddie Mac, and the same people who encouraged the mortgage deduction and easy access to housing for poor people. So I'm now a little more sympathetic to the idea that you have to have people who have been in the business do the regulation, or at least people who have spent enough time understanding it.

Watching that interrogation of the bankers at the Senate hearings, I had the feeling that this is the way karma works in the universe. Everybody is going to do something not quite right as they act out their destiny mechanically, doing what they unthinkingly believe they have to do. The Wall Street people are going to reflexively overshoot and be too greedy. The Senate people are going to reflexively grandstand and be too uninformed and try to rein them in. There isn't going to be an elegant solution to any of this. That's the way of human affairs, and in terms of leadership, perhaps the best we can hope for is that occasional, miraculous, moment when people who are in a position to make a difference cease to behave mechanically — to take some recent examples, Mandela and de Klerk, perhaps Gorbachev — and who, rather than fulfilling their preprogrammed destiny, break the cycle of karma.


Edited by John Brockman

"An intellectual treasure trove"
San Francisco Chronicle


Harper Perennial



Contributors include: RICHARD DAWKINS on cross-species breeding; IAN McEWAN on the remote frontiers of solar energy; FREEMAN DYSON on radiotelepathy; STEVEN PINKER on the perils and potential of direct-to-consumer genomics; SAM HARRIS on mind-reading technology; NASSIM NICHOLAS TALEB on the end of precise knowledge; CHRIS ANDERSON on how the Internet will revolutionize education; IRENE PEPPERBERG on unlocking the secrets of the brain; LISA RANDALL on the power of instantaneous information; BRIAN ENO on the battle between hope and fear; J. CRAIG VENTER on rewriting DNA; FRANK WILCZEK on mastering matter through quantum physics.

"a provocative, demanding clutch of essays covering everything from gene splicing to global warming to intelligence, both artificial and human, to immortality... the way Brockman interlaces essays about research on the frontiers of science with ones on artistic vision, education, psychology and economics is sure to buzz any brain." (Chicago Sun-Times)

"11 books you must read — Curl up with these reads on days when you just don't want to do anything else: 5. John Brockman's This Will Change Everything: Ideas That Will Shape the Future" (Forbes India)

"Full of ideas wild (neurocosmetics, "resizing ourselves," "intuit[ing] in six dimensions") and more close-to-home ("Basketball and Science Camps," solar technology"), this volume offers dozens of ingenious ways to think about progress" (Publishers Weekly — Starred Review)

"A stellar cast of intellectuals ... a stunning array of responses...Perfect for: anyone who wants to know what the big thinkers will be chewing on in 2010. " (New Scientist)

"Pouring over these pages is like attending a dinner party where every guest is brilliant and captivating and only wants to speak with you—overwhelming, but an experience to savor." (Seed)

* based On The Edge Annual Question — 2009: "What Will Change Everything?)

Edge Foundation, Inc. is a nonprofit private operating foundation under Section 501(c)(3) of the Internal Revenue Code.