Edge 129—December 4, 2003

(12,300 words)



NEW PILLS FOR THE MIND
A Talk with Samuel Barondes, M.D.


Most of the psychiatric drugs we use today are refinements of drugs whose value for mental disorders was discovered by accident decades ago. Now we can look forward to a more rational way to design psychiatric drugs. It will be guided by the identification of the gene variants that predispose certain people to particular mental disorders such as schizophrenia or severe depression.


Re: WHY GORDIAN SOFTWARE HAS CONVINCED ME TO BELIEVE IN THE REALITY OF CATS AND APPLES A Talk with Jaron Lanier

A continuing Reality Club discussion is underway concerning Jaron Lanier's provocative recent feature. This thread is an example of Edge's interest in "thinking smart" vs. the anesthesiology of "wisdom."

In the early '80s, the late Heinz Pagels and I wrote the following about The Reality Club:

"We charge the speakers to represent an idea of reality by describing their creative work, their lives, and the questions they are asking themselves. We also want them to share with us the boundaries of their knowledge and experience and to respond to the challenges, comments, criticisms, and insights of the members. The Reality Club is a point of view, not just a group of people. Reality is an agreement. The constant shifting of metaphors, the intensity with which we advance our ideas to each other — this is what intellectuals do. The Reality Club draws attention to the larger context of intellectual life.

"Speakers seldom get away with loose claims. Maybe a challenging question will come from a member who knows an alternative theory that really threatens what the speaker had to say. Or a member might come up with a great idea, totally out of left field, that only someone outside the speaker's field could come up with. This creates a very interesting dynamic."

For "a very interesting dynamic" read the challenges to Jaron Lanier's ideas from a range of interesting thinkers. As one Edge participant remarked: "Tough crowd."

Responses by Dylan Evans, Daniel C. Dennett, Steve Grand, Nicholas Humphrey, Clifford Pickover, Marvin Minsky,
Lanier replies, George Dyson, Steven R. Quartz, Lee Smolin, Charles Simonyi


NEW PILLS FOR THE MIND
A Talk with Samuel Barondes, M.D.

Introduction

Psychiatrist Samuel Barondes, M.D., is interested in the ways that chemicals influence mental processes. "Despite decades of tinkering," he notes, "the drugs we presently use still have serious limitations. First, they don't always work. Second, they still have many undesirable side effects. Instead of continuing to invest in more minor improvements, pharmaceutical companies are becoming interested in a new approach to psychiatric drug development." In this discussion he traces how the accidental discovery of LSD by Albert Hofmann in 1943 contributed to a milieu that favored the discovery of many psychiatric medications.

For example, he notes that "in the course of just a few years there were these two discoveries of extremely valuable psychiatric drugs that radically changed the practice of psychiatry. Before the discovery of chlorpromazine and imipramine, disorders like schizophrenia and major depression were usually dealt with by talking, exhortation, and hospitalization—and with limited success. With these new drugs many patients had remarkable improvements."

There are new approaches that take advantage of the fact that there are genetic vulnerabilities to mental disorders. "The hot new technologies that psychiatric scientists are now using," he says, "include not only genetics but also brain imaging...It will be possible to correlate knowledge about genetic variation with knowledge about how specific brains operate in specific circumstances, as looked at with various kinds of functional magnetic resonance imaging. Right now our ideas about mental disorders are mainly based on interviews, questionnaires, and observations of behavior. Being able to look at what's going on inside the human brain, once considered to be an inscrutable black box, is turning out to be quite informative."

JB

SAMUEL H. BARONDES is the Jeanne and Sanford Robertson Professor and director of the Center for Neurobiology and Psychiatry at the University of California, San Francisco. He is past president of the McKnight Endowment Fund for Neuroscience and recently served as chair of the Board of Scientific Counselors of the National Institute of Mental Health.

Barondes is interested in psychiatric genetics and psychopharmacology. He is the author of Molecules and Mental Illness; Mood Genes: Hunting for the Origins of Mania and Depression; and Better Than Prozac: Creating the Next Generation of Psychiatric Drugs.

The idea that animals can be used to study mental illness strikes many people as strange, because human behavior seems unique. After all, only humans use language for introspection and long-range planning, and it is just these functions that are disturbed in many psychiatric disorders. Nevertheless, we have enough in common with other animals to make them very useful for studies that can’t be done with patients.

Of the animals used for this purpose, apes and monkeys have been favorites because they are our closest relatives. Dogs, too, have obvious human qualities. They may even display patterns of maladaptive behavior that resemble those in DSM-IV. For example, Karen Overall, a professor in the School of Veterinary Medicine at the University of Pennsylvania, has been studying a dog version of obsessive-compulsive disorder (OCD), which is fairly common in certain breeds.  Like people with OCD who each have their particular patterns of symptoms, individual dogs with canine OCD also have distinctive main symptoms such as tail-chasing or compulsive licking of their paws.  Like humans with OCD, the dogs tend to perform their rituals in private. And, like human OCD, the canine version responds to drugs such as clomipramine and Prozac. Because of these many similarities, Overall’s dogs may provide information about the human disorder—an aim that has already been achieved for another canine behavioral disorder that I will turn to shortly.

But despite their value for certain types of studies, primates and dogs are not ideal experimental animals.  Their main shortcoming is that they are costly to raise and maintain, which makes them impractical for the many experiments that require large numbers of subjects. For this reason scientists have been turning to a much less expensive alternative, the laboratory mouse. Although it is more difficult to empathize with these tiny rodents than with a chimpanzee or a golden retriever, we now know that all these mammals share much of our complex brain machinery. What makes mice especially attractive is that their genes are relatively easy to manipulate by traditional breeding methods and by the new techniques of genetic engineering. Both experimental approaches have been successfully employed to make special strains of mice that are being used to study mental disorders and to develop new psychiatric drugs.


Samuel Barondes' Edge Bio Page


NEW PILLS FOR THE MIND


(SAMUEL BARONDES): I'm interested in the ways that chemicals influence mental processes. Most of these chemicals are brain proteins whose structure and expression are controlled by genes. But some of them are simple chemicals such as serotonin and norepinephrine, which transmit signals in the brain, signals that are important for experiencing emotions. These simple chemicals are also of great interest because their actions in the brain are influenced by widely used psychiatric drugs, such as Prozac. It was, in fact, the accidental discovery of several mind-altering drugs in the middle of the 20th century that drew me into research on brain functions and mental illness.

The most legendary of these discoveries was made in 1943 by Albert Hofmann, a chemist who worked at Sandoz, a Swiss pharmaceutical company. While making a variant of a chemical that causes uterine contractions, he noticed that his mind was playing tricks on him. Suspecting he had inadvertently swallowed a bit of the drug, Hofmann took a tiny measured dose. He was astounded to find that the drug changed his perception of common objects, intensifying their colors and altering their shapes. The drug he created is LSD.

In those days there were not many limitations on the pharmaceutical industry. So Sandoz distributed LSD to a variety of researchers in the hope that they would find a medical use for it. Attempts were made to use LSD as an aid to psychotherapy, but this application never caught on. And it did not take long before some of the drug that was provided to researchers began to be diverted for recreational use. As LSD's popularity grew more and more people began to realize that their minds could be controlled by minuscule amounts of certain chemicals. If anyone had any doubt about this, a dab of LSD would persuade them otherwise.

Although LSD did not prove medically useful, it contributed to a milieu that favored the discovery of many psychiatric medications. The first of these was synthesized in 1950 by Paul Charpentier, a chemist who worked for Rhone Poulenc, a French drug company. Charpentier had already made his mark by creating promethazine, a drug that blocks the action of histamine. Promethazine, which was marketed as Phenergan, is one of the early antihistamines that are still used to treat symptoms of allergies and colds. Charpentier's subsequent discovery of a revolutionary psychiatric medication was stimulated by the observation that antihistamines make you sleepy, which suggested that histamine plays a role in the function of the brain.

To people who use antihistamines to relieve allergic symptoms sleepiness is an undesirable side effect. But Henri Laborit, a French naval surgeon, turned this to his advantage by using promethazine as an adjunct to anesthesia. Once he discovered this application for promethazine he contacted the Rhone Poulenc Company. They asked Charpentier to make derivatives that are more effective in inducing sleep.

In 1950 Charpentier made a new derivative that seemed to fit the bill. Among the doctors who experimented with it were two French psychiatrists, Jean Delay and Pierre Deniker. They tested it on agitated patients in a psychiatric hospital and found that it would calm them down. To their great surprise they also found that some patients with schizophrenia who took the drug for several weeks stopped hearing nonexistent voices and stopped worrying about nonexistent plots against them. With continuous treatment many patients who had been hospitalized for years got progressively better. Promethazine, the antihistamine from which this new drug, chlorpromazine, was derived, had none of these antipsychotic effects. So there was some additional property of this new antihistamine that made it very different from the old ones. It not only made people sleepy, it also changed the thoughts of people with schizophrenia. Their paranoid delusions often disappeared, and their ability to relate to other people improved. This was an extraordinary discovery. By 1955, just a few years after the discovery of these properties of chlorpromazine, it became a blockbuster drug with the trade name Thorazine.

Stimulated by the great success of chlorpromazine, other drug companies began searching for drugs for schizophrenia. Among them was Geigy, a Swiss pharmaceutical company that was also making antihistamines, including one that looked very much like chlorpromazine. But when Geigy distributed their drug to psychiatrists they got disappointing results—their antihistamine was quite useless for schizophrenia.

Luckily for Geigy, Roland Kuhn, one of the psychiatrists who got a supply of this drug, was interested in depression, and he gave it to people with severe depression. To his great surprise the depressions started lifting after a few weeks of treatment. By 1958 Geigy's antihistamine, called imipramine, or Tofranil, had become another blockbuster.

So, in the course of just a few years there were these two discoveries of extremely valuable psychiatric drugs that radically changed the practice of psychiatry. Before the discovery of chlorpromazine and imipramine, disorders like schizophrenia and major depression were usually dealt with by talking, exhortation, and hospitalization—and with limited success. With these new drugs many patients had remarkable improvements.

Once these drugs were discovered a few scientists began to examine their effects on brain chemistry. Most of this research was done at the National Institutes of Health, rather than in pharmaceutical companies. Within the course of the next ten years it was discovered that both chlorpromazine and imipramine influence the actions of brain chemicals, called neurotransmitters, which transmit signals between brain cells. Chlorpromazine was found to block receptors for a neurotransmitter called dopamine, and imipramine was found to augment the actions of two brain chemicals, norepinephrine and serotonin. Both drugs have many other effects on brain chemistry, but their modifications of neurotransmission are believed to be responsible for their therapeutic actions.

Once these brain effects were discovered pharmaceutical companies set out to find new drugs that retain the therapeutic effects of chlorpromazine or imipramine but that lack their undesirable properties. Over the years they created a number of new drugs. For example, we now have a variety of new drugs for schizophrenia including Risperdal, Zyprexa, Seroquel, Geodon, and Abilify. All block dopamine receptors in the brain, a property they share with chlorpromazine. But each has other properties that make it a better drug. We also have a variety of new drugs that share certain properties of imipramine. Among them are Prozac, Zoloft, Paxil, Luvox, Celexa, Lexapro and Effexor. All are descendants of imipramine; and all are free of some of imipramine's undesirable side effects.

But, despite decades of tinkering, the drugs we presently use still have serious limitations. First, they don't always work. Second, they still have many undesirable side effects. Instead of continuing to invest in more minor improvements, pharmaceutical companies are becoming interested in a new approach to psychiatric drug development.

The new approach takes advantage of the fact that there are genetic vulnerabilities to mental disorders. For example, if you have a mother or a father with schizophrenia, your chances of developing schizophrenia are about nine times as great as those of other people. One out of every hundred people in America, or in the world, is schizophrenic, but nine out of a hundred children of a parent with schizophrenia develop this disorder, usually by early adulthood. Likewise, for people who develop severe depression early in life—before approximately age 20 to 25—the risk to their children is about four or five times as great as that to the rest of the population. This and other information point to a genetic vulnerability to these mental disorders. This doesn't mean that if you inherit the gene variants or the combination of gene variants that increase this vulnerability, you will inexorably develop the condition. Nonetheless, now that the human genome has been sequenced, and now that the variations of human genes are being tabulated, the stage has been set for the discovery of the gene variants that are responsible for such vulnerabilities. Finding these gene variants will lead to the identification of the brain circuits whose functions they affect. This, in turn, will point the way to the development of new medications that help to normalize these brain functions. Hopefully, drugs designed in this way will be better than the ones we now have.

Of course, all this remains to be seen. It's one thing to know that a biological variation exists, but it's not easy to figure out a way to alter the consequences of this variation with a drug. Furthermore, drugs designed in this way are just as prone to side effects as those that were stumbled upon in the past. Nevertheless, this new approach should eventually provide us with highly effective new medications for depression, schizophrenia, and the other prevalent mental disorders that plague mankind.

~~~

My interest in psychiatry didn't begin with an interest in brain biology. When I went to Columbia College in the 1950s I was mainly interested in studying human behavior. In the Columbia of that era the main players were Sigmund Freud and B.F. Skinner. Freud's influence was still very great, and his Civilization and its Discontents was required reading for freshmen. But in the psychology department of Columbia College Skinner's behaviorism ruled, whereas Freud was viewed with great suspicion. I became captivated by both types of ideas about behavior and the mind, but it seemed to me that Skinner's quantitative behavioral experiments were more likely to be productive than Freud's clinical observations and theories. There was this great battle between them. Skinner said all that Freudian stuff was baloney because it was just made up, and one could not test it experimentally. And the psychoanalysts said in response, "Look at all the interesting things we talk about; all you talk about is lever-pressing in rats." I grew up in that milieu. Those were the interesting issues in the mid-'50s.

I liked Skinner's work very much, but I also really liked chemistry and biology, and I had the sense that maybe these hard sciences could be applied to psychological problems. This led me to medical school, but I was soon disappointed because the psychiatrists at Columbia Medical School were in that era very dogmatically psychoanalytic, and not experimental, and not open to discussion. They were very orthodox in that period. So I transiently changed directions and turned to endocrinology. I became interested in hormones because hormones affect the brain, and I wound up doing some work in internal medicine. Then I did a postdoctoral fellowship at the National Institutes of Health. This was in the early '60s, when the National Institutes of Health was absolutely in its golden age. Through a series of lucky circumstances, I arrived as an endocrinologist and ran into a man named Gordon Tomkins, who was also an endocrinologist and one of these natural teachers and avuncular people who would gather young people around him. He became my mentor, and I remember he took me into his office and said, "You know, endocrinology is really molecular biology. It's all genes. What hormones do is regulate the function of genes." This was in 1960 or 1961 and his vision has been completely verified. This was before we had good tools to study them. He said, "If you're interested in this stuff, and you can work on the mind too, just go and study molecular biology."

Molecular biology was completely new. I didn't even know what it was. Remember, the Watson/Crick double helix was discovered in 1953, and this was 1960—not that long afterwards, and the double helix was not generally appreciated—and very few people were available to teach me how to become a molecular biologist. Fortunately Gordon had a young person named Marshall Nirenberg in his group, and Gordon suggested him. But Marshall was unknown at the time, and I said, "I want to work with you, Gordon." He said, "No, you can't work with me, because you don't know nothin', so go work with Marshall. He needs somebody in his lab, so work with him." Three weeks after I joined Marshall, he discovered that polyuridylic acid, a polynucleotide which is a string of uridines, instructs the protein-synthesizing machinery to make an unusual protein made up of a string of phenylalanines. This suggested that the sequence u-u-u—uridine, uridine, uridine—codes for the amino acid phenylalanine, and this was the beginning of the deciphering of the genetic code. It is the key to understanding how the language of nucleic acids—the language of the genes—is translated into the language of proteins which control the functions of living things. As a complete novice of a doc, a knee-tapping, stethoscope-carrying doc, I suddenly found myself working on one of the great scientific problems of the time, the genetic code. About 6 or 7 years later Nirenberg won the Nobel Prize for the genetic code. And if you open up the Alberts cell biology textbook, Molecular Biology of the Cell, the inside cover shows the genetic code in which u-u-u encodes phenylalanine and other strings of three nucleotides encode the other amino acid components of proteins. The genetic code has the same importance in molecular biology that the Periodic Table of the Elements has in chemistry. If you open any chemistry book there's the table of the elements; in molecular biology or biology books it's the genetic code.
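
As a rough illustration of the coding scheme he is describing (a toy sketch, not a reconstruction of Nirenberg's experiment), the lookup from nucleotide triplets to amino acids can be written as a small table, and a string of uridines then translates into a string of phenylalanines. The handful of entries below stands in for the full 64-codon table.

```python
# Toy lookup table: RNA codons to amino acids (only a few of the 64 entries).
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",             # phenylalanine
    "AAA": "Lys", "GGG": "Gly",
    "AUG": "Met",                           # also the start codon
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(rna):
    """Read the RNA three bases at a time and look up each amino acid."""
    protein = []
    for i in range(0, len(rna) - len(rna) % 3, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3])
        if amino_acid is None:              # stop codon (or a codon missing from this toy table)
            break
        protein.append(amino_acid)
    return protein

print(translate("UUUUUUUUU"))               # poly-U gives ['Phe', 'Phe', 'Phe']
```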

All of a sudden I found myself—just completely by good luck—immersed in molecular biology and in the company of some really famous people. Soon Crick came to visit, and Watson came to visit. Gordon also encouraged me to follow my interest in the brain and in psychiatry, so I did a residency in psychiatry, knowing that I would not have to be a psychoanalyst because I now knew how to do science. I had been transformed into a young molecular biologist, and I could try to bring molecular biological approaches to research on mental illness. Through a series of accidental circumstances my career was set, and ever since then I've been trying to use molecular biological tools to solve problems that are relevant to psychiatry.

I am very interested in studies of the genetics of mental disorders because they take advantage of the accumulation of vast amounts of knowledge about human genetic structure and genetic diversity. The next simple-minded thing to do is to identify gene variants that influence particular behavioral propensities, and that's going to be doable in the next 5, 10, or 20 years. The basic sequence of the human genome is known. It's known that there are a small number of very common gene variants. What we need to do is correlate the various gene variants with various behavioral propensities. And as the ways of crunching genetic data are improved, as it becomes cheaper and cheaper to take DNA samples from large numbers of people and look at all the variants in each individual person, and as computers are available to integrate all that data, we're going to learn a lot about the genetic propensities towards different kinds of human behaviors. That's a certainty; that's definitely going to happen.

A lot of it comes down to economics. When sequencing DNA samples from large numbers of people becomes affordable—which will probably happen fairly soon, for research purposes at least—we will be in a position to learn a lot about gene combinations and the propensities to certain kinds of behavioral traits. Or we will find that we can't figure it out. That may happen too. It may turn out that although certain propensities are heritable they're so complicated—there are so many genes involved—that we can't say a great deal about it. I suspect that that's not always going to be the case. I suspect that there are going to be some important gene variants that are going to have significant functions in sorting out general directions of personality. It's an experimental question that will be settled one way or the other, and it's certainly the way to go. The technology is available, and becoming available, and its importance is tremendous. It's going to be important for understanding ourselves, and it's also going to be important clinically for helping people with various kinds of mental distress. There's going to be this huge repository of genetic variations and attempts to correlate them with certain behavioral propensities. And although it's going to be really complicated, and a lot of the stuff is going to be undecipherable because the changes in propensities might be one percent or two percent—stuff that's going to be hard to pick out of the noise or hard to say is really important—I suspect that there are going to be some genetic variations that have substantial effects on the risk of certain mental disorders.
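
A minimal sketch of the kind of correlation he describes, with all numbers invented purely for illustration: count how often a single hypothetical variant is carried by people with a trait and by people without it, then compare the rates. Real studies repeat comparisons like this across enormous numbers of variants and people, which is why the data-crunching he mentions matters so much.

```python
# Hypothetical carrier counts for one gene variant (invented numbers).
cases_with, cases_without = 320, 680          # people with the trait
controls_with, controls_without = 250, 750    # people without the trait

case_rate = cases_with / (cases_with + cases_without)
control_rate = controls_with / (controls_with + controls_without)
odds_ratio = (cases_with / cases_without) / (controls_with / controls_without)

print(f"carrier rate among cases:    {case_rate:.1%}")
print(f"carrier rate among controls: {control_rate:.1%}")
print(f"odds ratio: {odds_ratio:.2f}")        # above 1 suggests the variant travels with the trait
```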

The reason I have hope is the Alzheimer's story. You know Alzheimer's disease, the common kind that happens in old people—not the rare early-onset form, which is a Mendelian disease that happens to people before 50, but the very common older-onset form which usually begins to strike at the age of 70 or 80. The older-onset form is influenced, very importantly, by variants of one gene: the Apo-E gene. The Apo-E gene is a gene that encodes a protein that transports lipids in the blood. There are three variants that are each very common in the human genome—Apo-E 2, Apo-E 3, Apo-E 4. What's become clear is that if you inherit two Apo-E 4s, one from mom and one from dad, then your risk for Alzheimer's Disease by the age of 70 is about 50%. You have a huge increase in risk for this terrible, dementing illness, this change in your personality and cognitive ability. It's not the whole story—there are people with two Apo-E 4's who live to be 90 and don't get Alzheimer's Disease—but it's an example of a common gene variant, which if you happen to inherit it, especially in two copies, significantly increases your risk of a huge mental change in the progress of life.

That may turn out to be an unusual discovery; that is, it may turn out that none of the genes that control other important psychological oddities—like becoming schizophrenic or manic depressive—have the same propensity to do this that the Apo-E 4 gene does for Alzheimer's Disease. Their power, the percentage variance that they are responsible for, may be significantly less, but there are going to be some gene variants that are going to turn out to be really important in increasing the risk of developing such mental disorders, and the hope is that we're going to learn a lot from them—among other things about how to make new drugs. Once you find a gene you're going to be able to find out which enzyme, structural protein, or regulatory protein it codes for, which in turn gets you into a functional biological pathway, which in turn can help you design a drug.

The story of Alzheimer's Disease is the story of showing how one very common gene variant has powerful effects on one form of behavioral problem, albeit an odd one, one that progressively occurs in old age, but one that we're all very interested in. Apo-E 4 is not a rare, weird mutation. It's a very important variant of a gene that everybody has. This gives me hope that there will turn out to be other common gene variants that make people prone to things like depression. There may be really powerful ones that account for a significant part of the variance. Knowing about those will really help us a lot in diagnosis, and also in designing new medication. It remains to be seen how well this works out, but the neat thing about the genome and the gene variant stuff that's being explored now is that there's going to be this pile of information and it's going to be analyzable, because computer technology is such that it can be parsed out. We're going to learn a lot. How useful it's going to be remains to be seen, but let us hope that it will be very useful.

The hot new technologies that psychiatric scientists are now using include not only genetics but also brain imaging. Brain imaging has brought another dimension to studies of human brain functions because you can, in real time, look at brain regions that are active in certain kinds of mental processes, and can look at differences in different people—that is, people with different conditions or propensities—to see in real time how their brains might be operating. There are going to be opportunities to use knowledge about specific gene variants that can be tied to this imaging, so that one can look at more than just measures like behavior that you assess with conversation or questionnaires. It will be possible to correlate knowledge about genetic variation with knowledge about how specific brains operate in specific circumstances, as looked at with various kinds of functional magnetic resonance imaging. Right now our ideas about mental disorders are mainly based on interviews, questionnaires, and observations of behavior. Being able to look at what's going on inside the human brain, once considered to be an inscrutable black box, is turning out to be quite informative.


Re: WHY GORDIAN SOFTWARE HAS CONVINCED ME TO BELIEVE IN THE REALITY OF CATS AND APPLES A Talk with Jaron Lanier


Responses by Dylan Evans, Daniel C. Dennett, Steve Grand, Nicholas Humphrey, Clifford Pickover, Marvin Minsky, Lanier replies, George Dyson, Steven R. Quartz, Lee Smolin, Charles Simonyi


Dylan Evans

I was saddened to see Edge publish the confused ramblings of Jaron Lanier (Edge #128). I offer the following comments with some hesitation, as they may serve to endow Lanier's nonsense with an importance it does not deserve:

1. Lanier's main objection to the work of Turing, von Neumann, and the other members of `the first generation of computer scientists' seems to boil down to the fact that they all focused on serial machines (sequential processors), while Lanier thinks that surfaces (parallel processors) would have been a better starting point. This completely misses one of the most important insights of Turing and von Neumann - namely, that the distinction between serial and parallel processors is trivial, because any parallel machine can be simulated on a serial machine with only a negligible loss of efficiency. In other words, the findings of Turing and von Neumann apply to both serial and parallel machines, so it makes no difference which type you focus on. At one point, Lanier seems to admit this, when he states that `the distinction between protocols and patterns is not absolute - one can in theory convert between them', but in the very next sentence he goes on to say that `it's an important distinction in practice, because the conversion is often beyond us'. The latter sentence is false - it is incredibly easy to simulate parallel devices on serial machines. Indeed, virtually every parallel device ever `built' has been built in software that runs on a serial machine.
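
A minimal sketch of the simulation Evans has in mind, using a toy synchronous machine (an elementary cellular automaton, here rule 110) to stand in for a parallel device: a serial loop computes every cell's next state before committing any of them, so the "simultaneous" update is reproduced exactly.

```python
def step(cells):
    """One synchronous update of all cells, computed serially into a buffer."""
    n = len(cells)
    nxt = [0] * n
    for i in range(n):                       # serial loop standing in for parallel hardware
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (mid << 1) | right
        nxt[i] = (110 >> pattern) & 1        # rule 110 lookup table
    return nxt

cells = [0] * 31 + [1] + [0] * 31
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```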

2. Lanier claims that parallel machines are somehow more biological or `biomimetic' than serial machines, because 'the world as our nervous systems know it is not based on single point measurements, but on surfaces'. Unfortunately for Lanier, the body is an ambiguous metaphor. True, it has surfaces - sensors that are massively parallel - such as retinas (to use Lanier's example). But it also has wires - sensory systems that are serial - the clearest example of which is hearing. Indeed, the fundamental technology that enabled human civilisation - language - first arose as an acoustic phenomenon because the serial nature of language was most easily accommodated by a serial sensory system. The birth of writing represented the first means of transforming an originally parallel modality (vision) into a serial device. In fact, progress almost always consists in moving from parallel devices to serial ones, not vice versa. Even the `biomimetic robots' that Lanier admires are serial machines at heart.

3. Lanier waxes lyrical about his alternative approach to software, which he dubs 'phenotropic'. But he fails to say whether this software will run on serial machines or not. If it will, then it won't represent the fundamental breakthrough that Lanier seems to think it will. If it won't run on serial processors, then where is the parallel machine that it will run on? Until Lanier can produce such a parallel machine, and show it to be exponentially faster than the serial machines we currently have, his claims will have to be regarded as the kind of pie-in-the-sky that he accuses most computer scientists of indulging in. Real computer scientists, of course, do not really indulge in pie-in-the-sky. The reason that some of them talk about 'ideal computers' rather than 'real computers as we observe them' has nothing to do with a tendency to fantasise, as Lanier implies. Rather, it is because they are interested in discovering the laws governing all computers, not just the ones we currently build.

Best wishes,

Dylan

DYLAN EVANS is Research Officer in Evolutionary Robotics, Centre for Biomimetics and Natural Technology, Department of Mechanical Engineering,  University of Bath. His book, Introducing Evolutionary Psychology, was required reading for the main actors in "The Matrix".


Daniel C. Dennett

I read Dylan's response to Jaron's piece, and Dylan has it right. I'm not tempted to write a reply, even though Jaron has some curious ideas about what my view is (or might be—you can tell he's not really comfortable attributing these views to me, the way he qualifies it). And what amazes me is that he can't see that he's doing exactly the thing he chastises the early AI community for doing: getting starry-eyed about a toy model that might—might—scale up and might not. There are a few interesting ideas in his ramblings, but it's his job to clean them up and present them in some sort of proper marching order, not ours. Until he does this, there's nothing to reply to.

Dan

DANIEL C. DENNETT is University Professor, Professor of Philosophy, and Director of the Center for Cognitive Studies at Tufts University. He is the author of Consciousness Explained; Darwin's Dangerous Idea; and Freedom Evolves.


Steve Grand

I admit I didn't understand the latter half of Jaron's paper, so I can't yet comment on it, but I'd like to respond to a few of Dylan's comments with a plea not to be quite so dismissive.

[Dylan writes] "...because any parallel machine can be simulated on a serial machine with only a negligible loss of efficiency. In other words, the findings of Turing and von Neumann apply to both serial and parallel machines, so it makes no difference which type you focus on."

It's true that in principle any parallel discrete time machine can be implemented on a serial machine, but I think Dylan's "negligible loss of efficiency" comment was waving rather an airy hand over something quite important. Serializing a parallel process requires a proportional increase in computation time, and sometimes such quantitative changes have qualitative consequences—after all, the essential difference between a movie and a slide show is merely quantitative, but because a neural threshold is crossed at around 24 frames/second there's also a fairly profound qualitative difference to us as observers. More importantly, this is why continuous time processes can't always be serialized, since they can lead to a Zeno's Paradox of infinite computation over infinitesimal time slices.

Speaking from a purely practical point of view, time matters. In my work I routinely model parallel systems consisting of a few hundred thousand neurons. I can model these in serial form, luckily, but it's only barely feasible to do so in real time, and I can't slow down gravity for the benefit of my robot. Moore's Law isn't going to help me much either. I'd far rather have access to a million tiny processors than one big one, and the compromises I have to make at the moment (specifically the artifacts that serialization introduces) can really cloud my perception of the kinds of spatial computation I'm trying, with such grotesque inefficiency, to simulate.
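
A rough sketch of the practical cost Grand is pointing to, with arbitrary network sizes, arbitrary sparse connectivity, and an arbitrary one-millisecond real-time budget: serializing a "simultaneous" update makes the wall-clock time per simulated step grow with the number of units, until real time can no longer be kept.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
k = 64            # incoming connections per unit (a stand-in for sparse wiring)
steps = 20        # simulated timesteps to average over

for n in (1_000, 10_000, 100_000):
    targets = rng.integers(0, n, size=(n, k))            # which units each unit listens to
    weights = rng.standard_normal((n, k)) / np.sqrt(k)   # connection strengths
    state = rng.standard_normal(n)

    t0 = time.perf_counter()
    for _ in range(steps):
        # One "simultaneous" network update, serialized: gather, weight, squash.
        state = np.tanh((weights * state[targets]).sum(axis=1))
    per_step_ms = (time.perf_counter() - t0) * 1000 / steps

    verdict = "ok" if per_step_ms < 1.0 else "too slow"
    print(f"{n:>7} units: {per_step_ms:7.2f} ms per step ({verdict} for a 1 ms budget)")
```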

Which brings me to the question of whether it "makes no difference which type you focus on".

Turing's famous machine undoubtedly made us focus very heavily on "definite methods"—i.e. algorithms—and algorithms are not the only ways to solve problems. Turing himself realized this, which is perhaps why he did a little work on "unorganized machines" (something akin to neural networks). Many systems involving simultaneous interactions can be satisfactorily approximated in a serial computer, but it doesn't follow that this is the best way of thinking about them, or that solutions of this type might even occur to us while we're wearing serial, discrete time blinkers.

I agree with Jaron that the digital computer has so deeply ingrained itself in our consciousness that we find it hard to see that there are other ways to compute. I'd happily lay a Long Bet that Moore's Law becomes utterly irrelevant in the not-too-distant future, when we suddenly discover new ways to compute things that don't require a stepwise architecture, and I'd agree with Jaron that this new way is likely to be based on spatial patterns (although not pattern recognition).

Sound, incidentally, isn't entirely processed as a temporal stream. Brains can't avoid the fact that sound waves arrive serially, but since speech recognition requires so much contextual and out-of-sequence processing, I bet the brain does its utmost to convert this temporal stream into a spatial form, so that its elements can be overlapped, compared and integrated.

The very first thing that the cochlea does is convert sound frequency into a spatial representation, and this type of coding is retained in the auditory cortex. In fact everything in the cortex seems to be coded spatially. Some parts use very concrete coordinate frames, such as retinotopic or somatotopic coordinates, or shoulder-centred motion vectors, while other parts (such as the Temporal lobes) seem to employ more abstract coordinate spaces, such as the space of all fruit and vegetables.
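
A small sketch of the frequency-to-place conversion described above, using a short-time Fourier transform as a crude stand-in for the cochlea's analysis: a one-dimensional stream of samples becomes a two-dimensional map in which frequency is laid out along a spatial axis. The tone frequencies and frame sizes are arbitrary.

```python
import numpy as np

rate = 16_000                                  # samples per second
t = np.arange(rate) / rate                     # one second of "sound"
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)

frame, hop = 512, 256
window = np.hanning(frame)
frames = [signal[i:i + frame] * window for i in range(0, len(signal) - frame, hop)]

# Rows are moments in time; columns are "places" along a frequency axis.
spectrogram = np.abs(np.fft.rfft(frames, axis=1))

loudest_bin = spectrogram.mean(axis=0).argmax()
print(f"dominant frequency: about {loudest_bin * rate / frame:.0f} Hz")   # close to 440 Hz
```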

My AI research leads me to suspect that some of the most crucial components of cortical computation rely on the mathematics of shapes and surfaces inside such coordinate frames—a kind of geometric computation, as opposed to a numerical, sequential one. Luckily for me, you can implement at least most of these spatial transformations using a serial computer, but I find I have to think very distinctly in two modes: as a programmer when creating the palette of neurons and neuromodulators, and then as a... what? a biologist? an artist? a geometer? ...when thinking about the neural computations. The former mindset doesn't work at all well in the latter environment. Connectionism gave us a very distorted view of the brain, as if it were a neat, discrete wiring diagram, when in reality it's more accurate to describe brain tissue as a kind of structured gel.

As Jaron points out, Gabor wavelets and Fourier transforms are (probably) commonplace in the brain. The orientation detectors of primary visual cortex are perhaps best described as Gabor filters, sensitive to both orientation and spatial frequency, even though conventional wisdom sees them as rather more discrete and tidy "edge detectors". The point spread function of nervous tissue is absolutely huge, so signals tend to smear out really quickly in the brain, and yet we manage to perceive objects smaller than the theoretical visual acuity of the retina, so some very distributed, very fuzzy, yet rather lossless computation seems to be going on.
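
For readers who want to see what such a filter looks like, here is a minimal sketch (all parameter values are arbitrary): a Gabor filter is a sinusoidal carrier under a Gaussian envelope, and it responds strongly to a grating of matching orientation and spatial frequency while barely responding to an orthogonal one.

```python
import numpy as np

def gabor(size=21, wavelength=6.0, theta=0.0, sigma=4.0):
    """A size x size Gabor kernel: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)        # rotate the carrier's axis
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength)
    return envelope * carrier

# A vertical grating (intensity varies along x) on the same centered grid.
half = 10
y, x = np.mgrid[-half:half + 1, -half:half + 1]
vertical_grating = np.cos(2.0 * np.pi * x / 6.0)

for name, theta in (("vertical", 0.0), ("horizontal", np.pi / 2)):
    response = float(np.sum(gabor(theta=theta) * vertical_grating))
    print(f"{name:>10} filter response: {response:8.2f}")
```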

We've only relatively recently "rediscovered" the power of such spatial and convolved forms of computation—ironically in digital signal processors. These are conventional von Neumann-style serial processors, but the kind of computation going on inside them is very much more overlapping and fuzzy, albeit usually one-dimensional. Incidentally, optical holograms can perform convolution, deconvolution and Fourier transforms, among other things, at the speed of light, acting on massively parallel data sets. It's true that we can do the same thing (somewhat more slowly) on a digital computer, but I have a strong feeling that these more distributed and spatial processes are best thought about in their own terms, and only later, if ever, translated into serial form. Such "holographic" processes may well be where the next paradigm shift in computation comes from.
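
A brief numerical aside on the convolution-and-Fourier connection mentioned here: convolving two signals directly gives the same result as multiplying their Fourier transforms and transforming back, which is roughly the operation an optical hologram performs all at once with light. The signals below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(256)
kernel = rng.standard_normal(32)

direct = np.convolve(signal, kernel)          # straightforward serial convolution

n = len(signal) + len(kernel) - 1             # full length of the convolution
via_fft = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

print(np.allclose(direct, via_fft))           # True: the two routes agree
```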

Sometimes what you can see depends on how you look at it, and we shouldn't underestimate the power of a mere shift in viewpoint when it comes to making breakthroughs. Try recognizing an apple from the serial trace of an oscilloscope attached to a video camera that is pointed at an apple, and this fact becomes obvious.

I have to say I couldn't really find anything new in what Jaron says—if anything it seems to be harking back to pre-digital ideas, which is no bad thing—but I definitely don't think such concepts should be dismissed out of hand.

STEVE GRAND is an artificial life researcher and creator of Lucy, a robot baby orangutan. He is the founder of Cyberlife Research and the author of Creation: Life and How to Make It.


Nicholas Humphrey

Human consciousness as an ontology overlaid on the world? No gross, or everyday, objects without it... neither apples nor houses? "I went in that direction," Lanier says, "and became mystical about everyday objects."

The poet, Rilke, went the same way (Ninth Elegy, Duino Elegies, Leishman translation, 1922):

... all this
that's here, so fleeting, seems to require us and strangely
concerns us... Are we, perhaps, here just for saying: House,
Bridge, Fountain, Gate, Jug, Fruit tree, Window, —
possibly: Pillar, Tower?... but for saying, remember,
oh, for such saying as never the things themselves
hoped so intensely to be.

But, then, as another poet, W. H. Auden, said of poets: "The reason why it is so difficult for a poet not to tell lies is that in poetry all facts and all beliefs cease to be true or false and become interesting possibilities."

Best,

Nick

NICHOLAS HUMPHREY, School Professor at the London School of Economics, is a theoretical psychologist and author of A History of the Mind, Leaps of Faith, and The Mind Made Flesh.


Clifford Pickover

Jaron Lanier certainly covers the gamut, from consciousness, to brains, to computers of the future. I would like to counter by asking the group a question that has been on my mind lately: Would you pay $2000 for a "Turbing"? Let me explain what I mean....

In 1950, Alan Turing proposed that if a computer could successfully mimic a human during an informal exchange of text messages, then, for most practical purposes, the computer might be considered intelligent. This soon became known as the "Turing test," and it has since led to endless academic debate.

Opponents of Turing's behavioral criterion of intelligence argue that it is not relevant. This camp holds that what matters is whether the computer has genuine cognitive states, not how it behaves. They say that computers can never have real thoughts or mental states of their own; they can merely simulate thought and intelligence. If such a machine passes the Turing Test, this only proves that it is good at simulating a thinking entity.

Holders of this position also sometimes suggest that only organic things can be conscious. If you believe that only flesh and blood can support consciousness, then it would be very difficult to create conscious machines. But to my way of thinking, there's no reason to exclude the possibility of non-organic sentient beings. If you could make a copy of your brain with the same structure but using different materials, the copy would think it was you.

I call these "humanlike" entities Turing-beings or "Turbings." If our thoughts and consciousness do not depend on the actual substances in our brains but rather on the structures, patterns, and relationships between parts, then Turbings could think. But even if they do not really think but rather act as if they are thinking, would you pay $2000 for a Turbing—a Rubik's-cube sized device that would converse with you in a way that was indistinguishable from a human? Why?

CLIFFORD PICKOVER is a research staff member at IBM's T. J. Watson Research Center, in Yorktown Heights, New York. His books include Time: A Traveler's Guide; Surfing Through Hyperspace; and Black Holes: A Traveler's Guide.


Marvin Minsky

I agree with both critics (Dylan Evans and Dan Dennett).

Papert and I once proved that, in general, parallel processes end up using more computational steps than do serial processes that perform the same computations. And that, in fact, when some processes have to wait until certain other ones complete their jobs, the amount of computation will tend to be larger by a factor proportional to the amount of parallelism.

Of course, in cases in which almost all the subcomputations are more independent, the total time consumed can be much less (again in proportion to the amount of parallelism)—but the resources and energy consumed will still be larger. Of course, for most animals, speed is what counts; otherwise Dylan Evans is right, and Lanier's analysis seems in serious need of a better idea.

Here is the presumably out-of-print reference: Marvin Minsky and Seymour Papert, "On Some Associative, Parallel and Analog Computations," in E. L. Jacks, ed., Associative Information Techniques, American Elsevier Publishing, Inc., 1971, pp. 27-47.

MARVIN MINSKY, mathematician and computer scientist at MIT, is a leader of the second generation of computer scientists and one of the fathers of AI. He is the author of The Society of Mind.


Jaron Lanier

It's a great thing to face a tough technical crowd. So long as you don't let it get to you, it's the most efficient way to refine your ideas, find new collaborators, and gain the motivation to prove critics wrong.

In this instance, though, I think the critical response misfired.

To understand what I mean, readers can perform a simple exercise. Use a text search tool and apply it to my comments on "Gordian software." See if you can find an instance of the word "Parallel." You will find that the word does not appear.

That's odd, isn't it? You've just read some scathing criticisms about claims I'm said to have made about parallel computer architectures, and it might seem difficult to make those claims without using the word.

It's possible to imagine a non-technical reader confusing what I was calling "surfaces" with something else they might have read about, which is called parallel computation. Both have more than one dimension. But that's only a metaphorical similarity. Any technically educated reader would be hard-pressed to make that mistake.

For non-technical readers who want to know why they're different: "Surfaces" are about approximation. They simulate the sampling process by which digital systems interact with the physical world and apply that form of connection to the internal world of computer architecture. They are an alternative to what I called the "high wire act of perfect protocol adherence" that is used to make internal connections these days. Parallel architectures, at least as we know them, require the highest of high wire acts. In parallel designs, whole new classes of tiny errors with catastrophic consequences must be foreseen in order to be avoided. Surfaces use the technique of approximation in order to reduce the negative effects of small errors. Parallel architectures are not implied by the fuzzy approach to architecture my piece explored.
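
A loose, hypothetical illustration of the contrast drawn here (an editor's sketch, not Lanier's actual phenotropic design): a rigid protocol check fails on any one-character discrepancy, while an approximate "surface"-style comparison scores overall similarity and degrades gracefully. The message format and the 0.9 tolerance are invented for the example.

```python
def protocol_match(expected, received):
    """Rigid protocol adherence: one wrong character means total failure."""
    return expected == received

def surface_match(expected, received, tolerance=0.9):
    """Approximate matching: score position-by-position agreement, accept anything close."""
    overlap = sum(a == b for a, b in zip(expected, received))
    return overlap / max(len(expected), len(received)) >= tolerance

message = "SET temperature=21.5 units=C"
noisy = "SET temperature=21.5 units=c"        # a single-character slip
print(protocol_match(message, noisy))         # False: brittle, "high wire" behavior
print(surface_match(message, noisy))          # True: the small error is absorbed
```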

It didn't occur to me that anyone would confuse these two very different things, so I made no mention of parallel architectures at all.

The first respondent, Dylan Evans, reacted as if I'd made claims about parallel architectures. It is possible that Evans is making the case that I'm inevitably or inadvertently talking about something that I don't think I'm talking about, but the most likely explanation is that a misunderstanding took place. Perhaps I was not clear enough, or perhaps he made assumptions about what I would say and that colored his reading. Dan Dennett then endorsed his remarks. There's probably a grain of legitimate criticism, at least in Dennett's mind, and perhaps someday I'll hear it.

Steve Grand then addressed some of the ideas about parallelism brought up by other respondents, but also pointed out that many of the ideas in my piece were not new, which is correct, and something that I made clear. What was new was not the techniques themselves but the notion of applying techniques that have recently worked well in robotics to binding in modular software architectures. I also hoped to write what I think is the first non-technical explanation of some of these techniques, like the wavelet transform.

At this point, it seemed the discussion was getting back on track. But then Marvin Minsky posted an endorsement of Dennett's endorsement of Evans. Marvin was an essential mentor to me when I was younger and I simply had to ask him what was going on. I would like to quote his response:

"Oops. In fact, I failed to read the paper and only read the critics, etc. Just what I tell students never to do: first read the source to see whether or not the critics have (probably) missed the point."

There is a certain competitive, sometimes quite macho dynamic in technical discussions, especially when someone is saying something unfamiliar. I expect that and wouldn't participate in this forum if I was too delicate to take the heat. Once in a while, though, that dynamic gets the better of us and we're drawn off topic.

What I'd like to do at this point is add some background to my argument and refer to some other researchers addressing similar concerns in different ways, because I think this will help to frame what I'm doing and might help readers who are having trouble placing my thoughts in the context of other ideas.

Computer science is like rocket science in this sense: You don't know if the rocket works until you launch it. No matter how lovely and elegant it might be on the ground, you really want to test how it performs once launched. The analog to launching a rocket in computer science is letting a software idea you've seen work on a small scale grow to a large scale. As I pointed out, it's been relatively easy in the history of computer science to make impressive little programs, but hard to make useful large programs. Anyone with eyes to see will acknowledge that most of our lovely rockets are misfiring.

An essential historical document is Fred Brooks's book The Mythical Man-Month, written when the first intimations of the software scaling problem became clear.

A good introduction to the current mainstream response to what has unquestionably become a crisis is the Nov. 2003 issue of MIT's Technology Review magazine, which is themed on this topic. There you can read up on some of the most visible recent ideas on how to address the problem. It's natural to expect a range of proposals on how to respond to a crisis. The proposals reported in TR seem too conservative to me. They are for the most part saying something like, "This time we'll do what we did before but with more discipline and really, really paying attention to how we could screw up." My guess is that we've already seen how disciplined groups of people are capable of being when they make giant software and should accept that as a given rather than hoping it will change.

One doesn't have to hope that one idea will fix everything to search for radical new ideas that might help to some degree. A one-liner that captures the approach described in the "Gordian" piece is that I want to recast some techniques that are working for robots and apply them to the innards of software architectures. I'm not the only radical looking at the problem of scalability. A completely different approach, for instance, is taken by Cordell Green and others who are trying to scale up the idea of logic-based specification as a way to make error-free programs. Yet another batch of ideas can be found in the June issue of Scientific American; see the cover story, which actually does describe a way to apply parallel computation to this problem.

Whether radical or not, a wide range of approaches is called for because the problem is both long-standing and important.

This is implicit in Nicholas Humphrey's response to the second portion of the essay, which was about philosophy rather than software architecture. Just as it's natural for computer scientists to wonder what makes a mind, it's also natural to wonder what makes an object, in the ordinary sense of the word. This is our rediscovery of old questions in our new light.


George Dyson

The latest manifesto from Jaron Lanier raises important points. However, it is unfair to attribute to Alan Turing, Norbert Wiener, or John von Neumann (& perhaps Claude Shannon) the limitations of unforgiving protocols and Gordian codes. These pioneers were deeply interested in probabilistic architectures and the development of techniques similar to what Lanier calls phenotropic codes. The fact that one particular computational subspecies became so successful is our problem (if it's a problem) not theirs.

People designing or building computers (serial or parallel; flexible or inflexible; phenotropic or not) are going to keep talking about wires, whether in metaphor or in metal, for a long time to come. As Danny Hillis has explained: "memory locations are simply wires turned sideways in time." If there's a metaphor problem, it's a more subtle one, that we still tend to think that we're sending a coded message to another location, whereas what we're actually doing is replicating the code on the remote host.

In the 1950s it was difficult to imagine hardware ever becoming reliable enough to allow running megabyte strings of code. Von Neumann's "Reliable Organization of Unreliable Elements" (1951) assumed reliable code and unreliable switches, not, as it turned out, the other way around. But the result is really the same (and also applies to coding reliable organisms using unreliable nucleic acids, conveying reliable meaning using unreliable language, and the seemingly intractable problem of assigning large software projects to thousands of people at once).

Von Neumann fleshed out these ideas in a series of six lectures titled "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" given at Cal Tech on January 4-15, 1952. This formed a comprehensive manifesto for a program similar to Lanier's, though the assumption was that the need for flexible, probabilistic logic would be introduced by the presence of sloppy hardware, not sloppy code. "The structures that I describe are reminiscent of some familiar patterns in the nervous system," he wrote to Warren Weaver on 29 January 1952.

The pioneers of digital computing did not see everything as digitally as some of their followers do today. "Besides," argued von Neumann in a long letter to Norbert Wiener, 29 November 1946 (discussing the human nervous system and a proposed program to attempt to emulate such a system one cell at a time), "the system is not even purely digital (i.e. neural): It is intimately connected to a very complex analogy; (i.e. humoral or hormonal) system, and almost every feedback loop goes through both sectors, if not through the 'outside' world (i.e. the world outside the epidermis or within the digestive system) as well." Von Neumann believed in the reality of cats and apples too.

Turing's universal machine, to prove a mathematical point, took an extreme, linear view of the computational universe, but this does not mean that higher-dimensional surfaces were ignored. Von Neumann, while orchestrating the physical realization of Turing's machine, thought more in terms of matrices (and cellular inhabitants thereof) than tapes. Remember that the original IAS computer (the archetype "von Neumann machine") consisted of a 32 x 32 x 40 matrix, with processing performed in parallel on the 40-bit side. In an outline for the manuscript of a general theory of automata left unfinished at von Neumann's death, Chapter 1 is labeled "Turing!" Chapter 2 is labeled "Not Turing!" Template-based addressing was a key element in von Neumann's overall plan.

In the computational universe of Turing, von Neumann, and Lanier (which we are all agreed corresponds to, but does not replace, the real world) there are two kinds of bits: bits that represent differences in space, and bits that represent differences in time. Computers (reduced to their essence by Turing and later Minsky) translate between these two kinds of bits, moving between structure (memory) and sequence (code) so freely that the distinction is nearly obscured. What troubles Jaron Lanier is that we have suddenly become very good at storing large structures in memory, but remain very poor at writing long sequences of code. This will change.

I'm not immersed in the world of modern software to the same extent as Jaron Lanier, so it may just be innocence that leads me to take a more optimistic view. If multi-megabyte codes always worked reliably, then I'd be worried that software evolution might stagnate and grind to a halt. Because they so often don't work (and fail, for practical purposes, unpredictably, and in the absence of hardware faults) I'm encouraged in my conviction that real evolution (not just within individual codes, but much more importantly, at the surfaces and interfaces between them) will continue to move ahead. The shift toward template-based addressing, with its built-in tolerance for ambiguity, is the start of the revolution we've been waiting for, I think. It all looks quite biomimetic to me.

GEORGE DYSON, who lives in Bellingham, Washington, is the author of Baidarka, Darwin Among the Machines, and Project Orion.


Steven R. Quartz

I have considerable sympathy for Lanier's complaints, although I disagree with how he's analyzed the situation. I do think he's right that there's something deeply — probably fundamentally — wrong with the current best model of software and computation. But the problems aren't simply with the von Neumann architectures Lanier criticizes.

Most approaches to parallel computation are equally bad, and their problems would also need to be solved by Lanier's alternative model. My own attempts to parallelize — note the not-coincidental alliteration with "paralyze" — code for one of Cray's parallel supercomputers, the T3D, made it all too clear to me that parallel computation suffers from critical problems that have never been solved (does anyone remember C*?).

Nor does there seem to be much prospect in the near term that they will be solved. Roughly, the problem is that as the number of processors increases, it becomes harder to allocate pieces of the problem to the processors efficiently. In practice, most processors in a massively parallel computer end up sitting idle, waiting for others to finish their tasks. Beyond this load-balancing problem, forget about trying to debug parallel code.
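To put a number on the point, here is a back-of-the-envelope simulation (a sketch only, with invented task durations and simulated processors, nothing resembling real T3D code). Tasks of uneven size are dealt out by a fixed rule, the wall-clock time is set by the most heavily loaded processor, and the others sit idle:

# A toy illustration of the load-balancing problem: the task durations are
# made up and the "processors" are simulated, not real parallel hardware.
import random

random.seed(1)
tasks = [random.expovariate(1.0) for _ in range(64)]   # uneven task durations
num_procs = 8

# Static partitioning: processor i gets every num_procs-th task.
loads = [sum(tasks[i::num_procs]) for i in range(num_procs)]

makespan = max(loads)        # wall-clock time: everyone waits for the slowest
busy = sum(loads)            # total useful work actually done
utilization = busy / (makespan * num_procs)

print("per-processor load:", [round(load, 2) for load in loads])
print("utilization: {:.0%} (the rest of the time is spent idle)".format(utilization))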

So, what's wrong?

First, I'd respond to Lanier's comments with a historical note. I think the idea that von Neumann and others were misled by technological metaphors gets things the wrong way around. It is clear from von Neumann's speculations in the First Draft of a Report on the EDVAC that he was drawing on the then state-of-the-art computational neurobiology — McCulloch and Pitts' (1943) results on Turing equivalence for computation in the brain — as grounds for the digital design of the electronic computer. In other words, it was theoretical work in neural computation that influenced the technology, not the other way around. While much has been made of the differences between synchronous serial computation and asynchronous neural computation, the really essential point of similarity is the nonlinearity of both neural processing and the switching elements Shannon explored, which laid the foundation for McCulloch and Pitts' application of computational theory to the brain.
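For readers who have not seen the 1943 construction, a McCulloch-Pitts neuron is just a threshold unit; the sketch below is a minimal rendering of my own, not their notation. The nonlinearity of the threshold is what lets a single unit behave as a logic gate, and from such gates (plus memory) Turing-equivalent machinery can be assembled.

# A minimal rendering of a McCulloch-Pitts threshold neuron: output 1 when the
# weighted sum of binary inputs reaches the threshold, otherwise 0.
def mp_neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# The threshold nonlinearity lets one unit realize a basic logic gate.
AND = lambda x, y: mp_neuron([x, y], [1, 1], 2)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], 1)
NOT = lambda x:    mp_neuron([x],    [-1],   0)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
assert [NOT(x) for x in (0, 1)] == [1, 0]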

In fact, I'd suggest that the real limitation of contemporary computation is our incomplete understanding of nonlinear processing in the brain. We still lack the fundamentals: we don't know how information is encoded, why neurotransmitter release is such a low-probability event, how dendrites compute, whether local volumes of neural tissue compute via diffuse molecules such as nitric oxide, and a host of other fundamental issues. Taking a hint from von Neumann's own reliance on the theoretical neurobiology of his day, these are the issues that ought to inform an alternative computational theory.

I have my doubts that a better understanding of processing in the brain will lead to Lanier's surface-based model, as temporal codes are fundamental properties of neural computation. In addition, although Lanier dismisses "signals on wires" computation, the brain is mostly wires (axons), and the minimization of their wiring is likely a key to how the brain processes information.

Finally, I missed where exactly consciousness comes into Lanier's discussion. Personally, I think consciousness is vastly overrated (not my own, of course, but its role in a science of cognition) — no one has really come up with an argument for what difference it makes, and the overwhelming majority of information processing in the brain is subconscious.

There's a lot of work to be done getting a foothold into subconscious information processing before consciousness becomes an issue, and it only will when someone comes up with a solid argument for why it makes a difference. So far, no one has made that argument, which lends support to the possibility that consciousness is epiphenomenal and will never play a role in theorizing about cognition and behavior.

STEVEN R. QUARTZ is Director of the Social Cognitive Neuroscience Laboratory at Caltech and co-author (with Terrence Sejnowski) of Liars, Lovers, and Heroes: What the New Brain Science Reveals About How We Become Who We Are.


Lee Smolin

Reading the critics of Jaron Lanier's essay, in which he speculates about a new form of computer based on different principles than those that underlie the standard programmable digital computer, I wonder how people might have reacted, shortly after the invention of the wheel, if some ancestor of Jaron had proposed to invent a new form of transportation that was not a wheel. "Not a wheel!" one can hear them snorting. "Why, everyone knows that any device to convey goods must depend on some arrangement of wheels. Not only that, the great thinker van N proved that any arrangement of wheels, whether in parallel or in serial, is equivalent to a single larger wheel, in terms of its ability to move goods."

"No," said the clearly frustrated proto-Jaron, "What I have in mind does involve lashing some logs together, but instead of rolling them, my idea is to put them into the river and simply put the goods on top and float them down to the next camp. So no wheels, and no need to abide by the great van N's theorem on wheel capacity."

The answer then must have been, "Well, we've never heard of such a thing, but try it and see if it works." It seems to me that that's what Jaron's critics might be saying to him, instead of arguing that a boat, as a form of transportation, must roll on wheels.

So it seems to me the question being debated can be framed like this: Is a computer something like a wheel? Is there really only one kind of computer, just as there is really only one kind of wheel? One can arrange them in many ways, in series and in parallel, but in the end, once the wheel or the computer has been invented, they will all work the same way. Even millennia later, wheels are wheels, period. Or is the computer something more general, like a mode of transportation or a musical instrument? There are many different kinds of musical instruments, which produce sound by means of many different principles. Is it possible that there are actually many different kinds of computers, which will accomplish informational tasks for us by as many different principles as musical instruments produce sounds? In that case, is the problem that the critics are beating their drums, while Jaron is trying to blow the first horn?

LEE SMOLIN, a theoretical physicist, is a founding member and research physicist at the Perimeter Institute in Waterloo, Canada. He is the author of The Life of the Cosmos and Three Roads to Quantum Gravity.


Charles Simonyi

I am very happy to see a lot of interesting comments in response to Jaron Lanier's paper. My complaint is with the vast range of Jaron's concerns, from the practical software engineering of Fred Brooks to the issues of consciousness. Maybe his point is that by looking far enough ahead one can also solve the more immediate practical problems.

My focus is closer to Fred Brooks' than to Daniel Dennett's, and from that perspective I could comment on the MIT Technology Review issue on "Extreme Programming," which featured, among others, the technology that my company, Intentional Software Corporation, has been promoting. In his reply to the comments, Jaron referred to the ideas presented in the magazine as "mainstream" and "conservative." I wish that were the case—at least for intentional software. But let me illustrate just how radical the intentional idea is by describing how it applies to the Gordian software problem.

I am amazed how many software discussions center on essentially implementation questions, while no one seems to care much about what the Problem to be solved really is. The implicit assumption is that the Problem will first be described only in some mathematical language—assembly, Cobol, Java, graphical programming, design patterns, or even logic-based specifications. This is as if the Problem had not existed before a software implementation. What did people do before, one might ask?

The obvious fact is that before computerization, people used their consciousness and intelligence to represent (and maybe even solve, after a fashion) the Problem. For example, instead of using computer software, architects or accountants used to make drawings or balance the ledgers "by hand," that is, by using their intelligence. So the two demonstrated representations of a problem are human intelligence and an effectively machine-executable software implementation.

Gordian software is a child of this false dichotomy, in which there is no machine-accessible representation of the problem other than the implementation. For the implementation is manifestly not the Problem; it is a complex interweaving of the Problem with information technology: the scale, the platforms, the languages, the standards, the algorithms, the security and privacy concerns, and so on. This interweaving creates a horrible explosion in the size of the description, because it includes not just all of the problem and all of the technological principles at play but each and every instance where the two may interact. So the size of the description is proportional to the size of a product space, not to the sum of the two spaces. This is manifestly expensive, but it is also very destructive to any desired human or mechanical processing of the description—to put it bluntly, programmers act as steganographers, in effect encrypting or making inaccessible the useful information by embedding it in massive amounts of implementation detail.

The radical idea of Intentional Software is to focus attention on the Problem owners—let's call them Subject Matter Experts—and on the interface between them and the programmers, who are the implementation experts. We will assist the SMEs to express their problem in their notation, in their terms. The result will be "intentional" in that it will represent what they intend to accomplish, even though it will "lack"—or rather, be free of—the semantic details that are key to any implementation. We will then ask the programmers to write a generator/transformer from the intentional description to a traditional implementation with all the desired properties—speed, compatibility, standards, and so on. So the Problem will be represented as one factor, and it can be made effective by the application of the generator, the second factor, which represents the implementation aspects of the solution.

The amount of new technology that is required is modest: basically we need a special editor—a sort of super PowerPoint—that assists the SMEs to record and maintain their intentions and also the metadata—the schemas—about their notations and terms.

The difference in the approach from the programmer's point of view is almost superficial. In the absurd—but not unprecedented—limiting case, where the SMEs contribute just the product name, the programmers simply have to embed their contribution—prepared as before—into a simple "generator" framework parameterized by the product-name string intention. Nothing is gained by that, and it is a historical curiosity that some problems were solved just by programmers. But we can see how additional useful contributions from the SMEs could then successively introduce more variability into the output of the generator, and create a more effective balance between the contributions of the SMEs and of the programmers while maintaining the key invariants (a toy sketch follows the list):

1. The intentions remain free of implementation semantics—which means SMEs do not have to learn programming. Furthermore, the intentional description is "compact"—it is as large and complex as the Problem itself, and not combinatorially larger. The compactness in turn promotes the SMEs' ability to interact with it, to perfect it.

2. Changes made by an SME to the intentional description can result in a new artifact at machine speeds and at essentially machine precision—by the application of the generator and without the participation of a programmer.

3. Changes to the generator by the programmer can change aspects of the implementation at a cost that is measured in implementation space, not in problem space or in the combinatorial product space of the two, as is the case with the current technique.
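To make the factoring concrete, here is a deliberately tiny sketch, a toy of my own rather than Intentional Software's actual technology. The SME's contribution is a declarative description of an invoice, free of implementation semantics; the programmer's contribution is a generator that turns that description into an executable artifact.

# A toy sketch of the intentional factoring (not Intentional Software's real
# tooling): the SME supplies a declarative description of the problem; the
# programmer supplies a generator that turns it into an executable artifact.

# The SME's contribution: intentions only, no implementation semantics.
invoice_intent = {
    "fields": [
        {"name": "customer", "type": "text"},
        {"name": "quantity", "type": "number"},
        {"name": "unit_price", "type": "number"},
    ],
    "derived": {"total": "quantity * unit_price"},
}

# The programmer's contribution: a generator from intention to implementation.
def generate_class(intent, class_name="Invoice"):
    lines = [f"class {class_name}:"]
    args = ", ".join(f["name"] for f in intent["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for f in intent["fields"]:
        lines.append(f"        self.{f['name']} = {f['name']}")
    for name, formula in intent["derived"].items():
        for f in intent["fields"]:
            formula = formula.replace(f["name"], f"self.{f['name']}")
        lines.append(f"    def {name}(self):")
        lines.append(f"        return {formula}")
    return "\n".join(lines)

# Applying the generator yields a traditional implementation at machine speed.
namespace = {}
exec(generate_class(invoice_intent), namespace)
invoice = namespace["Invoice"]("ACME", 3, 20)
print(invoice.total())   # 60

Adding a field to the intent regenerates the artifact without a programmer (invariant 2), while retargeting the generator, say to emit a different output format, never touches the intent (invariant 3).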

It is not difficult to see how other key issues of software engineering would also become more tractable if such factoring could be employed—maintenance, bugs, aspects, reuse, programmer training, or "user programming" could all be reinterpreted in their simpler and purer environments.

It is harder to see how this factoring can be enabled and facilitated by tools, services, or training, and what new problems unique to intentional programming might emerge. The good news is that more and more attention is being paid both to the software engineering problem and to intentional and other generative schemes as possible solutions. It is also encouraging that in specific areas these ideas have been flourishing for quite a while. Most game programs, to mention just one area, are created using multiple levels of domain-specific encodings and mechanical program generation.

As an aside I note for the Edge audience that DNA is an intentional program—it lacks implementation detail and is given implementation detail only by the well-known generators, which range from the ribosome through the phenotype to the whole ecosystem. So the DNA does not concern itself with how the organism works; it rather describes how the organism should be built or, really, what the "problem" really is. Because DNA is intentional, its length is short relative to its result—indeed, the length of the human genome belies its cosmic importance by being shorter than the source code of many human software artifacts of more modest accomplishments.

Another key feature of the encoding is that it is "easy" to change, that is, an important fraction of possible changes are also meaningful changes; this made evolution possible—or rather, this is a feature of evolved things. Had the code included implementation detail—that is, if it had been more like a "blueprint," as in the popular metaphor, or more like a software program—then it could not have evolved naturally, and people hoping for some sign of an intelligent designer would have had their smoking amino acid.

CHARLES SIMONYI, cofounder of Intentional Software Corporation, formerly worked as Director of Application Development and Chief Architect at Microsoft Corporation where in 1981 he started the development of microcomputer application programs and hired and managed teams who developed Microsoft Excel, Multiplan, Word, and other applications.


Back to WHY GORDIAN SOFTWARE HAS CONVINCED ME TO BELIEVE IN THE REALITY OF CATS AND APPLES: A Talk with Jaron Lanier