THE EDGE OF COMPUTATION

BIOCOMPUTATION
A Conversation with J. Craig Venter, Ray Kurzweil, Rodney Brooks
[6.29.05]

 
 

Introduction

One aspect of our culture that is no longer open to question is that the most significant developments in the sciences today (i.e. those that affect the lives of everybody on the planet) are about, informed by, or implemented through advances in software and computation. In no other field is this as evident as in biology, and in this regard each of the panelists in this Edge conversation exemplifies this new trend.

For example, just as this edition of Edge goes to "press", The Wall Street Journal has run a front-page story on Craig Venter's goal of creating life itself. Venter is one of the leading scientists of the 21st century for his visionary contributions in genomic research. He is advancing the science of genomics and applying genomic advances to some of the world's most vexing public health and environmental challenges. Major research foci include human genomic medicine, environmental and evolutionary genomics (which includes the Venter Institute Global Sampling Mission), biological energy production, synthetic biology, and the intersection between genomics and environmental and energy policy.



ROCKVILLE, Md. -- Biologist J. Craig Venter once raced the U.S. government to complete the decoding of the human genome. Now, after a maverick career studying the code of life, Dr. Venter has a new goal: life itself.

Along with two veteran collaborators, Dr. Venter hopes to become the first to whip up a made-to-order bacterium. Normally, new life is created via reproduction, with each generation passing its genes on to the next. But Dr. Venter aims to bypass that process by manufacturing a complete set of genes, or genome, of a single-cell bacterium in his laboratory. This man-made genome would be installed inside a bacterium whose own genes have been removed.

By creating such a life form, Dr. Venter's researchers think they may come closer to understanding what life is and how scientists can manipulate it for the benefit of humankind. New artificial species could open avenues for industrial production of drugs, chemicals or clean energy.

"This is the step we have all been talking about. We're moving from reading the genetic code to writing it," Dr. Venter says, swiveling in his chair at his sprawling scientific headquarters here.

(Antonio Regalado, "Next Dream for Venter: Create Entire Set of Genes From Scratch", The Wall Street Journal, June 29, 2005; Page A1)

Rod Brooks' midlife research crisis has moved him away from humanoid robots and toward the very simple question of what makes something alive — what the organizing principles are that go on inside living systems. In his lab at MIT, he is trying to build robots that have properties of living systems that robots haven't had before.

Brooks is puzzled that "we've got all these biological metaphors that we're playing around with — artificial immunology systems, building robots that appear lifelike — but none of them come close to real biological systems in robustness and in performance. They look a little like it, but they're not really like biological systems." He worries that in looking at biological systems we are missing something that is already there — that has always been there. To Brooks, this might be called "the essence of life," but he is talking about a biochemical phenomenon, not a metaphysical one. Brooks is searching for a new conceptual framework that, like computation, does not involve any new physics or chemistry — a framework that gives us a different way of thinking about the stuff that's there. "We see the biological systems, we see how they operate," he says, "but we don't have the right explanatory modes to explain what's going on and therefore we can't reproduce all these sorts of biological processes. That to me right now is the deep question."

Ray Kurzweil believes "we are entering a new era. Some of us call it the Singularity. It's a merger between human intelligence and machine intelligence which is going to create something bigger than itself. It's the cutting edge of evolution on our planet. One can make a strong case that it's actually the cutting edge of the evolution of intelligence in general, because there's no indication that it has occurred anywhere else. To me that is what human civilization is all about. It is part of our destiny, and part of the destiny of evolution, to continue to progress ever faster and to grow the power of intelligence exponentially."

In this Edge Reality Club conversation, three of the world's leading scientists ask each other the questions they are asking themselves about biocomputation.

Take research and experimentation down an empirical road and you come to a wall where everything changes: you blow all your epistemological biases and need new language, new ideas, new paradigms. This is the intersection of the empirical and the epistemological...where Edge likes to hang out.

This live Edge event was presented on February 23rd, hosted by the TED Conference (Technology, Entertainment, Design) in Monterey, California. [ED. NOTE: TED Global takes place in Oxford, England, July 12-15. Craig Venter is among many Edge regulars who are speaking.]

I am pleased to present J. Craig Venter, Ray Kurzweil, and Rodney Brooks on "Biocomputation".

JB



In 1998, J. CRAIG VENTER became the first president of Celera Genomics, founded to sequence the human genome using the whole-genome shotgun technique, new mathematical algorithms, and new automated DNA sequencing machines. The completed sequence of the human genome was published in February 2001 in the journal Science. In addition to the human genome, Venter and his team at Celera sequenced the fruit fly, mouse, and rat genomes. In 2003, Venter launched a global expedition to obtain and study microbes from environments ranging from the world's oceans to urban centers. This mission, now in progress, is yielding insights into the genes that make up the vast realm of microbial life.
He is founder and president of the J. Craig Venter Institute and the J. Craig Venter Science Foundation.




RAY KURZWEIL, an inventor and entrepreneur, has been pushing the technological envelope for years in his field of pattern recognition. He was the principal developer of the first omni-font optical character recognition system, the first print-to-speech reading machine for the blind, the first CCD flat-bed scanner, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition system. He is the author of The Age of Intelligent Machines; The Age of Spiritual Machines: When Computers Exceed Human Intelligence; (with Terry Grossman, M.D.) Fantastic Voyage: Live Long Enough to Live Forever; and the upcoming book The Singularity Is Near: When Humans Transcend Biology.




RODNEY BROOKS, a computer scientist and AI researcher, is interested in making living systems.

Brooks is Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and Fujitsu Professor of Computer Science. He is also co-founder and Chief Technical Officer of iRobot Corporation, which has brought you the Roomba vacuum cleaner, the Scooba robotic floor washer, and the robots that disarm Improvised Explosive Devices (IEDs) in Iraq. His most recent book is Flesh and Machines: How Robots Will Change Us.




BIOCOMPUTATION:
Rodney Brooks, Ray Kurzweil, J. Craig Venter

J. CRAIG VENTER

With such a broad topic I had no idea where to start, so I decided to start with the whole planet's genome. John told me I had to present something modest here. So, as many of you know, the human genome was sequenced in the year 2000 and that gave us the complete repertoire of human genes. And slowly, as we're adding more and more gene space to that — and we've gone up two orders of magnitude now just in the last year from new genes in the environment, and I'll be talking about that more on Thursday — we're starting to look at the world in terms of gene space instead of genomes and species, and this gets us down to component analysis.

This is affecting three different major areas that we're working on at the Venter Institute — cancer is the one I'd like to address the most today, and I'll be addressing the others on Thursday, but I'll mention them as well. Cancer is getting broken down into more and more separate diseases as we're able to subdivide the diagnostics and the genes associated with them, but we're also starting to take a different view, looking at cancer as an overall disease, and we're looking at gene space where we think it can be targeted to deal with cancer as a whole. A lot of cancer's been looked at genetically, and while there are genetic predispositions to cancer that we all have, it's actually somatic mutations, the ones we get from toxins in the environment, from radiation, etc., that usually lead to cancer. The model is that as we accumulate mutations in more and more genes, we cross a threshold where all of a sudden we have unregulated cell growth.

It turns out there's a set of about 518 genes, kinase receptors, that are really responsible for controlling cell growth. Obviously we can now identify all of these in the genome. We could already look for mutations in individuals with cancer, to see whether they inherited those mutations, but now that we can do high-throughput resequencing of genes, we're sequencing these genes looking for somatic mutations, things that have occurred after the genome was established. And in every type of cancer we're finding mutations in these genes. Usually these mutations lead to unregulated cell growth: they turn on the kinase receptors, and the receptors just run continuously.

There have been some remarkable breakthroughs in the last few years with just a couple of drugs that in fact block these receptors and interfere with the growth of cancer. Herceptin, out of Genentech, is a therapeutic antibody that affects one of these receptors. Probably the most important breakthrough has been Gleevec, which works by blocking the rapid growth of white blood cells, and has led to almost miracle cures of cancer. There's another drug that Novartis sells that initially didn't look very good in trials, until some of the receptors were sequenced in the cancer patients; it turns out ten percent of the patients they looked at had mutations in this receptor, and the drug worked on virtually a hundred percent of those individuals.

By understanding this gene space, understanding the mutations that we've collected during our lifetimes, we may be able to have a set of molecules that work universally against, if not all, certainly most cancers. And there are a few other groups of molecules that seem to fit into these categories. So we have a major new program; we recruited Bob Strausberg from the National Cancer Institute, in collaboration with the Ludwig Institute for Cancer Research and with Bert Vogelstein at Johns Hopkins, to look at as many different cancers as we can, looking for somatic mutations in these genes. Once we collect a big enough set of these, it's very easy to design a gene chip that turns this into a few-dollar assay for people instead of expensive gene sequencing. We're also trying to use bioinformatics to predict other gene sets that look like they're in the same category, to see if we could basically have on the shelf a repertoire of small molecules and antibodies that would work against most types of cancer. The excitement on that front is pretty stunning. Most of these things just represent a change in philosophy.

We're taking the same approach to antivirals and antibiotics (with all the worry about bioterrorism, we have very few antivirals). Beginning to look at common mechanisms of infection, we have choke points that we think can block these infections regardless of whether it's an Ebola virus or a SARS virus. There are a lot of groups now effectively working on this. Understanding gene space and what it starts to mean in a post-genomic era is giving us a lot of new insight.

We're adding, as I said, exponentially to the number of genes by just doing shotgun sequencing of the environment. There were 188,000 well-characterized genes in the protein databases; we're up to close to 8 million new ones just from doing random shotgun sequencing from the oceans. And we took this whole combined data set and tried to see how many different gene families we have on the planet, really trying to get down to our basic component sets. The number right now is somewhere between 40 and 50 thousand unique gene families, covering all the species that we know about. But every time we take a new sample from the environment and sequence it, we keep adding to those gene families in a linear fashion, showing that we know very little of the overall biology of our planet. And among these new genes — we have literally several million genes of unknown function — some form families of over a thousand related proteins or genes. They're obviously important to biology, they're important to evolution, and beginning to sort out, using bioinformatic tools, what they may do is giving us a lot of new tools in different areas.
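To make the notion of grouping sequences into "gene families" concrete, here is a minimal sketch of one way it can be done. Everything in it is illustrative: the greedy clustering strategy, the k-mer similarity measure, the threshold, and the toy sequences are all assumptions for the example, not the Venter Institute's actual pipeline, which operates on millions of sequences with far more sophisticated methods.

```python
# Illustrative sketch: group protein sequences into "families" by
# greedy clustering on k-mer overlap. All parameters are made up.

def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity between the k-mer sets of two sequences."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

def cluster_families(sequences, threshold=0.5):
    """Each sequence joins the first family whose representative it
    resembles closely enough; otherwise it founds a new family."""
    families = []  # list of (representative, members)
    for seq in sequences:
        for rep, members in families:
            if similarity(rep, seq) >= threshold:
                members.append(seq)
                break
        else:
            families.append((seq, [seq]))
    return families

# Toy input: the first two sequences are near-duplicates, the third is
# unrelated, so we expect two families.
seqs = ["MKTAYIAKQR", "MKTAYIAKQQ", "GGHLVEWAAA"]
print(len(cluster_families(seqs)))  # -> 2
```

Under this toy model, each new environmental sample that contains sequences unlike any existing representative founds new families, which is the linear growth Venter describes.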

The third and last area we're working in is synthetic biology. Trying to understand the basic components of a cell, we've tried knocking out genes, trying to see which genes cells could live without, but we get different answers every time the experiment's done, depending on how it's done: whether it's batch growth, whether you require cloning out of the cells, different growth requirements. We decided some time ago the only way to approach this was to build an artificial chromosome and be able to do evolution in the laboratory the way it happens in the environment.

We're building a hundred cassettes of five or more genes each, where we can substitute these cassettes, build an artificial chromosome, and try to create artificial species with these unique sets. But now with 8 million genes, and as this work continues, it's conceivable that within a year or two we'll have databases of 30, 40, 100 million genes. Biology is starting to approach the threshold that the electronics industry passed where all of a sudden people had all the components and could start building virtually anything they wanted using those components. We have a problem: we don't understand all the biology at the first-principles level yet. But we're getting the tools, we're getting the components to build these artificially, and we think we can, in the computer, design a species, design what biological functions we want it to have, and add this to the existing skeleton.

Understanding the gene components and working forward from those, we're applying this to energy production. We've tried to change photosynthesis by taking oxygen-insensitive hydrogenases, and we're converting the electrons derived from sunlight directly into hydrogen production. We're doing this with a molecular switch, so you can throw the switch and hydrogen bubbles off, and you turn the switch off chemically and it stops the production. We're also trying to come up with new ways of fermentation from wood. So we're approaching things on a broad level, looking at genes as the fundamental components of biology and the future of industry.

On a more specific level, bird flu is currently in the news again. It's a good thing to avoid. In fact we're working with the prime minister of Thailand, and we're working with Hong Kong, trying to use these same sequencing tools to track these new infections. The trouble is that the flu virus, when you get, for example, two or more isolates in a pig, can recombine to form an essentially infinite number of new viral particles. And those transfer into humans at a frequency that we don't like. With the constant development of new viruses in birds that transfer to other animals, it's a matter of tracking these before we have a new pandemic.

People saw what happened with SARS: with air travel, you get an outbreak in China and the next thing it's happening in Toronto. By tracking the sequence space in birds around the world, we're trying to develop early-detection programs that can catch things at an early enough stage. Even though we don't understand why the 1918 pandemic virus was so virulent, by sequencing the gene space — understanding the components that are out there and tracking how they recombine — we hope to be able to avoid a new pandemic.



RAY KURZWEIL

Let me try to build on what Craig has said. We just heard some very exciting applications which are in the early stage, moving on from the genome project, where we essentially collected the machine language of biology and are now trying to disassemble and reverse engineer it. And I come to this from a couple of perspectives. I've actually had these two disparate interests in my life. One has been computer science. Both Rodney and I have worked in the AI field. And then I've had an interest in health. It started with the premature death of my father when I was 22. I was diagnosed with diabetes in my mid-30s; conventional treatment made it worse, I came up with my own program, and I've had no indication of diabetes since. I wrote a health book about that in 1993.

Thus, I've had an interest in health. My most recent book is about health, where we talk about three bridges to being able to radically extend our longevity. Bridge one is what we can do today — we can actually do a lot more than people realize — in slowing down degenerative disease processes, and to some extent aging, that can keep even baby boomers, like my co-author and myself, and this panel, in good shape until we have the full flowering of this biotechnology revolution, which is the intersection of biology with my main interest, and Rodney's, which is information technology.

We call that the second bridge; that can then extend our longevity until the third bridge — the full flowering of the nanotechnology revolution — where we can go beyond the limitations of biology, because even though biology is remarkable in many ways, remarkably intricate, it's also profoundly limited. The interneuronal connections in our brain, for example, process information at chemical signaling speeds of a few hundred feet per second, compared to a billion feet per second for electronics — electronics is at least a million times faster.

There's a robotic design for red blood cells by Rob Freitas which he calls respirocytes. A conservative analysis indicates that if you replaced ten percent of your red blood cells with these devices you could do an Olympic sprint for 15 minutes without taking a breath, or sit at the bottom of your pool for four hours. Our biological systems are very clever, but they're very suboptimal. I've actually watched my own white blood cell under a microscope, and it had intelligence. It was able to notice a pathogen (I was watching this on a slide), and it cleverly blocked its exit, cornered it, surrounded it, and destroyed it, but it didn't do it that quickly; it took an hour and a half. It was a very boring thing to watch. There are proposals for circa-2020s technology of little robots the size of blood cells that could do that hundreds of times faster. These may sound like very futuristic scenarios, but I would point out that there are already four major conferences on bioMEMS — on little devices that are blood-cell size — that are already performing therapeutic and diagnostic functions in animals. For example, one scientist has cured type 1 diabetes in rats with a nano-engineered device with 7-nanometer pores.

In terms of my health interests, this biotechnology revolution is the second bridge. And we're in the early stages of it now, but 10-15 years from now many of these technologies which Craig mentioned — a few of the many examples that are now in process — will be mature.

My other interest is information technology. I am an inventor, and I realized that for inventions to succeed, they have to make sense for the world when you finish the project — and most inventions fail because the enabling technologies are not in place. Thus, I became an ardent student of technology trends and began to develop mathematical models of how technology evolves. The key message here is that information technology in particular progresses exponentially.

Craig certainly has experienced this; the genome project was controversial when it was first announced. Critics said, how are you going to get this project done in 15 years? At the rate at which we can sequence the genome with the tools we have, it's going to take far longer. Two-thirds of the way through the project, the skeptics were still going strong, because not that much of the project had been finished. I'll show you charts on Saturday of the exponential reduction in the cost of sequencing DNA over that period, and the exponential growth in the amount of DNA that's being sequenced. It took us 15 years to sequence HIV; we sequenced SARS in 31 days. There's been smooth exponential growth of that process. People are familiar with Moore's Law, and some say it's a self-fulfilling prophecy, but in fact it's a fundamental attribute of any information technology. We create more powerful tools, and those tools are then used to build the next stage. Very often scientists don't take into consideration the fact that they're not going to have to solve a problem over the next ten years with the same set of tools; the tools are continually getting more powerful.
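A toy calculation makes the skeptics' error concrete. The numbers below are illustrative assumptions, not Kurzweil's actual data: if throughput doubles every year, a job that looks like a century of work at today's fixed rate completes in about six years.

```python
import math

# Hedged sketch with an assumed doubling time: how long does a job
# take if it would need `years_at_todays_rate` with today's tools
# held fixed, but capacity grows as 2^(t/d)?

def time_to_finish(years_at_todays_rate, doubling_time=1.0):
    # Total work done by time T is the integral of 2^(t/d) dt from
    # 0 to T = (d/ln 2) * (2^(T/d) - 1); solve that for T.
    d = doubling_time
    return d * math.log2(1 + years_at_todays_rate * math.log(2) / d)

print(time_to_finish(100))  # ~6.1 years, not 100
```

The point of the sketch is only the shape of the result: extrapolating with today's tools wildly overestimates the time, which is the mistake Kurzweil says the genome-project skeptics made.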

The other major observation is that information technology is increasingly encompassing everything of value, from biology to music to understanding how the brain works. We have information in our brains, and even though some of it's analog rather than digital, we can model it mathematically — and if we can model it mathematically, we can simulate it. We're further along in reverse engineering our brains than people realize. The theme of this panel is the intersection of information technology and biology, and that's a new phenomenon.

Craig was the leader of the private effort to sequence the genome that really launched this biotechnology revolution. We are in the early stages of understanding how biology works. Most of it we don't understand yet, but what we understand already is very powerful, and we're getting the tools to actually manipulate these information processes. Almost all drugs on the market today were created through what's called drug discovery, which is where pharmaceutical companies would methodically go through tens of thousands or hundreds of thousands of substances and find something that seems to have some benefit.

Oh, here's something that lowers blood pressure — we have no idea how or why it works, but it does lower blood pressure — and then we discover it has significant side effects. Most drugs were done that way. Kind of like the way primitive man or woman would find tools. Oh, here; this stone would make a good hammer. We didn't have the means of shaping tools. We're now gaining those means because we're understanding how these diseases progress. In the case of heart disease, we already have a pretty mature understanding of the sequence of specific biochemical steps and information steps that lead to that disease. Then we can do rational drug design and actually design drugs that intervene very precisely at certain steps in that process.

Craig gave a number of good examples in the area of cancer. In heart disease, Pfizer has torcetrapib, for example, which inhibits one enzyme — the enzyme that destroys HDL in the blood — and the phase II trial showed that if you take this enzyme inhibitor, HDL levels — the good cholesterol — soar, and atherosclerosis slows down. They're spending a record one billion dollars on their phase III FDA trials.

I wouldn't hang my hat on any specific methodology like that, but there are literally thousands of developments like this in the pipeline, where we can rationally intervene, with increasingly powerful tools, to change the progression of the information processes that lead to disease and aging. The tools include enzyme inhibitors: if we find that there's an enzyme that's critical to a process, we can block it. We also have a new tool called RNA interference, where we send little fragments of RNA into the cell (they don't have to go into the nucleus, which is hard to do) and they basically latch onto the messenger RNA representing a gene and destroy it, inhibiting gene expression much better than the older antisense technology. That's very powerful, because most diseases use gene expression someplace in their life cycle, so if we can inhibit a gene we can circumvent undesirable processes.

I'll give you just one example: the fat insulin receptor gene basically says, hold on to every calorie, because the next hunting season may not work out so well, which points out that our genes evolved tens of thousands of years ago, when circumstances were very different. For one thing, it wasn't in the interest of the species for people to live much past childbearing age, and people were grandmothers by the age of 30. Life expectancy was 37 in 1800. So we've already begun to intervene, but we now have much more powerful tools to do that.

When scientists at Joslin Diabetes Center inhibited the fat insulin receptor gene in mice, these mice ate ravenously and remained slim, got the health benefits of being slim — they didn't get diabetes, they didn't get heart disease, they lived 20 percent longer. They got the benefits of caloric restriction without the restriction. Some pharmaceutical companies have noticed that that might be an interesting drug to bring to the human market. There are some challenges because you don't want to inhibit the fat insulin receptor gene in muscle tissue, only in the fat cells; there are some strategies for doing that.

But it's an example of the power of being able to inhibit genes, which is another tool for re-engineering these information processes. Just as we can easily change the software in a Roomba vacuum cleaner, we're now gaining the means to do the same in our biology.

More powerfully, we'll be able to actually add new genes. Up until recently, gene therapy has had challenges in terms of getting the genetic material into the nucleus, and then also getting it in the right place. There are some very interesting new strategies. One is to collect adult stem cells from the blood, then in the petri dish insert the new genetic material, discard the ones that don't get inserted in the right place, when you get one that looks good, you replicate it, and then reinsert it into the blood stream of the patient. A project by United Therapeutics actually succeeded in curing pulmonary hypertension in animals, which is a fatal disease, and that is now going into human trials. There are a number of other promising new methodologies for gene therapy.

We ultimately will have not just designer babies, but designer baby-boomers. And there are many other tools to intervene in these information processes and reprogram them. There are actually no inexorable limits to biology. People talk about the telomeres — and say this means you can't live beyond 120. But all these things can be overcome through engineering. Just in the last few years we've discovered that there's this one enzyme, telomerase, that controls the telomeres. These are complex projects. Somebody here's bound to say that we know very little about our biology. That's true. We had sequenced very little of the genome early in Craig Venter's project. But the progress is going to be exponential, and the tools are going to be exponentially more powerful.

What we know already is providing a great deal of promise that we can overcome these major killers like cancer, heart disease, Type 2 diabetes and stroke. We're also beginning to understand the processes underlying aging. It's not just one process, there are many different things going on. But we can intervene to some extent already and that ability will grow exponentially in the years ahead.


RODNEY BROOKS

John asked us to talk about the intersection of biology and information science, and I'm going to try and stick with that a little bit, and start off with some conventional stuff. The things that Craig and others have done on sequencing genomes have really relied on algorithms that were developed in the information sciences. And the work in genomics, proteomics, et cetera that's going on now uses a lot of machine learning techniques — statistical machine learning techniques developed largely out of the work of theoretical computer scientists and then applied to biology by clever people like Craig.

There's a crossover here: information science enabling the sorts of things that Craig and others do. Interestingly, to me as a lab director across a broad range of computer science, the impact of theoretical computer science is profound, but it's the hardest thing to get funded from external agencies. People who work in networking, or compilers, or chip design, or whatever, will write a proposal that says, in the first three months we're going to do this task, in the next three months we're going to do that task, et cetera, and the funding agencies like that because they see what they're going to get ahead of time. But the theoreticians are going to think about stuff. Maybe they'll prove some theorems and maybe they won't, but they don't say oh, in the first three months we're going to prove these three theorems, then in the next few months we'll prove these others — they would have already done the work. It's very hard to get funding for theoretical computer science because it doesn't fit that model of turning a crank and getting things out. But it has these enormous impacts on biology.

What's happening now, though — and Craig mentioned some of this with synthetic biology — is we're starting to move from just analysis of systems into engineering systems. I want to say a few words about engineering in general, and then about what's happening in biological engineering and how it's going to change completely from what people are thinking about right now.

First on engineering.

Engineering today is really applied computer science, in my view. Maybe that's a little biased, but essentially there are two things going on in engineering. First, you analyze stuff — and these days that's all about the application of computation, getting the right computation systems together to do the analysis. Second, engineering is also creativity, designing things — and these days that's designing the flow of information, how the pieces come together. From one point of view, all of engineering these days is applied computer science. In that sense, as we go to biological engineering, it's just more applied computer science, applied to biology. But where electrical engineering was in roughly 1905 is where bioengineering is today, in 2005. Electrical engineering in 1905 looked very different from how electrical engineering looks today. In the next hundred years, what is now bioengineering is going to change dramatically. Ray, of course, is going to exponentially speed it up, but biology's actually more complex than physics, so there's going to be a little bit of balance there and it may not go as fast as Ray assumes.

Let me go back to electrical engineering in 1905. Electrical engineering was just then split off from physics. It was applied physics, and at MIT it was actually in 1904 that the physics department got together and had a faculty meeting where they expelled "the electricals." That's the phrase they used in their minutes. There were these dirty "electricals" that were cluttering up their nice physics department. And that was the foundation of the electrical engineering department at MIT.

Today we're seeing the same sort of thing as bioengineering departments form. There was biology, then applied biology, and now it's bioengineering, and that's happening in the engineering schools rather than in the science schools within the universities. But electrical engineering back in 1905 was really a craft sort of thing; the basic understanding wasn't there until about 50 years later, when electrical engineering became science-based in the 1950s, and that changed the flavor of electrical engineering. Then in the last 50 years it has become information- and computer science-based.

Engineering transformed into applied computer science. Right now, bioengineering is starting to do a few interesting things, but it's only a shimmer of what's going to happen in the future. Craig mentioned hacking away at these mycoplasmas, hacking up genes, trying to get a minimal genome, and then getting the pieces together and forming them into a synthetic biological creature. There are other groups that are going back even to pre-genetic types of approaches.

A group out of Los Alamos, funded through the European Union, is trying to build an artificial cell which doesn't necessarily rely on DNA; they're playing around with RNA and more primitive components. There's some sort of root engineering there, of trying to figure out how these biomolecules can fit together and do stuff that they don't do in the wild. At our lab, in conjunction with a few other places, we've been working on something called biobricks, which are standard components. If you go to the Web site, parts.mit.edu, it's about biological parts. You see a 7400 series manual — people remember the 7400 series chips, which enabled the digital revolution by letting you put standard components together — so they've got a hacked-up version of the TTL handbook cover there, but it's biological parts. They're genes, a few hundred of them now; they've got part numbers and serial numbers; you click on the different sorts of part classes, you see a bunch of different instances of those parts, and you see how they interact with each other.

We're running courses now: in January we have an independent activities period between the two semesters, and we get freshmen in, and they start clicking around with these parts, and they build a piece of genome which then gets spliced into an E. coli genome — E. coli is the chassis that you build your stuff in, because it maintains itself and reproduces. And after two or three weeks the freshmen are able to build engineered E. coli that do things. Maybe they build an oscillator and plug in a luminescence gene, and the cells flash, very slowly. So you can build digital computational elements inside these E. coli, using the digital abstractions, though it's not to replace computation in silicon, because the switching time on these things is about ten minutes — and maybe using the digital abstractions isn't ultimately the right thing.
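The kind of genetic oscillator Brooks describes can be sketched in a few lines. The following is a hedged illustration, not the actual course project: a textbook three-gene repressor ring (the "repressilator" design of Elowitz and Leibler) in which each gene represses the next around the ring, simulated with made-up rate parameters.

```python
# Hedged sketch (assumed parameters, not a real biobricks part):
# three genes in a ring, each repressing the next, integrated with
# simple Euler steps.

def repressilator(steps=40000, dt=0.01, alpha=50.0, n=3.0, decay=1.0):
    p = [5.0, 1.0, 1.0]          # protein levels for genes A, B, C
    trace = []                   # record gene A's level over time
    for _ in range(steps):
        nxt = []
        for i in range(3):
            repressor = p[(i - 1) % 3]                  # upstream gene in the ring
            synthesis = alpha / (1.0 + repressor ** n)  # Hill-type repression
            nxt.append(p[i] + dt * (synthesis - decay * p[i]))
        p = nxt
        trace.append(p[0])
    return trace

trace = repressilator()
# After transients die out, gene A's protein level keeps swinging
# between low and high: a sustained, slow oscillation.
print(min(trace[20000:]), max(trace[20000:]))
```

With these assumed parameters each protein level rises and falls over many simulated time units; hooked to a luminescence gene, that slow swing is the flashing Brooks mentions, and its sluggishness relative to silicon is exactly his ten-minute-switching-time point.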

In other sorts of projects, people have built a sheet of E. coli sitting out there, and one of them will switch on and say, hey, guys, and start pushing out some lactone molecules, and the ones around it will sense that and start clustering around the one who said, come to me. It's the start of engineering living cells to do stuff they wouldn't do normally. Over the next 50 years it's going to change, ultimately, the basis of our industrial infrastructure.

If you go back 50 years, our industrial infrastructure was coal and steel. In the last 50 years it has been transformed into an information industrial infrastructure. This engineering of cells, at the molecular level, at the genetic level, is going to change the way we produce a lot of stuff over the next 50 years. Right now you grow a tree, you cut it down, and you build a table. Fifty years from now we should just grow the table. That's just a matter of time — and if we take Ray's point it'll only be 15 years rather than 50, but I'm being a little conservative here. There's some stuff to work out, but it's just a matter of working through the details. We've seen in broad strokes how to do that.

That's biology and engineering at the molecular level; there's also biology at other levels, and we see a lot of that starting to happen at the neural level. Mussa-Ivaldi and other people at Northwestern are using neural networks — biological neural networks — to control robots: there's a little bit of wet stuff in the middle of a robot, controlling it. People at Brown and Duke are plugging into monkeys' heads, using the same machine learning techniques that are used in genome analysis to figure out what signals are going on in the monkeys' heads, and letting the monkeys play video games just by thinking. Or control a robot, just by thinking. And if you look in the latest Wired Magazine you'll see some human subjects, some quadriplegics, starting to be experimented with using this technology. So there's another case in which we're going from analyzing what happens inside to changing what happens inside. It's switching the paradigm we use for biology from science to engineering.

But then there's going to be another level that happens over the next few tens of years. By playing with biology we're going to change the nature of engineering again; in the same way that over the last hundred years engineering has turned into an information science — an applied computer science — the way engineering is going to work is going to change again. The details of how it's going to change I can't say, because if I knew that it would have already happened. It hasn't happened yet. But let me give you an example of the way biological systems are going to inspire engineering systems.

In my research group we've been very inspired by looking at polyclad flatworms. These are little ruffley sorts of things — if you scuba dive and you've ever been on a coral reef, you've seen these little ruffley things moving around. They're colorful, with ruffles around their edges, and they're very simple animals. Their brain has about 2,000 neurons; they can locomote, grab some food and, using their ruffles, push it into their mouth in the center. In the 1950s a series of papers on them was started, which I'm guessing — it never says so in the papers — was probably prompted by an accident that a grad student made.

They were seeing whether they could transplant brains between these polyclad flatworms — they would cut the brain out of one, cut the brain out of another, and then swap them — to see whether the function of the flatworms could be regained. When the brain was cut out, they were sort of dumb polyclad flatworms. They couldn't right themselves; they could locomote a little bit, but not very well; if food came right near their mouth, or their feeding orifice, they'd grab it, but they couldn't pull it in. Put another brain in, and a few days later they were almost back to normal.

Here's where the mistake happened: if you take the brain and turn it 180 degrees around and put it in backwards, the flatworm doesn't do too well at first. It tends to walk backwards early on, but after a few days it adapts, reorients, and can do stuff. In fact, if you look at the geometry of these worms, there are two sets of nerve fibers on each side — four nerve fibers running along the length of the body, right through the brain. If you cut the brain out, it's got four stubs this way and four stubs that way, and if you turn it around 180 degrees, the stubs can line up, regrow, and adapt. And actually, if you flip it over, upside down, it works too. If you flip it over and turn it around, it works (some things don't quite work out). If you take the brain out and turn it 90 degrees, it never works again, because the stubs don't join up. If you take the brain out, cut a hole in the back of the worm, and put it down at the bottom, it still works pretty well. This is just 2,000 neurons.

Imagine taking a Pentium processor out of an IBM PC, plugging it backwards into your Mac socket, and having it work. That's not the sort of thing our engineering does today, but it is the sort of thing biology does all over the place. By playing around with and engineering these biological systems, we're going to see a change — and this is my deep question, and John always asks what our deep questions are — a change in our understanding of complexity and our understanding of what computation is.

We have one model of computation right now. But just as computation came out of previous mathematics — without new physics or chemistry, just a rethinking of fairly conventional mathematics, with the notion of computation developing from around 1937 over the next 30 or 40 years — I'm expecting that we're going to see some different sort of understanding of complexity. By analogy: what computation was to the discrete mathematics that preceded it, this new understanding of complexity will be to our conventional understanding of information and computation. That then is going to change our engineering overall, and the way we think about engineering, over the next 50 to a hundred years. And it will all be the fault of understanding biology better.


KURZWEIL: Let me agree with a lot of what you said, Rodney. We have pretty similar views but we haven't — despite being on lots of panels — fully reconciled our models.

Let me address this issue of time frames, because it's not just a matter of hand-waving and noticing an acceleration. I have been modeling these trends for the past 25 years and have a track record of predictions based on these models. People say you can't predict the future, and that's true for certain types of predictions. If you ask me, what will the results of Pfizer's torcetrapib Phase III trials be, that's hard to predict. Will Google stock be higher or lower than it is today three years from now, that's hard to predict. But if you ask me what will the cost of a MIPS of computing be in 2010, or how much will it cost to sequence a base pair of DNA in 2012, or what will the spatial and temporal resolution of brain scanning be in 2014, it turns out those things are remarkably predictable and I'll show you on Saturday a plethora of these logarithmic graphs with very smooth exponential growth. In the case of computing, it's a double exponential growth going back a century. And there's a theoretical reason for that, and we really can make predictions.

In terms of the overall rate of technical progress, what I call the rate of paradigm shift, there's a doubling every decade. In the case of the power of information technology (bandwidth, price-performance, capacity, the amount of information we're collecting, such as the amount of DNA data, the amount of data on the brain, the amount of information on the Internet), these kinds of measures double every year. But if we take the rate of technical progress as doubling every decade, then to address your 1905 example: in the 20th century we made about 20 years of progress in terms of paradigm shift at today's rate of progress.

We'll make another 20 years of progress at today's rate, equivalent to the whole 20th century, in 14 years. I agree with the idea that we are perhaps a century behind in understanding biology compared to, say, computer science, but we will make a century of progress in terms of paradigm shift in another 14 years, because of this acceleration. This is not a vague estimate, but one based on data-driven models. I have a group of ten people that gathers data on just this kind of measurement, and it is surprising, but these models have both a theoretical basis and an empirical basis. The last 50 years is not a good model for the next 50 years. At the Time Magazine future-of-life conference, all the speakers were asked, what will the next 50 years bring? I would say all the speakers based their predictions on what we saw 50 years ago. Watson himself said, well, in 50 years we'll see drugs that enable you to eat as much as you want and remain slim. But five to ten years is a more realistic estimate for that, based on the fact that we already know essentially how to do it and have demonstrated it in animals.
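As a back-of-envelope check on that figure (our formalization, not Kurzweil's published model): normalize today's rate of progress to 1 and let it double every decade, so the rate at year t is r(t) = 2^(t/10). The progress accumulated by year T, measured in years at today's rate, is then

```latex
P(T) = \int_0^T 2^{t/10}\,dt = \frac{10}{\ln 2}\left(2^{T/10} - 1\right),
\qquad
P(T) = 20 \;\Rightarrow\; 2^{T/10} = 1 + 2\ln 2 \;\Rightarrow\; T \approx 12.5\ \text{years}.
```

Counting the doubling in discrete decade-long steps instead (ten years at rate 1, then the remaining ten years' worth at rate 2) gives T = 15; the 14 years quoted above sits between the two, depending on how the doubling is discretized.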

I agree with Rodney's point that the human brain and biology work on different principles. But these are principles that we have already been applying. My own field of interest is pattern recognition, and there we don't use logical analysis; we use self-organizing, adaptive, chaotic algorithms to do that kind of analysis, and we do have a methodology and a set of mathematics that governs it. As we reverse engineer the brain — and that process is proceeding exponentially — we're getting more powerful models that we can add to our AI toolkit.

And finally I'd respond to the question, how is it that one can make predictions about something that's so chaotic? Each step of progress in fields like computer science and biology is made up of many tens of thousands, hundreds of thousands of projects, each of which is unpredictable, each of which is chaotic. Out of this entirely chaotic behavior, it is pretty remarkable that we can make reliable predictions, but there are other examples in science where that is the case. Predicting the movement of one molecule in a gas is of course hopelessly complex, and yet the entire gas, made up of trillions of trillions of molecules, each of which is chaotic and unpredictable, has properties that are very predictable, according to the laws of thermodynamics. We see a similar phenomenon in an evolutionary process — biology is an evolutionary process, and technology is an evolutionary process.
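The gas analogy is easy to demonstrate numerically. The following toy simulation (an illustration, not a physical model of any real gas) builds each "molecule's" velocity from individually unpredictable random kicks; any single molecule's final velocity is anyone's guess, but the ensemble statistics land almost exactly where theory says they must.

```python
import random

random.seed(0)
n_molecules, n_steps = 10_000, 100

final_v = []
for _ in range(n_molecules):
    v = 0.0
    for _ in range(n_steps):
        v += random.gauss(0.0, 1.0)   # each kick is unpredictable
    final_v.append(v)

# One molecule: could land anywhere within several standard deviations.
print("one molecule:", final_v[0])
# The ensemble: mean near 0, variance near n_steps, as theory predicts.
print("mean:", sum(final_v) / n_molecules)
print("variance:", sum(v * v for v in final_v) / n_molecules)
```

The individual trajectories stay unpredictable no matter how many you simulate; only the aggregate becomes lawful, which is the sense in which Kurzweil claims aggregate technology trends are predictable even though individual projects are not.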

VENTER: I find it fun to sit here as a biologist and listen. I agree with a great deal of it — but there's a big difference when we're talking about engineering single-cell species versus trying to engineer more complex ones. We have a hundred trillion cells; when you try to use the same genetic code to get the same experiment done twice, it never works. "Identical" twins don't have the same fingerprints or footprints or the same brain wiring, because so many random events creep in each time there's a cell division or one of these other biological processes. It's pretty sloppy engineering, if it's engineering.

You can't get the same answer twice. But with single cells, with bacterial cells, I agree they're absolutely going to be the power plants of the future. It's going to happen even faster than either one of you is predicting, because we're at the threshold now where we're trying to design a robot to build these chromosomes and build these species, so that we could maybe make a million of them a day — because there are so many unknown genes that we basically have to do this in an empirical fashion and then screen for activities. Everybody's worrying about the “Andromeda Strain” sort of approach, but that's the way biology's going to progress, much more rapidly than the linear fashion we've seen.

BROOKS: Yes, that's an interesting point. I hate to be a nay-sayer, but Ray forces me to do it.

The idea of building a million of them, then assaying them, and maybe having evolution happen in situ is the way we're going to speed stuff up. On the computational side, about 15 years ago we thought that evolution in silicon was going to take off, and there was a lot of excitement about that. But we haven't quite figured it out. We're missing something. There's been 15 years of slow progress in the artificial life field, but not the takeoff we thought was going to happen in the early days of artificial life — at the Santa Fe conferences, and when the Santa Fe Institute was founded, back in the late '80s and into the early '90s.

Jack Szostak and other people have been doing real evolution in test tubes, because that's the sloppy way we know how to do it now. It may be that somebody at some point will come up with an understanding of how to do evolution better in silicon, and of what we're missing (this would be the sort of advance that could change things, like the development of quantum mechanics, which totally rewrote physics). That would then lead to all sorts of fast progress. So while I agree with statistical analysis of future history, there are these singular events which we can't predict, and which will have massive influence on the way things go.

KURZWEIL: I can tell you what's missing, which is a real understanding of biology; as I said before, we're at the early stages of that. We have had self-organizing paradigms like genetic algorithms, neural nets, and Markov models, but they're primitive models of biology, if you can even call them that. We haven't had the tools to examine biology. We do have the tools now to actually see how biology works: we have the sequenced genome, we're beginning to understand how these information processes work, we're able to see inside the brain, and we can develop more powerful models from that reverse engineering process. Then the question that comes up is, how complex is biology?

I certainly wouldn't argue that it's simple, but I would argue that it's a complexity we can manage, and that the complexity appears to be greater than it is. If you look inside the brain, at the cerebellum, for example, with the massive variety of wiring patterns of its neurons, which comprise half the neurons in the brain, there are actually very few genes involved in wiring the cerebellum. It turns out the genome says: well, OK, there are these four neuron types, they're wired like this, now repeat this several billion times, adding a certain amount of randomness for each repetition within these constraints. So it's a fairly simple algorithm, with a stochastic random component, that produces this very intricate wiring pattern. A key question is, how much information is in the genome? Well, there are 3 billion rungs, 6 billion bits; that's 800 million bytes. Roughly 2 percent of that codes for protein, so that's 16 million bytes that describe the actual genes.

The rest of it used to be called junk DNA; we realize now it's not exactly junk, it does control gene expression. However, it's extremely sloppily coded, with massive redundancies — one sequence, called Alu, is repeated 300,000 times. If you take out the redundancy, estimates are that you can achieve at least 90 percent compression, and then you still have something that is very inefficiently coded and has a low algorithmic content. I have an analysis showing there are about 30 to 100 million bytes of meaningful information in the genome. That's not simple, but it is a level of complexity we can handle. We have to finish reverse engineering it, but that's proceeding at an exponential pace. It's kind of where the genome project was a decade ago.
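Spelled out, the byte-counting above runs as follows (a restatement of Kurzweil's own figures, with his roundings):

```latex
3\times 10^{9}\ \text{base pairs} \times 2\ \text{bits/base}
  = 6\times 10^{9}\ \text{bits}
  = 7.5\times 10^{8}\ \text{bytes} \approx 800\ \text{million bytes}
\qquad
2\%\times 800\ \text{MB} = 16\ \text{MB};\quad
800\ \text{MB}\times(1 - 0.90) = 80\ \text{MB}
```

The 80 MB left after 90 percent compression falls inside the 30-to-100-million-byte range of meaningful genomic information that Kurzweil cites.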

BROOKS: Ray, you're going to make me be a nay-sayer again. I hate this, but see these — they're cell phones, right? Ray's worked in image recognition, pattern recognition; I've worked in it. We can't get our object-recognition systems — which is just a matter of reverse engineering that piddly little 16 megabytes of coding for the brain, or whatever it is — to recognize across classes like a two-year-old can.

In 1966 at the AI lab there was a summer vision project, run by an undergraduate, Gerry Sussman, that tried to solve that problem. My own Ph.D. thesis was in that area, in 1981. We still can't do generic object recognition today, in 2005, and people don't even work on the problem any more — you wouldn't be able to get funding for generic object recognition, because it's been "proved" so many times that it can't be done.

Instead, people work on very specific problems, like medical imaging or faces. The generic object recognition problem is a very hard problem. It's not just a matter of turning a crank on a genome for us to understand how that works in the brain. In the original proposal, written in 1966, Seymour Papert predicted we would gain the insight to be able to do generic object recognition. But it didn't happen then and it still hasn't happened. Some of these things, these singular sorts of events, we can't predict, and we can't just turn the crank on genes to get a deep understanding of what's going on in the brain, whether computational or transcomputational.

KURZWEIL: There's any number of bad predictions that futurists, or would-be futurists, have made over the decades, and I won't take responsibility for those other predictions. But all you're saying, which basically coincides with what I'm saying, is that we haven't yet reverse engineered the brain, and we haven't reverse engineered biology — but we are in the process of doing that.

We haven't had the tools to look inside the brain. You and I work in AI, we've gotten relatively little benefit from reverse engineering the brain in neuroscience. We're getting a little bit more now. Imagine I gave you a computer and said reverse engineer this, and all you could do was put crude magnetic sensors outside the box. You would develop a very crude theory of how that computer worked.

BROOKS: Especially if you didn't have a notion of computation ahead of time.

KURZWEIL: Yes, you wouldn't have an instruction set — you wouldn't even know it had an instruction set or an op code or anything like that. But you would say what I really want to do is place specific sensors on each individual signal and track them at very high speeds — then you could reverse engineer it, that's in fact exactly what electrical engineers do when they reverse engineer a competitive product.

Just in the last two years, we're now getting the tools that allow us to see individual interneuronal fibers, and can track them at very high speed, and in real time. There's a new scanning technology at the University of Pennsylvania that can see individual interneuronal fibers in vivo in clusters of very large numbers of neurons and actually track them signaling in real time, and a lot of data is being collected. And the data is being converted into models and simulations relatively quickly.

We can talk about what is the complexity of the brain and can we possibly manage that? My main point is that it is a complexity we can manage. But we are early in that process. The power of the tools is gaining exponentially, and this will result in expansion of our AI tool kit, and will provide the kind of methods that you're talking about. But just because we haven't done it today doesn't mean we're not going to get there. We just have the tools now to get there for the first time.

BROOKS: I absolutely agree with you that we'll get there, but I question the certainty of the timing.


QUESTION: Can you talk about computation and the brain?

BROOKS: A long time ago the brain was a hydrodynamic system. Then the brain became a steam engine. When I was a kid, the brain was a telephone switching network. Then it became a digital computer. And then the brain became a massively parallel digital computer. About two or three years ago I was giving a talk and someone got up in the audience and asked a question I'd been waiting for — he said, "but isn't the brain just like the World Wide Web?"

The brain is always — has always been — modeled after our most complex technology. We weren't right when we thought it was a steam engine. I suspect we're still not right in thinking of it in purely computational terms, because my gut feeling is there's going to be another way of talking about things which will subsume computation, but which will also subsume a lot of other physical stuff that happens.

When you get a bunch of particles and they minimize the energy of the system, then when you get a thousand times more particles it doesn't take a thousand times longer; it's not linear, it's not constant (there's a little bit of thermal stuff going on), but it's nothing like any computational process we can use to describe what happens in that minimization of energy. We're going to get to something, which encompasses computation, encompasses other physical phenomena, and maybe includes quantum phenomena, and will be a different way of thinking about what we currently call computation. That's what will become the new model for the brain, and then we'll make even more progress in knowing where to put the probes and what those probes mean, which currently we don't know too well.

KURZWEIL: Let me address it in terms of what we've done so far. Doug Hofstadter wonders whether we are intelligent enough to understand our own intelligence — implying that he doesn't think so, and that if we were more intelligent and therefore able to understand it, our brains would necessarily be that much more complicated, and we'd never catch up with them. However, there are some regions of the brain, a couple dozen, for which we actually have a fair amount of data, and we've been able to develop mathematical models of how these regions work.

Lloyd Watts has a model of 15 regions of the auditory system, and there are models and simulations of the cerebellum and several other regions. We can apply, for example, psychoacoustic tests to Watts' simulation of these 15 regions and get results very similar to those we get applying the same tests to human auditory perception. That doesn't prove it's a perfect model, but it does show it's in the right direction. The point is that these models are expressible in mathematics, and we can then implement those mathematical models as simulations on computers. That's not to say that the brain is a computer, but the computer is a very powerful system for implementing any mathematical model. Ultimately that will be the language of these models.

QUESTION: History is full of wars and many other unpredictable events. There also appears to be a growing anti-technology movement. Don't these phenomena affect the pace of progress you're talking about?

KURZWEIL: It might look that way if you focus on specific events. But if you look at the progress of computation, for example — for which we have a very good track record through the 20th century, certainly a very tumultuous time, with two world wars, a major depression in the United States, and so on — we see very smooth exponential growth, in fact double exponential growth: there's a slight dip during the Depression and a slight acceleration during World War II.
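One way to make "double exponential" concrete — a minimal sketch using invented illustrative numbers, not Kurzweil's actual dataset — is to fit the logarithm of price-performance to a quadratic in time. A plain exponential has a constant growth rate; a positive quadratic coefficient means the exponent itself is growing:

    import numpy as np

    # Invented illustrative data: log10 of computations per second per
    # dollar at decade intervals. The gaps between successive values
    # widen, which is the signature of double exponential growth.
    years = np.array([1940, 1950, 1960, 1970, 1980, 1990, 2000])
    log_perf = np.array([-1.0, 0.2, 1.6, 3.2, 5.0, 7.0, 9.2])

    # Fit log(performance) = a*t^2 + b*t + c. A plain exponential would
    # give a ~ 0; a positive a means the growth rate itself is growing.
    t = years - years[0]
    a, b, c = np.polyfit(t, log_perf, 2)
    print(f"quadratic coefficient a = {a:.2e}")  # positive here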

Hundreds of years ago, very little of the population was actually involved in this progress — a few people, Newton, Darwin, were advancing scientific knowledge. We still have very strong reactionary forces, but a much more substantial portion of the population is now applying its intellectual power to these problems and to advancing progress, and it's amplified by our technology; we routinely do things that would be impossible without our technology. Rodney mentioned the role of powerful computers and software in doing the genome project.

The kinds of social reaction we see now — Luddite reactions, reflexive anti-technology movements, and so on — are really part of the process; they don't slow things down. Even stem cell research is continuing: some people identify stem cell research with the whole of biotechnology, but it's really just one methodology, and even it is continuing. These social controversies tend to be like stones in a river: the flow of progress goes around them. And if you track the actual progress in these fields and measure it in many different ways, with dozens of different types of measures, you see a smooth exponential process.

QUESTION: Was there exponential progress in technology hundreds or thousands of years ago?

KURZWEIL: It would be very hard to track, because technological progress was so slow then, but I do have a chart I'll show you that goes back a very long time in terms of the whole pace of biological and technological evolution, and it has been a continually accelerating process; I postulate that this is a fundamental property of an evolutionary process. The idea of progress is now so deeply rooted — despite the people who don't believe in it and the movements that object to it — that it really is an evolutionary process at this point.

VENTER: It's a very good point, and maybe it doesn't affect the overall predictions, but it certainly affects reality: there are an awful lot of people who won't go into stem cell research right now, a lot of work has been shut down, a lot of scientists have had to leave the country to try to continue their research, and a lot of money that could have gone to it has been diverted.

It's probably the single most important area if we're ever going to understand how our brains got wired: if we don't understand how stem cells work, we'll never understand complex biology beyond single-celled organisms. We saw the same thing with synthetic biology when we made the phi X 174 virus, injected the DNA into E. coli, and had it start producing viral particles driven just by the synthetic DNA; there ensued a huge debate within the U.S. government about whether to classify our research, shut us down, and prevent us from publishing our data because it might enable bioterrorism.

KURZWEIL: Just to clarify, I do strongly endorse a free system, and I'm opposed to any constraints on stem cell research. Rodney and I work in a field that has no certification of practitioners — software developers aren't licensed — and no certification of products, despite the fact that software is deeply influential. My feeling is that we don't balance risks appropriately in the biological field. People are very concerned now that the FDA should be stricter, because nobody wants drugs that are going to harm people; on the other hand, we have to put on the scale the impact of delaying things.

If we delay stem cell research, gene therapy, or some heart disease drug by a year, how many hundreds of thousands of lives will be disrupted or destroyed by that delay? That is very rarely considered, because if you approve a drug and it turns out to be a mistake, a lot of attention goes to that; if something is delayed, nobody really seems to care, politically. We should move towards an open system. There are downsides to all of these technologies; there are risks. Bioterrorism is a concern, not just the use of well-known bioterrorism agents but the creation of new ones. And we make the dangers worse, because a bioterrorist does not have to put his invention through the FDA, while scientists like Craig, who are working to defend us, are hampered by these regulations.

BROOKS: Let me follow up on Craig's point. Scientists are being driven offshore — but we also have the problem that, because of September 11, we are having trouble getting foreign students into our universities. The students who are here are still very scared of going home or attending a conference outside the country, and we just can't get as many students as we could before. That's having a real impact and slowing a lot of things down, and other singular events could change things quite drastically too.

VENTER: The immigration issue has been flagged by the National Academy of Sciences as the number one issue affecting the future quality of science and medicine in this country.

KURZWEIL: You'll be pleased to know that things are not slowed down in China and India. I've got some graphs showing that the number of engineering graduates is declining in the United States while it's soaring in China. China is graduating 300,000 engineers a year compared to 50,000 in the United States.

QUESTION: What do you see happening in the immediate future?

BROCKMAN: I can answer that for the panel. Nobody knows and you can't find out. And you don't have to ask permission.

KURZWEIL: There are lots of things that are not predictable, but the attributes representing the power of information technology turn out to be predictable. It is a chaotic process, yet technology — certainly information technology — proceeded very smoothly through World War II, despite the fact that it was a very destructive time. We don't know what the future will bring.

These technologies aren't necessarily beneficial. Everybody in this room is trying to apply these capabilities to further human values and overcome disease and so on, but it would only take a few individuals to cause a lot of destruction. We don't know how these technologies will be applied. We can discuss how best to allow creative projects that will advance human knowledge and reduce human suffering to advance more quickly, but the future hasn't been written, even if certain attributes of information technology are predictable.

QUESTION: Despite the many compelling analogs between hardware and wetware, if you will — there are still very profound differences ... one of the really important attributes of biological systems is that there's tremendous plasticity ... [flatworm, 2000 neurons, etc.] I'm wondering what you see as the most compelling engineering exemplar of the systems that have some sense of plasticity — we look at dynamic adaptation or things of that kind, neural networks, really serious computation brings us to computer science... Where do you see that going?

KURZWEIL: We're in the early stages of a great trend, which is to apply biologically inspired models, and as we learn more precisely how biology works, we'll have more powerful paradigms. There are already a lot of self-healing systems that are adaptive. There's been a lot of progress in the last five years on three-dimensional molecular electronics, which is still formative, but those circuits are going to have to be self-organizing, self-healing, and self-correcting, because if you have trillions of components, you can't let one misplaced wire or one blown fuse destroy the entire three-dimensional mechanism.

Even circuits on the market today that are nominally flat have so many components that they are beginning to incorporate self-healing mechanisms that route information around areas that are not functioning. The Internet itself does that, and as we get to the World-Wide Mesh concept, it will become even more self-organizing. Right now all of these little devices are spokes into the Internet; they're not nodes on the Internet.

But we're going to move to where every single device is a node on the Internet. In addition to allowing me to send and receive messages while I'm sitting here, this phone will be transmitting and forwarding messages from other people. I will be a node on the Internet, and my phone will allow that because I will also be taking advantage of this mesh capability. We are moving towards much more of a self-organizing, self-healing paradigm. IBM has a big project on self-healing software to manage IT networks. There are a lot of examples of that.
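To make the routing-around-failures idea concrete, here is a minimal sketch — a toy model, not anything the panelists built; the graph and device names are invented for illustration — of a mesh of devices finding a new path when a node fails, in Python:

    from collections import deque

    def route(mesh, src, dst, failed=frozenset()):
        """Breadth-first search for a path from src to dst,
        skipping any nodes known to have failed."""
        queue = deque([[src]])
        seen = {src}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == dst:
                return path
            for neighbor in mesh.get(node, ()):
                if neighbor not in seen and neighbor not in failed:
                    seen.add(neighbor)
                    queue.append(path + [neighbor])
        return None  # no route survives the failures

    # A toy mesh: every device forwards traffic for its neighbors.
    mesh = {
        "phone":  ["router", "laptop"],
        "router": ["phone", "laptop", "tablet"],
        "laptop": ["phone", "router", "tablet"],
        "tablet": ["router", "laptop"],
    }
    print(route(mesh, "phone", "tablet"))                     # via the router
    print(route(mesh, "phone", "tablet", failed={"router"}))  # heals via the laptop

In a real mesh protocol the devices would discover routes cooperatively rather than via a global search, but the self-healing property is the same: traffic flows around the failure.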

VENTER: It's a critical point. As part of trying to make synthetic life, we're having to come up with a definition of what life is, and one of the key components we've found in every genome we've sequenced is a gene called recA, involved in DNA repair. Repairing DNA is one of the most fundamental components of life, but an even more fundamental component we've seen in every species we've sequenced is a set of built-in mechanisms for continued evolution. There's plasticity even in the simplest sets of genes in a single cell — that's in fact why we're having to build a cell from scratch, because we can't determine empirically which genes cover for the functions of others — and every genome we've done has some of these remarkably simple mechanisms built into the DNA.

Take Haemophilus influenzae — every one of us in this room has a different strain of it in our airways, because it continually goes through Darwinian evolution in real time. There are tetrameric repeats — four-base repeats — in front of all the genes associated with the cell-surface proteins and lipoproteins. Every 10,000 or so replications, the polymerase slips on these — that's called slipped-strand mispairing — and it shifts the reading frame of the genes downstream, basically knocking them out.

Just by knocking out genes in a random fashion, it constantly changes what's on the cell surface, and that's why our immune system can never catch up. It's constantly winning the war against our immune system, and we have basically millions of organisms in us adapting in real time. We haven't even begun to approach the complexity of what biological systems can do and really do.
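As a rough illustration of that mechanism — a toy model, not Venter's data; the only number taken from his description is the slippage rate of about one in 10,000 replications — one can simulate phase variation in Python:

    import random

    SLIP_RATE = 1e-4  # ~1 slip per 10,000 replications, per the figure above

    def replicate(repeats):
        """Copy a tract of four-base repeats; occasionally the polymerase
        slips and the copy gains or loses one repeat unit."""
        if random.random() < SLIP_RATE:
            repeats += random.choice([-1, 1])
        return max(repeats, 1)

    def gene_on(repeats):
        # Each unit is 4 bases, and 4n mod 3 == n mod 3, so the downstream
        # gene stays in frame only when the repeat count is a multiple of 3.
        return repeats % 3 == 0

    population = [12] * 2_000        # start with the gene in frame everywhere
    for generation in range(2_000):
        population = [replicate(n) for n in population]
    expressed = sum(gene_on(n) for n in population)
    print(f"gene still in frame in {expressed}/{len(population)} cells")

Even at that low slippage rate, after a couple of thousand generations a noticeable fraction of the population has switched the gene off (or back on) — the constant cell-surface shuffling Venter describes.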

QUESTION: What about synthetic genes?

VENTER: It's an issue that we're facing whether or not we have synthetic genomics. We talked earlier about the influenza virus: you get two of these viruses in one animal and they can recombine to form all kinds of new variants that could put — if it's like the last pandemic — 75 million people at risk of death very rapidly. There's little difference between a newly emerging infection and a deliberately made pathogen in terms of the impact it has on humanity, and we need defenses against both. I've argued that it's never a waste of the government's money to develop new antivirals, new antibiotics, or new approaches to treating these infections across the board, whether or not there's ever a bioterrorism event.

The new technology we've developed, with which we can synthesize a virus in two weeks, does have some clear implications if someone really wanted to do harm through these techniques. One of the arguments around it is that it's a lot easier to obtain any kind of lethal organism you want through much simpler means than by trying to synthesize it: all the efforts to alter smallpox and anthrax were major state-supported programs, in the U.S. and in the former Soviet Union.

These are not simple processes right now. We could be in a place in ten years where you could be the first one on the block to build your own species in a garage, but we're not quite there yet, and at the same time these developments — as we get literally thousands or millions of human-made organisms — increase the chances of finding new approaches for counteracting them. On the engineering side, it's easy to build in mechanisms so that they can't self-evolve and can't survive if they escape from the laboratory. But with these same techniques we've developed, it certainly wouldn't be difficult for our group to build a smallpox virus from the DNA sequence in a month or two.

KURZWEIL: There are a couple of characteristics that affect the danger of a new virus — obviously how deadly it is and how easily it spreads — but probably the most important is stealthiness. SARS spreads fairly easily and is actually pretty deadly, but it's not that stealthy, because the incubation period is fairly short. Naturally emerging viruses don't tend to have the worst characteristics on all of these dimensions, so one could, if one were pathologically minded, try to design something at the extreme end of each of these spectrums.
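A back-of-the-envelope way to see why stealth matters so much — a toy calculation, not anything from the panel, with invented numbers — is a branching process: if each case infects r others per incubation period, and the outbreak is only noticed after d incubation periods, the number of cases by detection time grows like r^d:

    def cases_before_detection(r, generations):
        """Toy branching process: each case infects r new cases per
        incubation period; the outbreak is noticed only after the first
        case shows symptoms, `generations` incubation periods in."""
        total, current = 1, 1
        for _ in range(generations):
            current *= r
            total += current
        return total

    # Same transmissibility and lethality, different stealth:
    print(cases_before_detection(r=3, generations=2))   # short incubation: 13
    print(cases_before_detection(r=3, generations=5))   # long incubation: 364

Stretching the undetected window from two incubation periods to five multiplies the pathogen's head start nearly thirty-fold before any defense can be mounted.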

I did testify before Congress recently, advocating that we greatly accelerate the development of these defensive technologies. It's true that it's not easy to create an engineered biological virus, but the tools, knowledge, and skills needed to create such a bioengineered pathogen are more widespread than those needed to create, say, an atomic bomb, and the result could potentially be more dangerous. Some very exciting broad-spectrum antiviral techniques are close at hand; we could apply RNA interference, for example, and other emerging techniques to provide a very effective defensive system. It's a race: we want to make sure we have an effective defense when we need it. Unfortunately, the political will doesn't get galvanized unless there's an incident; hopefully we can interest the funding sources before it's needed.

