ON "CREATION OF A BACTERIAL CELL CONTROLLED BY A CHEMICALLY SYNTHESIZED GENOME" BY VENTER ET AL" [5.20.10]
On May 20th, J. Craig Venter and his team at the J. Craig Venter Institute announced the creation of a cell controlled by a synthetic genome in a paper published in Science. As science historian George Dyson points out, "from the point of view of technology, a code generated within a digital computer is now self-replicating as the genome of a line of living cells. From the point of view of biology, a code generated by a living organism has been translated into a digital representation for replication, editing, and transmission to other cells."
This new development is all about operating at large scale. "Reading the genetic code of a wide range of species," the paper says, "has increased exponentially from these early studies. Our ability to rapidly digitize genomic information has increased by more than eight orders of magnitude over the past 25 years." This is an enormous scaling up of our technological abilities. Physicist Freeman Dyson, commenting on the paper, notes that "the sequencing and synthesizing of DNA give us all the tools we need to create new forms of life." But it remains to be seen how those tools will serve in practice.
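For a sense of what "eight orders of magnitude over 25 years" implies, a back-of-envelope calculation (ours, not the paper's) converts that factor into a doubling time; a factor of 10^8 works out to a doubling roughly every 11 months, faster than Moore's law:

```python
import math

# Back-of-envelope reading of the paper's figure: a more than
# 10^8-fold improvement in DNA digitization capacity over 25 years.
factor = 1e8                        # eight orders of magnitude
years = 25

doublings = math.log2(factor)       # ~26.6 doublings in 25 years
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```

By comparison, Moore's law corresponds to a doubling roughly every 18 to 24 months.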
One question is whether a DNA sequence alone is enough to generate a living creature. One reading of the paper suggests it is not, because the synthetic DNA was inserted into existing Mycoplasma cells — that is, this is not about "creating life," since the new life requires an existing living recipient cell. If so, what is the chance of producing something de novo? The paper might then appear to describe a somewhat banal technological feat: the new techniques build on existing capabilities. What else is being added; what is qualitatively new?
While it is correct to say that the individual cell was not created, a new line of cells (dare one say species?) was generated. This is new life that is self-propagating, i.e., "the cells with only the synthetic genome are self-replicating and capable of logarithmic growth."
Will the new techniques described in the paper allow us to bring extinct species back to life? Here are three possible stages beyond the production of a bacterial cell: 1. generating an extinct human relative, i.e. a Neanderthal; 2. generating a woolly mammoth; 3. generating a Tasmanian wolf.
Generating a Neanderthal, given the recent mapping of the Neanderthal genome by Svante Pääbo, seems to be feasible, but it will raise ethical hackles. Don't hold your breath waiting for someone to try it. Generating a woolly mammoth will not pose the same ethical problem, and it also seems feasible by using an elephant's placenta: inject mammoth DNA into a modern elephant egg from which the elephant DNA has been removed, then implant the egg in an elephant. A real challenge will be to generate a truly extinct species such as a Tasmanian wolf, for which no host cells exist.
What does this mean? We don't know yet, and we may not know for years. For now, all we can do is speculate responsibly. As Freeman Dyson notes:
Life goes on ... but it won't be the same.
To provide context, we have put together a retrospective of Edge events, transcripts, and videos featuring the pioneers in this area, who are among the key players in what we are calling "A New Age of Wonder."
The Edge Reality Club discussion on the paper, "Creation Of A Bacterial Cell Controlled By A Chemically Synthesized Genome," is below.
A retrospective of Edge events, transcripts, and videos featuring the pioneers of synthetic genomics.
The Observer profile
By Tim Adams
...Stewart Brand, the ecological visionary and creator of the Whole Earth Catalog, is more persuaded. Brand has got to know Venter over the last couple of years through John Brockman's Edge initiative, which brings together the world's pioneering minds. What differentiates Venter from many of his peers, Brand believes, is that he is not only a brilliant biologist but also a brilliant organisational activist. "A lot of people can think big but Craig also has the ability to fund big: he doesn't wait for grants, he just gets on and finds a way to do these things. His great contribution will be to impress on people that we live in this vast biota of microbes. What he has shown is that microbial ecology is now where everything is at."
Brand once suggested that "we are as gods and we might as well get good at it". That statement has gained greater urgency with climate change, he suggests. "Craig is one of those who is rising to the occasion, showing us how good we can be."...
FREEMAN DYSON
This paper reminds me of a saying that is well-known to pure mathematicians: "Every big discovery starts with a bad proof." This is true in mathematics. The first proof in a new subject is bad, because the discoverer is a first-rate mathematician, struggling to overcome one obstacle after another and not caring about elegance. Afterwards, second-rate mathematicians tidy up the details and find good proofs.
I think the same saying holds good in science if you replace "proof" by "experiment". This experiment, putting together a living bacterium from synthetic components, is clumsy, tedious, unoriginal. From the point of view of aesthetic and intellectual elegance, it is a bad experiment. But it is nevertheless a big discovery. It opens the way to the new world of synthetic biology. It proves that sequencing and synthesizing DNA give us all the tools we need to create new forms of life. After this, the tools will be improved and simplified, and synthesis of new creatures will become quicker and cheaper. Nobody can predict the new discoveries and surprises that the new technology will bring. I feel sure of only one conclusion. The ability to design and create new forms of life marks a turning-point in the history of our species and our planet.
The major effect of this paper will be to force a redefinition of life, since we declare that nothing we manufacture can be life.
GEORGE DYSON
There are two ways of looking at this experiment. From the point of view of technology, a code generated within a digital computer is now self-replicating as the genome of a line of living cells. From the point of view of biology, a code generated by a living organism has been translated into a digital representation for replication, editing, and transmission to other cells.
In 1953, when the structure of DNA was determined, there were 53 kilobytes of high-speed electronic storage on planet earth. Two entirely separate forms of code were set on a collision course. Primitive as it may be, we now have one of the long-awaited results.
Seven-generation sustainability is an ecological concept that urges the current generation of humans to live sustainably and work for the benefit of the seventh generation into the future.
The concept originated with the Iroquois, who thought it appropriate to look seven generations ahead (a couple of hundred years into the future) and ask whether the decisions made today would benefit their children seven generations hence. (Lyons O, An Iroquois Perspective.)
Venter's experiment is a tour de force with many implications. The DNA of the synthetic cell contains watermark segments, one of which bears the words of the physicist Richard Feynman: "What I cannot build, I cannot understand."
Astronomers may beg to differ — they understand our Sun very well without having built it — but the big news about M. mycoides JCVI-syn1.0 is that we can build it. This act redefines life as we know it, and tells us something about the future of a universe that took 13.7 billion years to build its own blocks and tools for life. That could be our own future.
To find out we must look at the stars — is there life on other planets? To succeed in that search we must understand life, and to understand life, we must build it. The first steps have been taken.
DANIEL C. DENNETT
The achievement of Craig Venter and his team is certainly a major milestone in technology, and his forecast of the stupendous benefits that may be reaped is, if anything, understated. Now we need to ask how this new technology should be regulated. There is no doubt at all that self-replicating bacteria (and other microbes) with artificial genomes could do more harm than good if they escaped our control. They will not just replicate but evolve, mutating swiftly unless we take special steps in advance to prevent this from happening, and even then there will be a risk — not large, but not ignorable — of seeing our preventive efforts, whatever they are, undone by mutation. Evolution is as unrelenting as gravity, an omnipresent prevailing wind that unties knots, unlocks doors, and seeks out every escape route as assiduously as the inmates of a prison.
At first glance, the problems seem straightforward and not insuperable. We need to apply the lessons learned in other novel technologies. Nuclear reactors are equipped with "fail safe" systems of considerable ingenuity and reliability (nothing is perfect). Like air brakes (in which the default position is ON, the pressure being provided by powerful springs, held in check by air pressure), the default position of the control rods in nuclear reactors is IN, maintained by gravity unless held OUT by positive forces. We should equip all artificial life forms with similarly designed default DIE WITHOUT OFFSPRING mechanisms held in check by some positive contribution we can swiftly remove when we need to put the brakes on. If artificially designed life forms can be kept exquisitely vulnerable, doomed to immediate extinction unless they get their supply of X, and we control the supply of X, we can keep them on a short leash (and if we, their controllers, get distracted or disabled in any way, they die).
This is just one obvious step we must take, and it is probably not all that hard to achieve. Unlike the disease organisms and viruses that are proving so adept at evading our efforts to suppress them biochemically, laboratory-created life forms will not be cryptic but close to transparent: we will know a lot about them from the inside out, and all the troubles we have overcome in learning how to keep them alive will give us lots of insight into just what they need to stay alive.
But of course such a "fail safe" system is not itself foolproof. We will want to have further provisions in force, and probably, as with nuclear materials, the main problem confronting us will be the possible roles of deliberate human sabotage, or just irresponsible human curiosity. With a technology of such power, the temptation to explore its powers informally will be ubiquitous. And here the parallel with the safeguards of nuclear technology is misleading; a more ominous parallel is with cyber-technology. Fortunately for us all, enriching fissionable material is still, more than sixty years after it was first done, a very expensive, high-tech process, not something a hobbyist can do clandestinely in his basement. Devising state-of-the-art cyberattack weapons, in contrast, can be done by smart high school kids in their bedrooms, at almost no cost. The result is that we are shockingly vulnerable to anyone who sets out to develop a large-scale cyberattack. The arms race favors offense over defense by a huge margin: it is orders of magnitude cheaper and easier to develop cyberoffense than to defend against it.
Once the techniques honed by Venter and his team become widely known, will it be utterly beyond the capabilities and budgets of, say, well-trained biology majors to develop their own artificial life forms? That is not at all clear. What good will it do to have international agreements about the obligations of laboratories to equip their creations with default-apoptosis machinery if there are thousands of free-lancers engaging in bio-hacking? The price we will pay for this huge amplification of our technological prowess is probably an equal and opposite vulnerability. Welcome to the fast lane, humanity.
NASSIM N. TALEB
If I understand this well, to the creationists, this should be an insult to God; but, further, to the evolutionist, this is certainly an insult to evolution. And to the risk manager/probabilist, like myself & my peers, this is an insult to human Prudence, the beginning of the mother-of-all exposure to Black Swans. Let me explain.
Evolution (in complex systems) proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous and repetitive small, near-harmless mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: concave interventions, i.e., the achievement of small certain gains through exposure to massive stochastic mistakes (coming from the natural incompleteness in our understanding of systems). Our record in understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of informational uncertainty (even more than markets), producing tail risks of unheard proportions.
I have an immense respect for Craig Venter, whom I consider one of the smartest men who ever breathed, but, giving fallible humans such powers is similar to giving a small child a bunch of explosives.
Knowing little about genome engineering, shall I meander into more ancient precedents?
What do we remember: the first illuminated manuscript, or the Gutenberg printing press? The first car, or the first affordable car (Ford's Model T)? The first computer, or the first popular personal computer (from Woz & Jobs)? The first 121 Edison power stations delivering direct current in 1887, or Nikola Tesla's AC electric grid?
Do we prefer the first DNA model, Pauling's triple helix, or Watson and Crick's double helix? The first atom bomb or the last? The first authors on PCR, Kleppe in 1971, Saiki in 1985, or the innovator who brought it to practice — Kary Mullis? The first human genome in 2004 for $3 billion or the first affordable ($1500) personal genome sequence in 2009?
Returning to the topic of genome engineering, are we looking for the first construction of a tiny genome (for $40 million) or a larger genome already cranking out green chemistry? Do we applaud the first rationale for engineering whole genomes ("Because it's there" — a la George Mallory, who died climbing Everest) — or seek a more compelling and nuanced articulation — "to make virus-resistant production strains, engineering standards, safety features, new bio-polymers, mirror-chemistries, and bring the extinct back to life"?
RICHARD DAWKINS
Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, The Greatest Show on Earth
Craig Venter’s Brave New World
Craig Venter's artificial bacterium debuted almost simultaneously with Svante Pääbo's publication of the greater part of the Neanderthal genome. Put the two together and ask whether we could — or should — recreate a living, breathing Neanderthal. Of the technologies that would be required, the Venter team has provided proof of principle for an important component. Dolly was cloned from the entire diploid genome of an adult sheep's udder cell, dropped into an enucleated ovum. The Venter equivalent of Ian Wilmut's achievement would be to go to the library (or in this case the Internet), take down the book labelled 'Sheep Genome Project' (or rather download the data files), and synthesize a complete set of sheep chromosomes from four bottles of chemicals labelled A, T, C and G. The synthetic genome would then be dropped into an enucleated sheep cell, as per Dolly.
While they were about it, the team might improve on the genome of any one donor sheep by substituting, say, wool-growing genes from The Champion Merino Genome Project and hardiness genes from The Soay Genome Project. Maybe some code from the Goat Genome Project to broaden the creature's preferred diet, or from the Chamois Genome Project to give it a better head for heights? Perhaps even a Cut and Paste job from the Otter Genome Project, to give the über-sheep a taste for water sports.
We'd need to do something similar to re-grow a Neanderthal from Svante Pääbo's data. Or, later, a computed intermediate between the chimpanzee and human genomes to re-create the 6-million-year-old common ancestor. And then, might a born-again Lucy split the difference again?
The technical difficulties would be formidable, but present progress suggests that they will be overcome. I leave the speciesist ethical difficulties on one side, except to note that ethical thinking, too, has a way of progressing as the decades go by. There is the harder problem that Pääbo's Neanderthal sequence is only 60 percent complete, and 100 percent may be unattainable. Presumably the residue would be coloured in from the H. sapiens genome, and that could create technical problems as well as compromise the authenticity of the clone as a 'true' Neanderthal.
But Neanderthal bones are tens of thousands of years old. Should we disinter Charles Darwin's bones from Westminster Abbey with the same insouciance as the Roman Catholic Church is now displaying toward the remains of his contemporary, Cardinal Newman? Might a new identical twin brother of the great naturalist ride shotgun to Craig Venter's future twin, on a round-the-world DNA-harvesting voyage? Could Darwin Junior be mathematically enhanced by a few judicious splicings from the Albert Einstein Genome Project? Or get a head-start in molecular genetics by strategic borrowing from the Francis Crick Genome Project? The Jeremy Bentham Genome Project might suffer utilitarian doubts over whether the taxidermic curiosity in the Entrance Hall of University College, London still contains any of his authentic remains.
Of course no steps were taken to preserve the DNA of any of these great men. Today's equivalents don't need to be cryogenically preserved for the Craig Venters of the future. Nothing so messy or expensive. Give or take some epigenetic mark-ups, a simple computer disk is all it takes: just miles and miles of A, T, C, G.
And the J Craig Venter Genome Project is already on line ...
PZ MYERS
Biologist, University of Minnesota; blogger, Pharyngula
I have to address one narrow point that is being discussed in the popular press and here on Edge: is Venter's technological tour de force a threat to humanity, another atom bomb in the hands of children?
There is a threat, but this isn't it. If you want to worry, think about the teeming swarms of viruses, bacteria, fungi, and parasites that all want to eat you, aided (as we are defended) by the powers of natural selection — we are a delectable feast, and nature will inevitably lead to opportunistic dining. Those organisms are a far, far bigger threat to Homo sapiens, since they are the product of a few billion years of evolutionary refinement, not a brief tinkering probe into creation.
Nature's constant attempts to kill us are often neglected in these kinds of discussions as a kind of omnipresent background noise. Technology sometimes seems more dangerous because it moves fast and creates novelty at an amazing pace, but again, Venter's technology isn't the big worry. It's much easier and much cheaper to take an existing, ecologically successful bug and splice in a few new genes than to create a whole new creature from scratch…and unlike the de novo synthesis of life, that's a technology that's almost within the reach of garage-bound bio-hackers, and is definitely within the capacity of many foreign and domestic institutions. Frankenstein bacteria are harmless compared to the possibilities of hijacking E. coli or a flu virus to nefarious ends.
The promise and the long-term peril of the ability to synthesize new life is that it will lead to deeper understanding of basic biology. That, to me, is the real potential here: the ability to experimentally reduce the chemistry of life to a minimum, and use it as a reductionist platform to tease apart the poorly understood substrates of life. It's a poor strategy for building a bioweapon, but a great one for understanding how biochemistry and biology work. That is the grand hope that we believe will give humanity an edge in its ongoing struggle with a dangerous nature: that we can bring forethought and deliberate, directed opposition to our fellow organisms that bring harm to us, and assistance to those that benefit us. And we need greater knowledge to do that.
Of course more knowledge brings more power, and more possibility of catastrophe. But to worry over a development that is far less immediately dangerous than, say, site-directed mutagenesis, is to have misplaced priorities and to be basically recoiling from the progress of science. We either embrace the forward rush to greater knowledge, or we stand still and die. Alea iacta est; I look forward to decades of revolutionary new ideas and discoveries and technologies. May we have many more refinements of Venter's innovation, a flowering of novel life forms, and deeper analyses of the genome.
RODNEY BROOKS
Panasonic Professor of Robotics (on leave), MIT Computer Science and Artificial Intelligence Lab; Author, Flesh and Machines: How Robots Will Change Us
The work reported last week in Gibson et al. was certainly a technical tour de force. But it was not a scientific surprise in the way that Venter's decoding of the human genome using shotgun sequencing was a surprise — that just seemed too big a job for the combinatorics not to bog the process down. Nor was it as surprising as Venter's previous work, in which he and his team removed 100 of the 485 protein-coding genes of what was already the shortest known genome of an organism capable of independent growth, and the new genome still supported continued growth and reproduction.
Though not a scientific surprise, the new work seems to have awakened the press to certain realities that molecular biologists have believed at their very cores for decades; but the fuss from both the press and ethicists does not follow logically from what has been achieved.
As the paper's title explicitly says, the team have built a line of cells, where the ancestor genome was chemically synthesized. The ancestor cells all started out "life" as cells of a different species, naturally produced. Their DNA was replaced by a string of just over a million base pairs of synthetically produced DNA. The cells then continued to reproduce and to faithfully copy that synthesized DNA.
So is this synthetic life? Yes, and no.
It is synthetic life in that the genome is synthetic. Besides being built from over a thousand separately constructed subpieces, the genome differs at 19 base pairs from the wild type. The researchers also inserted four watermarks, containing encoded versions of their names and an email address, using a total of 4,658 base pairs. But the fact that the genome works as a genome is not a surprise to molecular biologists. They have long believed that life is chemistry, and that one string of connected atoms is just as good as another having the same arrangement. They long ago discounted the idea that there is any sort of specialness imparted to a molecule by its history of production. Molecules have no souls.
But the new cells are also not synthetic life in that the ancestor cell was an existing live cell. It was not built from pieces in the same way that the synthetic genome was built. That is another, perhaps harder technological challenge, but also one that there may be no imperative to try to achieve in the short term; hijacking existing cells may be all that we need to develop all sorts of new synthetic forms.
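As an aside, the watermark idea (spelling out text in base pairs) is simple to sketch in code. The mapping below, 2 bits per base, is an invented toy scheme for illustration, not the actual cipher the JCVI team used in its watermarks:

```python
# Toy DNA "watermark" codec: 2 bits per base (A=00, C=01, G=10, T=11).
# An illustrative scheme only, NOT the encoding used in JCVI-syn1.0.
BASES = "ACGT"

def encode(text: str) -> str:
    """Encode ASCII text as a DNA string, 4 bases per byte."""
    dna = []
    for byte in text.encode("ascii"):
        for shift in (6, 4, 2, 0):          # big-endian 2-bit chunks
            dna.append(BASES[(byte >> shift) & 0b11])
    return "".join(dna)

def decode(dna: str) -> str:
    """Invert encode(): read a DNA string back as ASCII text."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return out.decode("ascii")

watermark = encode("JCVI")
print(watermark)          # 16 bases, 4 per character
print(decode(watermark))  # round-trips back to "JCVI"
```

At four bases per ASCII character, even a short message fits comfortably inside a 4,658-base-pair watermark region.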
The press has both overplayed the surprise of what has been done and underplayed the interesting challenges that lie ahead; its biggest fears do not automatically follow from the current achievement.
Here are some next steps, which will require not only creative, hard technological work but also a few new scientific surprises:
By then the ethicists will have something to worry about.