187 — June 29, 2006
As creative as we become, and as industrious and as good as we are at designing and manufacturing living things, which we've been doing since the stone age — no matter how good we get at that, it's like calling a candle a supernova. A candle is not a supernova; it's not even in the same league. And we, as intelligent designers, are not in the same league as the "Intelligent Designer" that designed the whole shebang. We're not designing sub-atomic particles from scratch; we're not designing the Big Bang. We're really not even designing life; we're just manipulating it.
The collection of 16 essays from experts begins with Jerry A. Coyne's piece about evidence of evolution buried in our DNA.
"Our genome is a veritable farrago of non-functional DNA, including many inactive 'pseudogenes' that were functional in our ancestors," he notes. "Why do humans, unlike most mammals, require vitamin C in their diet? Because primates cannot synthesise this essential nutrient from simpler chemicals."
It seems we still carry all the genes for synthesising vitamin C, though the gene used for the last step in this pathway "was inactivated by mutations 40 million years ago, probably because it was unnecessary in fruit-eating primates."
Tim D. White's piece takes one through volcanic rock samples 'fingerprinted at the Los Alamos National Laboratory', and fossils aged millions of years. "Today, evolution is the bedrock of biology, from medicine to molecules, from AIDS to zebras," declares White.
"Biologists can't afford to ignore the interconnectedness of living things, much as politicians can't understand people, institutions or countries without understanding their histories."
It's fairly safe to say that most Canadians couldn't tell a wormhole from a doughnut hole, nor explain the basic mechanics of global warming, nor distinguish between Fermat and Fibonacci.
It's all too easy to put this down to simple fear of science, but that doesn't exculpate us from attempting to understand at least some of what is the best existing explanation -- pace various fundamentalisms -- for the workings of the universe and its contents. Of course, science has its enemies -- not just among the hyper-religious, but also many postmodernists, who see it as simply one among a competing array of equally valid master narratives. But at least ever since Aristotle, mankind has been consumed by a desire to understand the universe and our place in it. So why should Globe Books be any different? Our commitment to reviewing science books is part curiosity, part missionary. But we don't get to nearly as many as we'd like, so I offer a breathless roster of new titles well worth your consideration.
What We Believe But Cannot Prove: Today's Leading Thinkers on Science in the Age of Uncertainty. More than 100 minds, some doubtless great (including Ian McEwan, Robert Sapolsky, Steven Pinker, Jared Diamond and Rebecca Goldstein), ponder the question: What do you believe to be true even though you cannot prove it? For me, the answer is Sherlock Holmes, case proved in . . .
Jimmy Wales and the Planet Wikipedia
Wikipedia has run into criticism because of its open model, in which false information can find its way into articles. A recent edition of the Web forum www.edge.org focuses on Wikipedia in a debate over the limits of collectivism on the Internet.
Think of the cell as an operating system, with engineers taking the place of traditional biologists in retooling stripped-down components of cells (bio-bricks), in much the same vein as in the late '70s, when electrical engineers were working their way to the first personal computer by assembling circuit boards, hard drives, monitors, etc. It's no accident that the phrase "bio-hackers" is in the conversation, as this new crowd has a lot in common with the computer engineers around the Homebrew Computer Club of the '70s who led the development of the personal computer.
Central to this move to engineer biology, to synthesize life, is Harvard researcher George Church.
"Today I am involved in a number of synthesis and sequencing endeavors," he says. "First, the BioFab group works together on 'constructive biology', which has a number of tightly overlapping parts of a Venn diagram."
"There's IGEM, 'International Genetically Engineered Machines' group, which is now in its fourth year, and has 39 universities involved. It's a very interesting social phenomenon; it involves wikis and a lot of undergraduates, 39 teams of 10 to 20 people each. It's amazingly intense and enjoyable — kind of like the robot competitions, or the DARPA Grand Challenges. They compete to make cool things during the summer, and some go year-round working on those cool things — engineering life.
"Some of the people who started that group are also part of BioBrick Foundation, a non-profit, and a company called Codon Devices. So the founders of the field are defined by the intersection, or union, of those sets, depending how you look at it.
"BioFab group is also a subset of the Codon Devices scientific advisory board. And that's a Cambridge company that does synthetic biology. We're distinct from IGEM and the BioBrick Foundation and other synthetic biology groups that are emerging."
Church points out that "almost every new thing is a combination of two old things. This is a kind of a union of engineering design principles that might be familiar to people in large-scale integrated circuits, combining that with genetic engineering, metabolic engineering, both of which are older — decades old, not ancient — and systems biology, which itself is a combination of feedback concepts, differential equations and so forth — those could be incorporated as well. There's also some bringing together of the chemistry and automation to make DNA — large highly accurate pieces of DNA — combining in concepts of laboratory evolution, which is relatively new. These things all meet together — kind of all these streams flowing together suddenly, all at once, into synthetic biology. Enough old things brought together into a new package that it constitutes an invention, a new field."
Unlike typical labs, a BioFab "Lab" can make a copy of itself. "Once you have a really great engineered biology system, you can make as many copies of it as you want: you could scale it up… (it does it itself; it's self-assembling). It's a dream of mechanical, electrical, and chemical Fab Labs — if they ever made, say, a milling machine that could make a copy of itself. That would be great. Then they'd have a self-replicating machine; that would be a milestone."
There are inevitable questions surrounding Church and his colleagues about "playing God," and there are also concerns about the kinds of bio-terror, lab accidents, and Frankenstein-like creations that have informed the writings of such thinkers as Bill Joy and Lord (Martin) Rees. These concerns were addressed by researchers in the field last month at Synthetic Biology 2.0, the second annual conference in this new field, which was convened at UC Berkeley. According to their Web site, "the SB2.0 community is developing a written statement describing some principles for advancing this new field in a safe and effective way, based on the third day of discussions and input from conferees. A pdf draft of the declaration is available here."
GEORGE CHURCH is Professor of Genetics at Harvard Medical School and Director of the Center for Computational Genetics. His current research focuses on integrating biosystems-modeling with personal genomics & synthetic biology.
(GEORGE CHURCH:) The biggest questions I'm asking myself, at least in the laboratory, are: "What is it that makes us individuals?" That's what we call the Personal Genome Project. Its aim is holistic, in contrast to the usual single disease or tissue.
The second is: how do we engineer biology? This is what we call our "constructive biology" or "biological design" effort. The two might intersect quite nicely in the form of personalized medicine. The first — what makes us who we are — could apply to all living things, but for now let's say humans. It's an analytic question: it can be addressed with genomic tools — technologies — which we develop.
The second question, the synthetic one, is how we can redesign living systems to achieve new goals. How can we evolve them in the laboratory, in subtle or in radical ways, to achieve biomedical, agricultural, or other manufacturing goals? Those are the two big questions that we deal with.
The intersection is personalized medicine, where you would ask, once you know who you are, what you would need to fix or improve. In the world of engineering and commerce a few fields display exponential growth curves. Most fields of endeavor do not — steel and cotton, for example, have pretty bumpy but basically slow, flat growth. But information technologies — computers, communication, DNA sequencing, synthesis — have exponential curves, with nearly yearly doubling.
When I started on this as a graduate student I wanted everyone to have access to their DNA, but it would have cost many billions of dollars to even come close to being able to determine your genetic inheritance at a chemical level. Now that it's getting to the point where it's affordable, it becomes kind of an economic biomedical software exercise to figure out how to make that genome useful, as well. "Affordable" means in the range of 10 or 20 thousand dollars, which would be the cost of a week-long stay in an expensive hospital, or could be amortized over 80 years of your life, saving a hundred dollars here and there per year.
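The amortization arithmetic here is easy to check; a toy sketch (the dollar figures are the ones quoted above, not actual prices):

```python
def yearly_cost(total_cost: float, years: int) -> float:
    """Spread a one-time sequencing cost evenly across a lifespan."""
    return total_cost / years

# Church's "affordable" range, amortized over 80 years of life:
low = yearly_cost(10_000, 80)    # $125 per year
high = yearly_cost(20_000, 80)   # $250 per year
print(f"${low:.0f}-${high:.0f} per year")
```

which lands right at the "hundred dollars here and there per year" he mentions.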
When you start thinking about that, the benefits that you need to dream up don't have to be that much. For example if knowing your genome would allow you to prioritize your colonoscopy or mammography a couple of years earlier or later, it could save you some money, or some quality of life. The objective is not necessarily to live longer, but to live better. If you do happen to live 160 years, then the technological goal and the societal goal interlinked with that would be to have at least 150 of those 160 years be productive and enjoyable.
The transition started to get serious commercially about the time the genome data started flowing in. Even though the human genome isn't totally finished yet, around 2005 companies started saying, "hey, that was a great stunt, but now we have to deliver it to people, and we can't use the same technology — it was way too expensive," and so some new technologies have started to appear. None of them are fully affordable yet, but they're a huge improvement over what we finished the genome project with.
They're called "next-generation" technologies. The National Institutes of Health has started a grant program whose goal is to bring the cost down to $100,000 by the year 2009 followed by a parallel grant to bring it down to $1,000 by 2014. A lot of people feel $100,000 is going to come way before 2009, and nobody's really certain when the $1,000 will come — and it's not even certain that you need to bring it down to a thousand. It's also not certain that you need to have the entire genome sequence, if you just did the most interesting one percent. We may not know the most interesting one percent yet, but we could probably take a very good guess that it's the conserved parts, the parts that code for proteins, or some combination of those.
This can play out in many different ways. Right now most of the commercial effort is in bringing out instruments that could be used for research, diagnostics, and so forth. There are some companies that provide services where you can send in your cheek swab, and they will either give you genealogical information about who you are, who your ancestors were, or medical information if you happen to have one of the small set of genetic diseases which are understood well enough that there's action that you can take. They tend not to give you information of any other sort than genealogy and actionable medical — they won't tell you about things for which there's no cure or preventative. Then you're returned a report, which is in fairly plain English, telling you what to do next: contact your physician, or some companies have physicians on staff, like DNA Direct which acts as kind of a broker between the companies that provide the tests, like Myriad, and a bunch of the gene counselors, medical counselors. This is the "analytic" personal genome level.
Now let's talk about the "synthetic" level — the idea of redesigning living systems. To say a living system is synthetic doesn't necessarily mean it's not natural. People draw different lines in the sand about natural. Typically, it means that you have manipulated it and engineered it in some way.
Almost everything was living at one point: the whole world has turned over and even petroleum was alive at one point. But it's synthetic in the sense that you've highly altered it. It could even be living, in the sense that there are now organisms that are patented, ranging from the first patent on bacteria that would deal with oil spills, to mice that are research tools in cancer. They have synthetic components: you've changed their genome in some way that makes them more valuable than a naturally occurring mouse, or bacterium, or a plant.
The idea of synthetic biology is an outgrowth related to what's called "genetic engineering," which is a term from the '70s and '80s. But genetic engineering was not something that engineers would recognize as an engineering discipline. Engineers want to have well-defined parts, interchangeable parts, specification sheets, systems plans, et cetera. That isn't typically the way genetic engineering was done. It was done more by trial and error — more of a craft than an engineering discipline: quite scientific, but not engineering. This synthetic biology is the engineering version: just as there are large-scale integrated circuits in consumer electronics, there will be large-scale integrated genetic circuits that provide you with new ways of designing metabolic pathways to make drugs, to make biosensors, smart materials, hit cancers, and so on.
The new aspect is applying engineering disciplines — things like systems biology, which is also fairly new, and has new components. All these fields — almost every new field is a combination of two old fields. This is a kind of a union of engineering design principles that might be familiar to people in large-scale integrated circuits, combining that with genetic engineering, metabolic engineering, both of which are older — decades old, not ancient — and systems biology, which itself is a combination of feedback concepts, differential equations and so forth — those could be incorporated as well. There's also some bringing together of the chemistry and automation to make DNA — large highly accurate pieces of DNA — combining in concepts of laboratory evolution, which is relatively new. These things all meet together — kind of all these streams flowing together suddenly, all at once, into synthetic biology. Enough old things brought together into a new package that constitutes an invention, a new field.
I've been interested in math, computers, and biology and science as far back as I can remember. I was always trying to figure out ways that math and biology could go together. I did a lot of computer programming and independent chemistry and biology in high school, and then I completed college in two years at Duke University and went straight into graduate work where I continued research that I had started my sophomore (final) year of college.
I then proceeded to flunk out of the course work, but did OK at the research, publishing five papers in the same time-frame. As a consequence, I managed to go from flunking out of Duke to getting into Harvard graduate school. They were very accepting at Harvard, never even questioned the fact that I had flunked out. They knew it, but they didn't seem to worry about it.
Then I worked with Wally Gilbert. Part of the reason I wanted to join him was that he was developing a new DNA sequencing method and I had had this vision in my first graduate work, doing x-ray crystallography of the first folded nucleic acid, and typing in all the DNA and RNA sequences that were currently available (at the time it was a few thousand base pairs and today it is hundreds of billions).
At the time I thought, "gee, wouldn't it be neat to develop a new sequencing method". I learned molecular biology, kept doing computing on the side, and we developed a method called "genomic sequencing," starting around 1980. In a way the crystallography that I did was one of the first automated things in biology. No biologists really used computers for much of anything and I saw that as something that could be applied to other parts of biology; next was DNA sequencing and then protein analysis, then RNA analysis, and now synthetic biology.
The point is not just to bring the cost down; there's an intellectual challenge there of figuring out exactly what it is that humans are doing when they do something with craft. By teaching it to a computer, we get a deeper understanding of what it is that we're doing. That's one of the attractions for me of automation; that's why I did this whole series of automation.
Then I was in San Francisco for a little while, and helped start the genome project — that was in '84/'85. It didn't really get started until 1990, when I helped found three of the genome centers — one at MIT, one at Stanford, and one which was a company.
I started with the Department of Energy, of all things, in 1987, and with Wally we started a little company called Terabase — also around 1987 — and those two things were very intensely distressing to the National Institutes of Health. They said, we can't let the Department of Energy steal what could be the biggest prize in history by doing the genome project and we certainly don't want a company to start patenting all of our genes. That got them motivated in a way that nothing else could have and Jim Watson started raising money, went straight to Congress for it, and managed to get us the three billion dollars that we asked for.
It ramped up very quickly and we said we could do it in 15 years, which was a complete and utter projection — extrapolation beyond anything reasonable — it was a hundred thousand times more than anything that any of us had ever done before. Individually or collectively, really. How we had the chutzpah to do that is beyond me, but I was the youngest, so I was just following along the crumbs of my elders as they asked for (and got) vast sums of grant and private money, but I was leading in the technological sense. I didn't work with Jim Watson directly — he worked a lot with Wally and I've had interactions with him. He helped us get the money for the genome project.
I came back from my short post-doc at UCSF, San Francisco, and became assistant professor at Harvard in 1986, and then developed these other technologies to where we are now.
Today I am involved in a number of synthesis and sequencing endeavors. First, the BioFab group works together on 'constructive biology', which has a number of tightly overlapping parts of a Venn diagram.
There's IGEM, "International Genetically Engineered Machines" group, which is now in its fourth year, and has 39 universities involved. It's a very interesting social phenomenon; it involves wikis and a lot of undergraduates, 39 teams of 10 to 20 people each. It's amazingly intense and enjoyable — kind of like the robot competitions, or the DARPA Grand Challenges. They compete to make cool things during the summer, and some go year-round working on those cool things — engineering life.
Some of the people who started that group are also part of BioBrick Foundation, a non-profit, and a company called Codon Devices. So the founders of the field are defined by the intersection, or union, of those sets, depending how you look at it.
BioFab group is also a subset of the Codon Devices scientific advisory board. And that's a Cambridge company that does synthetic biology. We're distinct from IGEM and the BioBrick Foundation and other synthetic biology groups that are emerging.
To place where we are in the contemporary scheme of things, I can contrast our activities with those of other scientists whose work will be familiar to Edge readers, such as Neil Gershenfeld and Craig Venter.
What we do differs from Craig's ocean project, which is not technically synthetic biology — it's more the analytic side. It's the knowing who we are and what's out there. It's what you could call a fishing expedition, looking for stuff that's out there. You could consider that we could use that stuff in synthetic biology. But in practice, most of the synthetic biology groups that I work with, and that are productive, use parts that are far better characterized than the ones you get in the fishing expeditions.
But that doesn't mean that's always going to be the case, and all of us are very open-minded to getting new parts by homology. In other words, you take a part that everyone trusts and you find one that looks like it, maybe in the ocean. Then, if you've set everything up right, it's easy to pop in the new ones; they're like the interchangeable parts that Colt invented for guns because you pop in all these different things and see if they're a little bit better. Or you can let them evolve: pop them in, give them a bunch of options, and see which one they choose.
Craig's company, Synthetic Genomics, is closer to synthetic biology than his ocean cruise, in that they probably will use more-characterized parts. Their ultimate goal appears to be bio-energy, but initially they want to make a tiny living organism called Mycoplasma. I frankly don't see the connection between Mycoplasma and energy — it's not obviously that much easier to engineer Mycoplasma than more useful species, and in the end Mycoplasma is not biotechnologically significant, I think even by their own standards. But, it's good practice. The next thing they would like to do is to engineer organisms that are of practical significance for energy production — I think the main thing they've talked about is making hydrogen.
Codon Devices is a basic enabling company that does biological engineering in general. They could enable human pharmaceuticals, agriculture, veterinary, energy, etc. Energy would not be their sole play; in fact, they may even work with Craig's Synthetic Genomics company in some way.
I think the big chemical companies would be less inclined to redefine themselves as BioFab companies. There was a little bit of redefinition along those lines a few years ago — decades ago, even — when companies like Dupont said they were going to be getting more interested in biotechnology. But BioFab would tend to be younger, more nimble, academic and start-up companies. So it would be more like Amyris Biotechnologies, which is engineering metabolism so that they can make pharmaceuticals — for example, anti-malarial drugs. That would be a good example of an early BioFab success. Codon Devices, Synthetic Genomics; there really aren't that many companies that would embrace the BioFab vision right now.
In terms of customers and products, if it's a truly general BioFab lab, then they should represent all of the applications of biology in the same sense that a general software company like Microsoft, when it was a baby, took all comers that dealt with zeros and ones. Whether you wanted traffic data, or spreadsheets, or word processing, or games, they'd do it all, and they'd low-ball the price. Or, an electronics manufacturer might try to get every kind of electronics, from RAM to CPU to cell phones, if they could.
That's what I think Codon wants to do; they want to be the Intel or the Microsoft of biology, and not directly compete with all their customers. They wouldn't necessarily be an energy company, but they would work with an energy company. They wouldn't be a pharmaceutical company, but they'd work with one or more of those to build sensors, diagnostics, screens for drugs, new ways of making drugs, chiral compounds, etc. All that stuff is enabled by synthetic biology.
Let's compare where we are in building life in terms of the recursiveness that happened in the electronics and software industries. In a way, we're back in the '50s; in a way we're at the turn of the 21st century; and in a way we're in the future relative to electronics.
It's like we're in the '50s in the sense that we're just beginning to get comfortable with our parts, the way in the '50s they were getting comfortable with transistors and capacitors and what-not. We're very primitive. But then, we're also kind of even with the electronics-computing industry in that we have some of the amazing recursiveness going already that they didn't get until recently, i.e., computer-aided design (CAD), where you use computers to design electronics, and compilers to help make compilers.
We can leverage off of that right away because we can use computers to help us build biology and we can use biology to help us build biology. So in that way, we're already 21st century. So we're at 1950 or at 2006, but we're also way ahead of electronics in the sense that we can do evolution, which is something they really can't do.
BioFab can put a trillion designed objects in competition with one another; they're all slight variations on a theme, where we've engineered them to be as close as possible to our desired object, but we're humble, in a certain sense, and say, "we can't get it right perfectly; let's make ten to the 12th of them and see who wins." Imagine making a trillion laptops and seeing which one wins: it's mind-boggling. We can do that with biology, and that's where we're way ahead of our electronic-Fab brethren manufacturers. And of course consumer electronics is way ahead of almost everything else. So to be ahead of them is quite an achievement. But it's very far from fully tapped: the practitioners of lab evolution are still very few.
This is different from what Chris Langton was doing 10 or 15 years ago at the Santa Fe Institute. What he was doing on artificial life is analogous to "artificial computers": he was simulating on his abacus what a computer could do, or even simulating on his computer what a computer could do. Maybe it's useful in an intellectual sense that you formalize and you can discuss and explore, but it isn't necessarily faster, it isn't necessarily cheaper, and it isn't necessarily faithful.
But with life, you know that life makes stuff that's useful: humans are useful, fabrics, pharmaceuticals, much of our building materials either were, are, or could be made by a living thing. Synthetic life is different from artificial life: artificial life is an attempt to simulate it under the possibility that a computer could be faster than a living thing.
There are knowledgeable people — I'm not sure exactly where I stand on this — who feel that something as complicated as a living system might actually be a very compact representation of itself. And if you want to do computing, it might actually make more sense to do computing by building a living thing than to simulate it. Or you can use one living thing to simulate another living thing. Et cetera. I'm very excited about building actual living things because you can make a trillion of them; it would be hard to build 10 to the 12th full-scale simulations of a cell on a computer, but I can build them in the lab in an afternoon.
The question — "who are we to make life?" — is a great question. I've asked similar awkward questions about where biosecurity is going, and so forth. In terms of genetically modified organisms, I sympathize with the Europeans who wonder why take a risk when nothing's broken? I go to the supermarket, I buy organic foods; why fix that? There is a risk. It may be small, but why take it? Most Europeans are happy, and they don't want to take that risk.
But it's a different economics, say, in a drought zone of the world. They need drought-resistant plants, and that can come from genetic engineering, where you take a gene from any of 80 different species of resurrection plants and put it into a viable crop — and you haven't threatened the world. Everything has risks and tradeoffs, and in their world, that's a huge benefit; it will allow them to avoid some deaths. In Europe they may be quite correct that it may or may not cause death, but it isn't going to help their quality of life. That was a miscalculation on the part of the agricultural industry, at least in Europe; they may have calculated it correctly for America and Africa and so forth. And they may not have cared what the consequence was in Europe. I think they should have cared. More likely, they just miscalculated the impact socially, and they could have tried to get more buy-in and so forth. They actually had some good ideas, like terminators. Who are we to continue to do things like that? Many things can be done entirely in a laboratory or factory, where you export only non-living products, where it's sealed in and it's safe, and where you don't use pollen, or keep the pollen from getting into allergic people's respiratory systems — which is one of the scary scenarios. It's like any new industry: you need to address both the psycho-social and technological safety issues. If you don't, you're not being a whole person — you're not being a whole scientist, you're just being a narrow scientist who says, "hey, I can make it; I will."
In terms of threats, let's put aside nanotech and robots for the moment; there are two more realistic threats, which are computer viruses and bio-terrorism. Those already exist. Nanotech and robots don't exist, unless you call a computer a robot — they're not developed enough to be a threat.
Regular computers and regular pathogens — engineered versions — will probably continue to be a threat for quite awhile. The problem that Ray Kurzweil and Bill Joy and Martin Rees have all recognized is that we are enabling smaller and smaller efforts to have larger and larger leverage, of a negative as well as positive nature.
There is no perfect solution, but a partial solution is to have more surveillance of whatever you can monitor: where the expertise is going, and where the materials required for practice are going. And to discourage any kind of negative use and encourage positive uses in every way you can, as a top societal priority, not something you just give lip service to for a microsecond in some Congressional session. Whatever it takes.
For example, in IGEM — this little undergraduate competition we run every summer which now involves 39 universities world-wide — we could have said, this could cause bio-war or robot-war or bug-war or something. We chose not to go that route because that implants in the young mind that that's what biology and synthetic biology is about.
Instead, we have purposefully gone the route of encouraging cool designs that are productive and useful, and we have sessions on safety and responsible citizenship in all the IGEM sessions — for the teachers, for the students, for everything. That's an example of how you can tip the mindset collectively a little bit — it's not perfect.
I've put out proposals for how we do bio-security, where we monitor the flow, in the same sense that we monitor drug traffic. It might be even easier to monitor the flow of synthetic DNA all the way from chemicals, which are very distinctive and have their own signatures; instruments, which are distinctive; and DNA, which can only be ordered from a limited number of companies, etc. If you require licenses for them to interoperate, then anybody who operates outside of that is already revealing that they're up to something.
That's a particular kind of nitty-gritty implementation aspect of how you turn genetic engineering into a real engineering discipline. In a way it should have been the other way around: engineering should be in the title "synthetic biology," because the bio brick idea is that you have a part which is interoperable with other parts; these define biochemical and genetic elements that you can kind of plug and play.
You can figure each one of them has a little USB port so you can plug them into your chassis. The chassis might be a common bio-tech organism like E. coli. The chassis itself might be engineered to be really easy to plug in these bio bricks. Another analogy is to Legos: they have a defined spacing of knobs and holes that the knobs fit into — it's like almost any industry standard: the AC plugs, the light bulb screws, standard screws, et cetera. You have to have some standards so that they can plug and play without thinking too much, without handcrafting each part.
Bricks are a perfect example: you could build a house out of a branch here, a stone there, a piece of mud, some old plants, et cetera. Or you could say, hey, there are all these bricks of the same size, they're manufactured in a central facility, and they'd be lined up in parallel, in layers, staggered, with these specifications. This is a completely different mindset for building a house than taking all the trash in the neighborhood and piling it up against a tree. That's what bio bricks are about.
Rather than randomly saying, "hey, this is cool, gee, let's go follow that butterfly for awhile," you say, "get a collection of all the parts we trust, let's put them in a catalog, let's give them names, numbers, a specification sheet for each, and how you plug it into another part."
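That cataloging mindset can be sketched in a few lines of Python. This is a toy illustration of the idea, not an actual BioBricks API — the part numbers, field names, and composition rule here are all made up for the sketch: each part gets a catalog number, a name, and a one-line spec sheet, and all parts share one uniform way of plugging together.

```python
# Toy sketch of a "catalog of standard parts" (hypothetical names,
# not a real BioBricks registry interface).
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    number: str   # catalog number, e.g. "P001" (made up for this sketch)
    name: str
    spec: str     # one-line specification sheet

def compose(*parts: Part) -> str:
    """'Plug' parts together in order, like bricks in a wall."""
    return "-".join(p.number for p in parts)

promoter = Part("P001", "strong promoter", "drives transcription")
rbs      = Part("R001", "ribosome binding site", "sets translation rate")
gene     = Part("G001", "reporter gene", "produces visible output")

print(compose(promoter, rbs, gene))  # P001-R001-G001
```

The point of the sketch is the interface: because every part exposes the same fields and the same composition rule, assembling a design becomes catalog lookup plus plug-and-play rather than handcrafting each piece.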
Neil Gershenfeld at MIT's Fab Lab is interested in enabling students, and people in general, worldwide, to freely invent and make whatever they like, to develop an idea that is personal. You don't have to have a market of a million people; you can have a market of one. The tools they use in his Fab Lab are standard physical prototyping tools: milling machines, photolithography, stamping machines, etc. And they take physics and chemistry and basically make it fairly high-throughput, fairly high-density if needed — high resolution, around the 20-to-1000-nanometer scale.
A BioFab Lab would take some of that, like the robots and so forth, but work instead at the sub-nanometer scale, via bio-nanotechnology. In other words, you can engineer individual atoms by changing individual base pairs in DNA. Because you know the whole genetic code, changing a small number of atoms — say, an A to a G — changes a codon, which changes a protein, which changes its functionality, say that of an enzyme that manipulates additional atoms. You can basically engineer complex systems one atom at a time, with atomic precision.
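That codon logic can be shown concretely. The sketch below is a toy illustration using a tiny subset of the standard genetic code (not any real design tool): flipping a single base, one A to a G, changes the codon and therefore the amino acid the protein gets at that position.

```python
# Toy illustration: a single-base change alters a codon and hence
# the encoded amino acid. Small subset of the standard genetic code.
CODON_TABLE = {
    "AAA": "Lys",  # lysine
    "GAA": "Glu",  # glutamate
    "GGC": "Gly",  # glycine
}

def translate(dna: str) -> list[str]:
    """Translate a DNA coding sequence, three bases at a time."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

original = "AAAGGC"
edited   = "GAAGGC"  # first base flipped: A -> G

print(translate(original))  # ['Lys', 'Gly']
print(translate(edited))    # ['Glu', 'Gly']
```

One atom-scale edit in the DNA, and the protein carries glutamate where it carried lysine — which is the lever that lets DNA edits propagate up to changed enzyme function.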
Unlike the standard physical-chemical Fab Lab, your BioFab lab — your chassis — can make a copy of itself. Once you have a really great engineered biological system, you can make as many copies of it as you want: you can scale it up… (it does it itself; it's self-assembling). It's a dream of the physical-chemical Fab Labs to someday make, say, a milling machine that could make a copy of itself. That would be great. Then they'd have a self-replicating machine; that would be a milestone. Right now, in order to make all the parts in Neil's lab, you need to go to a bunch of other labs, scattered all around the world: one might make drill bits, one might purify copper from the dirt, et cetera. Not just a whole village, but almost the whole world is in a certain sense supplying that Fab Lab.
Where is it going? Are we playing God? And who is playing? So far, the OpEd page editors and writers have been very nice to me, maybe because before I entered both synthetic biology and personal genomics, I did something a little bit out-of-the-box in each case that was aimed at the ethical, legal and social aspects. In personal genomics, I initiated an effort to prepare us for new consenting mechanisms, and in synthetic biology, I have helped come up with practical ways of discouraging and monitoring potential bio-terrorist use. Neither of those was totally obvious, but once you state them they seem useful.
A remarkable one-third of the Synthetic Biology 2.0 meeting in May, the entire Sloan Foundation study, and various wikis have been dedicated to including a broad international community of ethicists, anthropologists, legal, government, and citizen groups to see what should be done and how. This has already seen concrete action in the form of an international commercial consortium of DNA-construction companies helping to establish a system of open surveillance that could form a working template for community, economic, and international-treaty action. Other suggestions are welcomed.
Once again, where is it all going? I could paint slightly positive and slightly negative ways that it could be going. The most positive is that we should be able to manufacture whatever we want, possibly on our desktop. Biological organisms can make just about anything, but they usually make it under their own program, which has been selected for reproduction, not for arbitrary human production tasks.
Computers can program Fab Labs — physical-chemical Fab Labs — to do things. Computers can program biological systems, either by programming their DNA or by influencing their real-time environments. But full, general BioFab capabilities are just where we're headed, and that would be the sort of vision statement that might be worthy of an editorial or some discussion somewhere.
What is the path from here to general fabrication? Is that creating too much power in the hands of individuals, to be able to create whatever physical thing they want? (Right now they can create pretty much any software they want, and that has risks; some people estimate up to a trillion dollars a year is lost in some way to hackers and viruses and spam and whatnot.) We have been greatly empowered computationally, without much discussion in advance of whether we should or shouldn't be. There's been considerably more discussion in advance about whether we should do recombinant DNA or gene therapy or genetically modified organisms.
We're making it easier for people to make anything. They can make good things, they can make bad things, and if we're going there, we're going there very fast, at alarming exponential rates — and most people are not competent at projecting exponentials. Maybe people like Ray Kurzweil and you and me get it, but even for us, there's not really an engineering discipline of exponential technology.
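To see why linear intuition fails at projecting exponentials, a three-line calculation is enough. The numbers here are purely illustrative — the 18-month doubling time is an assumption for the sketch, not a measurement of any specific technology:

```python
# Illustrative only: a capability that doubles every 18 months
# (an assumed rate, not a measured one) grows ~100x in a decade,
# where linear intuition might guess ~7x.
def growth_factor(years: float, doubling_time_years: float = 1.5) -> float:
    """How many times larger an exponentially growing quantity
    becomes after the given number of years."""
    return 2 ** (years / doubling_time_years)

print(f"{growth_factor(10):.1f}x")  # 101.6x after ten years
```

Linear extrapolation from the first couple of years would miss that outcome by more than an order of magnitude — which is the gap an "engineering discipline of exponential technology" would have to close.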
I think the Internet could have been done better. It's easy to say with 20/20 hindsight, but you could have made it either much more 1984-ish, where some benevolent or non-benevolent dictator is watching it, or it could at least have been made so that you know who's at the other end of the line so when somebody sends you email it can't be just some spoofer. That could have been engineered from the beginning. It would have been hard, but it could have been done, and it wasn't. And we're not going to fix that in this conversation.
We're acting as engineers, possibly as intelligent designers. The religiously inclined would not put humans in the same league with the "Intelligent Designer," or God. As creative as we become, and as industrious and as good as we are at designing and manufacturing living things, which we've been doing since the stone age — no matter how good we get at that, it's like calling a candle a supernova. A candle is not a supernova; it's not even in the same league. And we, as intelligent designers, are not in the same league as the "Intelligent Designer" that started the whole shebang. We're not designing sub-atomic particles from scratch; we're not designing galaxies. We're really not even designing the basic idea of life; we're just manipulating it.
We seem to be "designed" by nature to be good designers. In that sense we're part of some huge recursive design, but we're not doing something we're not designed (and microevolved) to do. Engineering is one of the main things that humans do well. This is just the same as making flints and rolling rocks and wheels and fire and all the rest. It's just what we do, and it's natural. But we may also be exceptionally good at free will and emotional and social intelligence, allowing us to overcome parts of our nature to enhance our long-term survival, quality of life, and transcendent ideas.
Are we learning to manipulate life or is life learning to manipulate us?
A plausible scenario for how we arrived at life as we know it is that primitive organisms were infected by self-replicating parasites, learned to adopt those self-replicating processes, and became eukaryotic cells. Now, our still-primitive life-forms have again been invaded by self-replicating parasites (a network of code-consuming and code-spewing microprocessors) and life will, once again, adopt these self-replicating processes, on its own terms, for its own ends (with our help). Life (and evolution) as we know it will never be the same.