Soul of a Molecular Machine


I have several kinds of questions I’m asking. First of all, they’re questions related to my own work on the ribosome. As you know, the ribosome is this large molecular machine that reads our genes and makes proteins. One of the big breakthroughs about fifteen years ago was seeing the atomic details of this machine for the first time. Fifty years after it was discovered, people finally had a first glimpse of what the ribosome looks like in terms of where its atoms are and its chemistry. Over the last ten or fifteen years, this led to lots of work on how the ribosome might work.

Now there are new sets of questions. How are ribosomes assembled and how are they regulated? Here is this complicated machine, how does a cell put it together and make new ribosomes? The ribosome actually makes part of itself, which is to say that a ribosome makes proteins while also being made up of lots of proteins. It’s one of these partially self-assembling machines, but it also requires other things.

Another thing is how the cell regulates it: Sometimes the cell wants to suddenly start making proteins or suddenly turn things off, and it does this by regulating the ribosome at different steps. If it's not regulated properly, it can lead to diseases like cancer. In fact, many genetic diseases map to defects in ribosomes, including mitochondrial ribosomes. Viruses like hepatitis C can also hijack the ribosome to stop it from making the cell’s proteins and make the virus’s own proteins instead. How do these viruses hijack the translational machinery? These are all technical questions that I’m grappling with, which you can think of as natural extensions of past work on the structure of the ribosome.

It's ironic that I became well known for determining some of the first atomic structures of large pieces of the ribosome, ribosomal subunits and then the whole ribosome, using a technique that’s about 100 years old, which is X-ray crystallography. Even the macromolecular version of it, which was developed by Max Perutz and John Kendrew at the famous MRC Laboratory of Molecular Biology—in Cambridge, where I work—was done from the early 50s to the early 60s. That work was already forty to fifty years old when we cracked the ribosome structures. The irony is that I became known for using this fairly established technique, but stretching it to its limits to solve something that was a million atoms. But no one would solve the ribosome that way today.

At the same lab where I work, which pioneered X-ray crystallography of proteins, there’s another technique that’s also being developed—it's being developed worldwide, but my lab has spearheaded a lot of the advances—which is looking at molecules by electron microscopy, without any crystals. This requires a thousand times less material, and doesn’t require crystals or even purity of the sample. It’s allowing us to get snapshots of the ribosome in all kinds of states, and to get ribosomes from humans, from mitochondria, and all sorts of things.

If someone were to describe what I became well known for, it would be like describing someone who developed the Betamax. There’s a famous book on computers called Soul of a New Machine by Tracy Kidder, which is about a Data General computer that was going to be a step above anything that existed at the time. Ironically, that computer never really took off. A few years after it came out, it was superseded by the VAX series of computers. This is a little bit like that: just 15 years after the first ribosome structures were cracked by crystallography, now no one would use crystallography to do it. I'm not going to give back the Nobel, of course, because the Nobel was not for using crystallography, it was for the discovery of the atomic structure of the ribosome and the functional implications of it.

There are broader aspects that I think about now, partly because a year ago I became president of the Royal Society. That has led me to think about science in a broader context. There are a few things that I worry about, one of which is that science has always succeeded because it's evidence-based, which has led to public trust. The public believes that when scientists say something, it's based on hard evidence that they've looked at critically. More importantly, when one scientist claims something based on evidence, other scientists—their competitors—check it out carefully because they don't want to let someone get away with something if it isn't sound. That's led to an enormous trust in scientists.

If you look at public opinion polls, scientists are among the most trusted professions, certainly they are in the UK and probably in the US as well. But we're getting to a stage where that's at risk for a variety of reasons. Some of them are technical reasons and some of them are cultural reasons. I'll get to the technical part first.

We're now accumulating data at an incredible rate. I mentioned electron microscopy to study the ribosome—each experiment generates several terabytes of data, which is then massaged, analyzed, and reduced, and finally you get a structure. At least in this data analysis, we believe we know what's happening. We know what the programs are doing, we know what the algorithms are, we know how they come up with the result, and so we feel that intellectually we understand the result. What is now happening in a lot of fields is that you have machine learning, where computers are essentially taught to recognize patterns with deep neural networks. They're formulating rules based on patterns. There are statistical algorithms that allow them to give weights to various things, and eventually they come up with conclusions.
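To make "giving weights to various things" concrete, here is a toy sketch of a single artificial neuron, the unit that deep networks stack by the millions. This is purely illustrative, not any real system's code, and the weight values below are hypothetical:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input, sum, then threshold.
    In a trained network, the learned weights are the 'rules' no human
    ever wrote down explicitly."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Hypothetical weights that happen to implement a logical AND of two inputs:
weights, bias = [1.0, 1.0], -1.5
print([neuron([a, b], weights, bias)
       for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

A real deep network chains millions of such units, and the conclusions emerge from the combined weights rather than from any rule a person could read off.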

When they come up with these conclusions, we have no idea how; we just know the general process. If there's a relationship, we don't understand that relationship in the same way that we would if we came up with it ourselves or came up with it based on an intellectual algorithm. So we're in a situation where we're asking, how do we understand results that come from this analysis? This is going to happen more and more as datasets get bigger, as we have genome-wide studies, population studies, and all sorts of things.

There are so many large-scale problems dependent on large datasets that we're getting more divorced from the data. There's this intermediary doing the analysis for us. To me, that is a change in our way of understanding it. When someone asks how we know, we say that the system analyzed it and came up with these relationships—maybe it means this or maybe it means that. That is philosophically slightly different from the way we've been doing it.

The other reason to worry is a cultural reason. The Internet and the World Wide Web have been a tremendous boon to scientists. It's made communication far easier among scientists. It's in many ways leveled the playing field.

I remember when I grew up in India, if you wanted to get a book, it would show up six months or a year after it had already come out in the West, sometimes two years. Journals would arrive by surface mail a few months later. I didn't have to deal with it because I left India when I was nineteen, but I know Indian scientists had to deal with it. Today, they have access to information at the click of a button. More importantly, they have access to lectures. They can listen to Richard Feynman. That would have been a dream of mine when I was growing up. They can just watch Richard Feynman on the Web. That's a big leveling in the field.

Along with the benefits, what has happened is a huge amount of noise. You have all of these people spouting pseudoscientific jargon and pushing their own ideas as if they were science. They couch all their stuff in technical jargon. They talk about energy and negative energy. Well, what does negative energy mean? Energy has a very precise definition to a chemist or a physicist. These guys are using it in some mumbo-jumbo way, but it sounds scientific. Scientists are very busy, and our science has become so technical that it's a real effort to communicate it in an accessible way to the public. The public is bombarded with all this information, so who do we believe?

The reality is, even in science there are lots of experiments that are wrong. For example, there was a paper published claiming a link between the MMR vaccine and autism, which has since been widely discredited by subsequent studies. I remember when I was in the US, there was this big thing about electromagnetic radiation and these high-voltage lines causing cancer. Of course, when people studied it further, the effect just went away. As soon as they gathered enough data, they found there was no effect.

The first dramatic study always gets a lot of press. The subsequent studies that clean it up and show that there isn't a problem don't get the press. The public then has a skewed view of what is scientific and what isn't. They'll say, "Well, that was published in a journal, too." But you have to consider the bulk of the evidence, not just one outlier paper. This is becoming harder and harder for the average non-scientist, or even the average scientist outside their own field.

How do we as a science community grapple with this and communicate to the public a sense of what science is about, what is reliable in science, what is uncertain in science, and what is just plain wrong in science? How do we live with uncertainty? Scientists live with uncertainty. We know that no matter how confident we are in our theories, it is possible that we're wrong, that our ideas may be wrong, and we always have to be prepared for that. That isn't to say that our ideas lack merit and that they shouldn't be taken seriously.

This is a problem in many fields. Climate change, for example, is a classic field where uncertainties in the consensus opinion are pounced on by people who don't like the idea of climate change and therefore oppose it. These are real long-term issues that we need to grapple with.

~ ~ ~ ~ 

I do admire what machine learning has accomplished. If you had told me computers would be doing some of the amazing things that they do now—beating the World Go Champion, for example—that’s incredible and I would never have predicted it. But going from there to the general hype about developing a general intelligence machine that will think like a human and develop consciousness still to me smacks of science fiction.

Partly, it's because we don't understand the brain at that level of detail. Take a simple question: How do we remember a phone number? It seems like a very simple question, but there are all sorts of things to consider in relation to that question. How do we store a number? How do we know it is a number? How do we associate it with a person, a name for that person? How do we recall it and associate all the different characteristics that go with that number? That's an amazing problem that has everything from high-level cognition and memory and recall, to how a cell stores information, to how neurons interact.

These guys are underestimating the billions of years of evolution that eventually resulted in the human brain. Each of these fields needs to make more progress before they can agree on a common framework of attack. We're going to see machines do all sorts of interesting things, from driving cars and extracting patterns from large datasets to playing games very intelligently. But it's not going to be the same sort of thing.

We tend to be anthropomorphic. When we see machines doing things that we used to think only humans could do, like play chess or play Go, we suddenly make that leap into this science fiction realm that they're going to take over. They're going to do useful things, tedious things that we really don't want to do or incredibly complex tasks that they're suitable for, but they're not going to be a replacement for human thought and human vision.

We need to progress along with machine learning. It's a very exciting field. If I were twenty years old now, that's a field that would seriously interest me. I'm not so concerned about these android or robotic scenarios where the computers take over.

The machine-learning and deep-learning crowd that is working on making computers do more advanced things with a view of developing some sort of artificial intelligence—however you care to define that—is really zooming along, especially as a result of all these machine-learning algorithms. At the same time, neurobiology has really taken off. All sorts of tools have been developed to watch which neurons are firing and genetically manipulate them and see what's happening in real time with inputs. There have been a lot of advances in molecular biology and neurobiology. There's a big neuroscience initiative, almost like the moon landing initiative, to see if we can crack this hard problem.

Both of these fields need to progress much more and then need to talk to each other. I have to say, I'm more in line with Daniel Dennett's view, which is to say that we don't understand the complexity of the evolved human brain enough and how amazingly general it is. We had to anticipate all sorts of unexpected things in order to survive, and that's what it evolved for.

We tend to be very anthropomorphic. But if we step back and look at life and what makes life tick, humans are one species and we're having a big effect on the planet. But if we're going to be taken over at some point, it will be by things that will always be thriving and existing on earth, like bacteria. Bacteria can live in anything from the Arctic to vents that are over 100 degrees Celsius in acid environments, acid that would melt you or me. We have to put it in a broader context when we ask where we're headed. We don't live in a vacuum.

~ ~ ~ ~

When I graduated from physics in India, I was bent on becoming a theoretical physicist, but I was only nineteen and I hadn't taken the GREs. I applied to a bunch of schools, and the school that would accept me and give me a fellowship without a GRE was Ohio University. It was a decent university, but it's not one of the big research centers in the US. It's in a small town in southeastern Ohio. I settled down there and started doing physics, but it became very clear to me that if I continued in physics, I'd end up doing a bunch of boring calculations that really wouldn't advance anything very much.

At the same time, biology looked to me like it was progressing by leaps and bounds. Every issue of Scientific American seemed to have a major discovery. It also seemed that these discoveries weren't by big genius types; they were smart people doing good science, and they would be making fundamental contributions. I knew of several famous physicists who had gone into biology and made a big impact, like Francis Crick or Max Delbrück, and I thought that maybe I should switch.

I started off in graduate school again after a PhD in physics, and I was even taking undergraduate courses because I didn't know any biology. I was in a class full of premeds who were worried about whether they were getting a 98 or 97 on their exams. I was just there to learn, so it was a strange experience. That was at the University of California in San Diego. Then I saw an article in Scientific American on getting at the structure of the ribosome.

Everybody's heard of DNA, but nobody's heard of the ribosome. It's the strangest thing that, even today, very few non-biologists have heard of the ribosome. Yet, it's a much older molecule than DNA. It's a molecule that makes almost everything in the cell. Either it makes it directly or it makes the molecules that make the other molecules. In a sense, it's the mother of all molecules, and it came out of an ancient RNA world before there was probably a genetic code, let alone DNA.

I had learned all this when I was in my second attempt at graduate school. So when there was this article in Scientific American about chipping away at the ribosome, I wrote to these professors at Yale, Don Engelman and Peter Moore. I ended up working for Peter Moore, who was at the ribosome end of that duo. They were using a physical technique called neutron scattering to look at the ribosome.

You might ask, "Why bother looking at it?" Well, if you look at the ribosome, it's this enormous molecule. It has about a million atoms and here it is reading the genetic message that has been copied from DNA, which is a double-stranded molecule, to a single-stranded molecule called messenger RNA.

For each gene a section of DNA is copied to make this messenger RNA, and then the ribosome basically takes this genetic information and reads through it like a ticker tape. It's reading the information, and based on that information it's putting together a protein chain with exactly the right order of amino acids, because each set of three bases on the DNA or the RNA specifies a particular amino acid. It has to know when to start and when to stop, and the cell regulates it.
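The ticker-tape reading described above can be sketched in a few lines of code. This is a toy illustration, not biological software; the codon table below is only a small subset of the real standard genetic code:

```python
# A few entries of the standard genetic code: each three-base codon
# in the mRNA specifies one amino acid (or a stop signal).
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read the mRNA like a ticker tape, three bases at a time,
    beginning at the first AUG and ending at a stop codon."""
    start = mrna.find("AUG")
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("GGAUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

The real machine does this with a million atoms, proofreading as it goes; the lookup table is the easy part.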

So how does it all work? People realized they were going to hit a brick wall, and many of the early pioneers in the field just left the field. Jim Watson, who was Peter Moore's PhD advisor, stopped working on ribosomes. Eventually he stopped doing his own science and became a director of Cold Spring Harbor. Other people who were in the field also left. But a small group of people persisted, and the reason they persisted is they wanted to understand how it works.

You can't understand how a large machine works if you have no idea what it even looks like. It's like understanding a car without having any idea of how it's put together and how the engine connects with the pistons, and the crankshaft, and the gearbox, and the wheels, and the steering. It's complicated. Otherwise, all you know is that you put in gasoline and out comes carbon dioxide and water, and somehow the thing moves. That's not an understanding of a car.

I went to Peter Moore's lab, and we were trying to map where these things were. There were two problems we ran into. One was that the level of detail wasn't at the chemistry level, where you could figure out how the ribosome did all these things like recognizing the code, joining up the amino acids into a protein, and moving. The second problem was that it was focusing on proteins. The ribosome is a weird beast. It makes every protein in the cell, in every form of life, and yet is itself made of lots of proteins.

How did the ribosome get started? The first clue to this came from Francis Crick, who said maybe early ribosomes were made up entirely of just RNA, and then proteins got made and some of the proteins stuck to it, and it evolved into this bigger protein-RNA machine, but maybe it started off as just an RNA machine. That idea is probably correct. One of the things that came out of the structure was that the important functional parts of the ribosome were made up entirely of RNA. You can think of the ribosome as having a fundamental ancient core consisting just of RNA, and then these proteins that the ribosome made got added.

Of course, we were trying to map where all the proteins were. In hindsight, it turns out that the action is in the RNA core. Even if we had had a high-resolution structure of those proteins and an approximate location, it wouldn't have told us how the ribosome worked. To do that, you needed a structure of the whole thing. That was thought to be almost impossible because it was much bigger than anything that had been solved by crystallography. Then, a breakthrough was made in Germany by Ada Yonath and Heinz-Günter Wittmann, who reported the first real crystals of any subunit of the ribosomes.

The ribosome has two halves that ratchet to read through this ticker tape of messenger RNA and move along it while they're making a protein. You can think of it as a ratcheting machine or a ratcheting ticker tape reader. They had produced crystals of the large subunit, but they weren't of the quality that could give you an atomic structure, even if the technology had been around.

Then, a group in Russia realized that ribosomes from a different organism, an extreme thermophile called Thermus thermophilus that was discovered in the hot springs of Japan's Izu Peninsula, could crystallize, and they got crystals of the small subunit as well as the whole ribosome. But none of these crystals diffracted very well.

For a long time, people kept at it. A second breakthrough came when Ada Yonath's group obtained crystals of the large subunit that did go to sufficiently high resolution that if you were able to solve it, you would be able to build an atomic model. That means crystals that were sufficiently well ordered, that is, all the molecules sat in very close to the same orientation so that when you averaged them all out, there wasn't too much blurring, and you could get high resolution. That was a milestone. But for a long time after that, there was almost no progress towards an actual structure, and people in the field got frustrated.

I had done a sabbatical to learn crystallography. I was at Brookhaven doing neutron scattering and realized that neutron scattering wasn't ever going to be a powerful general technique in biology, so I almost had to change fields again, at least change techniques. I went away on sabbatical to Cambridge to learn crystallography. When I came back, I had an idea that using synchrotrons and the fact that you could fine-tune the wavelengths of X-rays in the synchrotron around the properties of certain special atoms, you could extract enough signal, even from something as large as a ribosome, to solve it.

I didn't want to work on the large subunit because I felt the field would jump on me if I took Ada's crystals and started working with it. I started focusing on the small subunit, for which at the time there weren't any good crystals, as far as I knew. But because she had been at it for a while and gave talks at a couple of meetings—one in Victoria in 1995 and one in Seattle in 1996—the field had come to the conclusion that she was not making progress. They felt that she had reached a block and that others would have to come in with new ideas if the field was going to move forward.

Even though I didn't want to go head-to-head with her, others like Peter Moore and Tom Steitz felt that if she wasn't going to do it, somebody else should. They took those crystals as a starting point and tried to figure out how to phase it, which is how to get the information from the X-ray diffraction data to be able to solve the structure. I thought it was going to be this race between them and Ada for the large subunit, and I would have the small subunit to myself. The small subunit was a starting point. I thought if I could do that, it would be a good time to then attack the whole ribosome.

It turned out to be this mad race, because at some point Ada—maybe she felt she was losing ground to the Yale group—switched her attention to the small subunit. Instead of those two being in a race with each other, I found myself in a race with Ada for the structure of the small subunit. In the end we ended up with a more complete high-resolution structure, but the structures of the small subunit from Ada's group and mine, and that of the large subunit from Yale, were all published within a month of each other in 2000. That was a big breakthrough.

A year later, a Russian duo—a husband and wife couple named Marat Yusupov and Gulnara Yusupova—went to Harry Noller's lab where they reproduced the crystals of the whole ribosome. Of course, they couldn't have solved it to atomic resolution at that time because those crystals just weren't good enough. But because they had the structure of both subunits from the Yale group and from us, they were able to slot them in and arrive at a molecular structure of the whole ribosome. Then we had a whole ribosome quasi-molecular structure.                                 

Then the field moved on. We had to get accurate structures of the whole ribosome and of the ribosome doing different things: at the start of translation, in the act of accepting a tRNA, in the act of moving along the mRNA, and in the act of termination, which means recognizing when it's reached a stop; then a special protein comes in, recognizes the end, and chops off the newly made protein so it can liberate itself from the ribosome and go off and do its thing. All of those things took lots and lots of time. Each of those was a multiyear project that went from 2000 until a few years ago.

About four years ago a new technique came online: single-particle electron microscopy. This is a technique for getting three-dimensional structures by looking at randomly oriented particles in the electron microscope, without crystals. The technique had been around for quite a long time. It had been used for viruses in the 1970s at the lab where I work, by Tony Crowther and Aaron Klug. Klug, who used to be the director, also got a Nobel Prize; interestingly, he was also president of the Royal Society once.

They had developed this technique, but it wasn't used for particles without symmetry. Viruses have a lot of symmetry, and the signal-to-noise problem is a lot easier for viruses. People like Joachim Frank, who's now at Columbia, and Marin van Heel, who's in Holland, developed techniques for using this method even with particles without any symmetry, like the ribosome. But the resolution they could get was pretty limited.

We used to scathingly refer to them as blobologists, because they just saw a bunch of blobs. There's no way you could deduce an atomic structure from scratch. Of course, once we had solved the structure by crystallography, they could put the atomic structure into their blobs and they could say, "This blob is this protein, and this blob is this piece of RNA." But that's not like solving a high-resolution structure from scratch.

But a number of things changed. One of them was that Richard Henderson, who used to be the director of my lab and who hired me, realized, even in the 90s, that it was going to be possible to get an atomic structure just by using electron microscopy without crystals. He realized that the problem was, a) the microscopes of the time weren't good enough, and b) the detectors were really not good enough and they were too slow. He spent a lot of time working by himself and in conjunction with other labs and commercial companies to develop better microscopes and better detectors.

Then there are people who developed better algorithms. You mentioned Bayesian algorithms for machine learning. Bayesian algorithms are extremely useful for analyzing the very noisy data from electron micrographs. If you looked at one of these pictures of a ribosome, it is so noisy. It is still amazing to me that you can get an atomic structure from something like that, and yet you can. All of these techniques, both the hardware and software, kept on improving. And then a few years ago there was an amazing breakthrough.
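A toy illustration of why the structure can emerge from such noisy images: if many aligned copies of the same particle are averaged, the random noise cancels roughly as the square root of the number of images. This sketch uses a 1-D signal standing in for a projection image and is not the actual reconstruction software:

```python
import random

random.seed(0)

# A toy "particle": a 1-D signal standing in for one projection image.
true_signal = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]

def noisy_copy(signal, noise_sd=3.0):
    """Simulate one particle picked from a micrograph: signal buried in noise."""
    return [s + random.gauss(0.0, noise_sd) for s in signal]

def average(copies):
    """Averaging N aligned noisy copies shrinks the noise roughly by sqrt(N)."""
    return [sum(vals) / len(copies) for vals in zip(*copies)]

def rms_error(estimate, truth):
    """Root-mean-square difference from the true signal."""
    return (sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth)) ** 0.5

single = rms_error(noisy_copy(true_signal), true_signal)
averaged = rms_error(average([noisy_copy(true_signal) for _ in range(1000)]),
                     true_signal)
print(single > averaged)  # True: the structure only emerges after averaging
```

The real problem is far harder, because each particle's unknown orientation must be inferred before averaging, which is where the Bayesian machinery comes in.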

Today, you can get high-resolution structures of the ribosome with no crystals at all. The thing that everybody thought was a tour de force and got us our first ribosome structures, nobody would do it that way today. That has opened up all kinds of possibilities not just for the ribosome, but for all sorts of processes in the cell. We can now look at very small amounts of a sample. It doesn't even have to be pure; it could be a mixture of states. The beauty is that you would capture all of those states at once, and you would be able to visualize and get molecular structures for all of the states from a single sample. To a structural biologist, this is a dream. In the old days, you'd have to carefully figure out how to trap one particular state, make sure it was stable, purify it, try to crystallize it, and pray that it crystallized well enough. That would take years and years. Now if you're clever, you can trap a series of states possibly from a single experiment, or a few related experiments.

The ribosome is pretty abundant, and it's also easy to solve by this method because it has high contrast. If you look at fundamental processes in the cell, almost every one of them is done by a complex machine. You look at how DNA replicates during cell division, that's at the heart of biology. That's done by a huge machine. The DNA polymerase complex is this enormous machine that is very dynamic. How do you capture all of these states on this machine and figure out how it works? In higher organisms, it's even more complicated.

The same thing with RNA polymerase. How does RNA polymerase work, how does it interact with factors that tell it when to switch on genes and when not to? That's an enormous complicated problem. Once you've made proteins, you also want to degrade them. You don't want proteins to accumulate. If proteins are defective, you want to destroy them. That's all done by a very complicated set of machines like the proteasome. Everything in the cell is done by large complex structures.

We're at the threshold of a new age of structural biology, where these things that everybody thought were too difficult and would take decades and decades, are all cracking. Now we're coming to pieces of the cell. The real advance is that you're going to be able to look at all these machines and large molecular complexes inside the cell. It will tell you detailed molecular organization of the cell. That's going to be a big leap, to go from molecules to cells and how cells work.

In almost every disease, there's a fundamental process that's causing the disease, either a breakdown of a process, or a hijacking of a process, or a deregulation of a process. Understanding these processes in the cell in molecular terms will give us all kinds of ways to treat disease. They'll give us new targets for drugs. They'll give us genetic understanding. The impact on medicine is going to be quite profound over the long-term.

The two big moments of discovery in terms of the ribosome were, one, the discovery of the ribosome itself. This understanding that stitching together amino acids to make up a protein doesn't just happen by itself. It's not like amino acids suddenly recognized triplets of DNA or RNA and then somehow linked themselves up together. It takes this enormous machine to do it, a machine that's two and a half million Daltons in bacteria and almost twice that in higher organisms, this million‑atom machine that uses up energy all the time to do this process. It does it amazingly accurately.

Even today, the best protein synthesizers that we have in the lab have nowhere near the speed and accuracy of the ribosome. It's an amazing machine. That was one big point, and that happened in the '50s.

The other was being able to see the atomic structures of the ribosome. That happened from 2000 and the following few years. You could see what this machine looked like in atomic terms. That allowed you to ask questions in terms of chemistry. How does this machine work as a chemical machine?

The double helix has a certain advantage. I like to say the ribosome predates genetic information on DNA. It's much older than DNA, and it's responsible for the synthesis of almost everything in the cell. There's no question that the ribosome, in some sense, is the mother or grandmother of all molecules in the cell. From that point of view, the ribosome is absolutely fundamental to understanding how life came to be what it is today, and it still plays a fundamental role in biology.

Having said that, DNA has a certain feature that makes it universal. First of all, it is the heart of heredity. We have wondered about heredity ever since we've probably been humans. How did we come to be? Why do we look like our parents? How come humans give birth to humans, and dogs give birth to dogs? This is a profound problem. The thing about DNA is that it was the first example of a biological molecule storing information, and that information was genetic information—our heredity.

And it turned out that the molecule was a very simple molecule. It was a double helix, and the information was stored as a string of letters along the strand of the helix, each of the strands. They were complementary, so each strand contained the information needed to make the other strand. In one stroke, there was a molecular solution to this centuries-old problem of heredity and genes. That is the reason for the universal appeal of DNA.
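The complementarity described above, where each strand contains the information needed to make the other, can be sketched in a few lines. This is a toy illustration of base pairing only:

```python
# Watson-Crick base pairing: A pairs with T, G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def other_strand(strand):
    """Derive the partner strand: pair every base with its complement,
    then reverse, since the two strands run antiparallel."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(other_strand("ATGC"))  # GCAT
```

Applying the function twice returns the original strand, which is exactly why either strand alone suffices to reconstruct the double helix.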

That's only partly due to the two protagonists who were very colorful and brilliant characters, and controversial as well, which added to the spice. But it goes beyond that. DNA addresses something that we as humans have wondered about for centuries.

~ ~ ~ ~ 

Unexpectedly, I was asked if I would consider becoming president of the Royal Society. This came as a big surprise to me. I had only lived in England for about fifteen years at the time, and had come here relatively late in life. The other reason I was a bit surprised by it was that throughout my life, I'd been essentially a very focused laboratory scientist. I had not been one of these people with wide networks and well known in the public sphere. But it was a great honor, which I felt was something I couldn't turn down and which posed some interesting challenges. I said yes.

The last year has been very interesting because, for the first time, I've been taken out of my little area of ribosomes and structural biology into thinking about broader issues about science. How do we communicate science? How do we ensure that science is reliable? How do we promote interaction among scientists? How do we constantly make the case for why science is important not just to government, but to the public and to others? It's been a very interesting experience because it's made me think about science in a much broader context.

There's been a fringe benefit, and that is that I've met all kinds of interesting people that I would never have met if I'd stayed in my little lab in Cambridge. At the same time, I still have my team in Cambridge plugging away at the next problems on the ribosomes. It seems almost like the best of both worlds to me now. You'll have to ask me in five years how I felt about it.