WHY WE'RE DIFFERENT
The field I do research in is called behavioral genetics, which means the genetics of behavior, just like medical genetics means the genetics of medicine. I'm a behavioral scientist, so that's why I study it. But it also has some interesting implications from a larger science point of view. We study things like reading disability and schizophrenia. These are among the most complex traits that can be studied, but they're also very important. You don't have to explain to someone why you're trying to understand the origins of reading disabilities or schizophrenia, any of these things we study. It's not as arcane as some fields.
People understand heredity. When we talk about heredity, we're talking about eye color, hair color, height, those differences among us that are caused by DNA differences we inherit at the moment of conception. Behavioral genetics uses genetics to understand behavior. That's different from what a biologist would do, or a geneticist.
What I'm excited about now is the impact of the DNA revolution on the behavioral sciences and on society. It's an endgame for me, in terms of forty years of my research looking at genetic influences in the behavioral sciences. It's good to look at this in the perspective of forty years, and it's personal to me because it's been my journey. It might be hard for people to believe this, but forty years ago it was dangerous to talk about genetic influence in psychology.
As a graduate student at the University of Texas at Austin, my first meeting was the Eastern Psychological Association meeting in Boston. There I was, a naïve graduate student, at this meeting where I was presenting some work on behavioral genetics, which, at that time, was a twin study of personality development in children. I went to this plenary session with 3000 psychologists. Leon Kamin, who was president at that time, was giving his presidential address. He was thundering on about these wicked people daring to study genetics in psychology when we "know" that genetics doesn't have anything to do with it; it's all environmental. That was a real shocker to me because it was a rabble-rousing meeting, and the rabble was getting roused. I was the only behavioral geneticist in the audience.
That was my introduction to how political some of this was at the time and how much antipathy there was toward genetics. Things have changed quite a bit over the years. Forty years ago, the task was just to get people to consider the possibility that genetics might be important. Schizophrenia, for example, was a case of mother blaming: the view back then was that it was caused by what your mother did to you in the first few years of life.
Twenty years ago I decided that we didn't need much more research demonstrating genetic influence. Twin studies and adoption studies made a solid case that just about everything showed genetic influence. We're talking about individual differences in behavior and the extent to which genetic factors explain these differences. Genetic influences aren't just significant; they're substantial. At that time—twenty years ago—I began to feel that some people would never believe this. But now we are starting to show genetic influence on individual differences using DNA. DNA is a game changer; it's a lot harder to argue with DNA than it is with a twin study or an adoption study.
In the early 2000s there was a collective feeling that the thing that would advance the field of genetics the most would be to find some of the genes responsible for this ubiquitous heritability in the behavioral sciences. The problem was that this idea was ahead of the technology at that time. We could get DNA, in our case, from cheek swabs. We now just get it from saliva. You could get DNA and, at great expense, could genotype a few genes using candidate-gene approaches. Dopamine genes and serotonin genes are so important as neurotransmitter systems that you'd think DNA variation in those genes might make a difference. Well, that didn't pan out. There have been probably thousands of studies with these few handfuls of candidate genes.
People started realizing ten years ago that we needed to take a systematic, atheoretical approach across the 3 billion base pairs of DNA. Rather than assuming that a couple of genes were important, we needed to look across the whole genome. But how could you do that? This is where it became technology-led.
About ten years ago there were two advances that made all the difference. One of the biggest involved what we call chips, which are DNA arrays: a little plate the size of a postage stamp that can genotype hundreds of thousands of DNA variants. These variants are spread out evenly across the genome, so that with this one chip—which then cost several thousand dollars and now costs less than a hundred dollars—we could genotype a systematic array of DNA variants across the whole genome. This made it possible to do systematic genome-wide association studies, which have now taken over the life sciences. People don't do those little candidate-gene studies anymore.
It's been a revolution in the last few years. Hundreds of these genome-wide association studies have been done throughout medicine, biology, and the behavioral sciences, and they're beginning to pay off. Most of the successes are in biological and medical science, where there's a lot more money to do big-time research.
In 2013 there was an influential paper published in Science that was a genome-wide association study on what might seem like an odd variable—years of education. Nearly every genome-wide association study includes years of education as a demographic measure; you just describe your sample that way. It's a lousy variable in some ways, but socially it's an important one, and the point is that you can get large samples. The reason you need large samples has to do with what we learned very early on, which is that there are no genes of major effect. Everything is heritable, but there are no traits in the behavioral sciences where just one or two genes are involved. There are thousands of single-gene traits, like phenylketonuria, a single-gene recessive disorder that causes severe intellectual disability if untreated. But that's very rare—one in 20,000 people in the world. It doesn't enter into the heritability of, say, cognitive abilities and learning abilities, because there are so few people with the disorder. Learning abilities and cognitive abilities are substantially heritable, but the early genome-wide association studies—including mine—which were powered to detect genes that accounted for, say, 3 to 4 percent of the variance, consistently came up empty-handed. It began to dawn on people that we were looking at many genes of very small effect size, and that meant we were going to need very large samples to detect them.
This Science paper in 2013 came up with just three hits—DNA variants—in different bits of the genome that were genome-wide significant. When you have hundreds of thousands of DNA variants, you have to correct for multiple testing, massively. This isn't a probability value of 0.05; it's 0.0000005. You have to have a big sample to detect these effects as significant. These hits were of genome-wide significance, and they replicated in independent samples.
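To see where a threshold like that comes from, here's the standard Bonferroni-style arithmetic for correcting across hundreds of thousands of tests (an illustration only; individual studies set their exact genome-wide thresholds in slightly different ways):

```latex
% Bonferroni-style correction: divide the usual threshold by the number of tests
\alpha_{\text{genome-wide}} = \frac{\alpha}{m} = \frac{0.05}{100{,}000} = 5 \times 10^{-7}
```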
The largest effect of those three hits in the Science paper was 0.02 percent of the variance. That's 0.0002, an incredibly tiny effect. However, when they summed up these effects over all the variants, they explained almost 2 percent of the variance of years of education. Years of education is around 50 percent heritable. This means that, of the differences among people in their years of schooling, about half of those differences can be explained with genetics. In the genome-wide association study, with 120,000 people, we were detecting effects that accounted for 2 percent of the variance in independent samples. That's 2 percent, whereas the heritability is 50 percent. There's a big gap there, but explaining 2 percent of the variance was a start.
Those results were used in over 100 studies to begin to create a genetic score that could predict years of education. It turns out the score explains more variance in cognitive ability than it does in years of education: about 3 percent of the variance in general cognitive ability, otherwise known as intelligence. It was exciting because it was the first time we had found genes that accounted for variance in the population in the behavioral sciences. That was 2013. A follow-up to that paper came out in Nature in May 2016. This time it didn't have 120,000 subjects; it had 300,000, and instead of three genome-wide significant variants influencing years of education, it found seventy-four. Together, these variants explained 5 percent of the variance in years of education. That is going to be a turning point. You might say, "Well, it's only 5 percent," but 5 percent begins to give you predictability in the real world.
Here's the neat thing. We've taken these results and created what we call a "polygenic score," where you take all of those top associations between DNA variants and years of education and sum them together to create a score for each individual. With this score, you can then ask: years of education is a pretty rough variable, so what about actual tests of school performance? We're using this right now. We have a paper coming out showing that the polygenic score from the 2016 paper explains almost 10 percent of the variance in tests of school performance. These are GCSE scores in the UK, national tests administered at the end of compulsory schooling at age sixteen.
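As a concrete sketch of what summing those associations means, here is a minimal polygenic-score calculation in Python. Everything in it is hypothetical: the genotypes are random and the effect sizes are made up; a real score is built from the effect-size estimates published with a genome-wide association study.

```python
import numpy as np

def polygenic_score(genotypes, effect_sizes):
    """Sum each person's trait-associated allele counts, weighted by
    the per-allele effect sizes estimated in a GWAS.

    genotypes: (n_people, n_variants) array of allele counts (0, 1, or 2)
    effect_sizes: (n_variants,) array of per-allele effect estimates
    """
    return genotypes @ effect_sizes

# Hypothetical data: random genotypes and many tiny effects
rng = np.random.default_rng(0)
n_people, n_variants = 1_000, 500
genotypes = rng.binomial(2, 0.5, size=(n_people, n_variants))
effect_sizes = rng.normal(0.0, 0.01, size=n_variants)

scores = polygenic_score(genotypes, effect_sizes)
print(scores.mean(), scores.std())  # the scores come out roughly normally distributed
```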
Those scores are about 60 percent heritable, which we know from our twin studies. There's still a gap, but explaining 10 percent of the variance in the social and behavioral sciences is pretty good going. It's why I say we're at this turning point now where people are going to begin to realize that if you don't believe in genetics, you're going to have to argue with DNA. You can't just say, "The twin study is no good," or "the adoption study is no good." DNA is real.
For the first time, it will allow us to make genetic predictions for an individual. In the past the best you could do with schizophrenia or alcoholism was make a prediction based on family risk. Everybody in the family has the same risk. Here, you can make predictions for an individual. When we take this polygenic score and look at it within families, siblings correlate 50 percent. They're 50 percent similar genetically. Half of the time they'll have the same DNA variant, but half of the time they won't. If you look at this polygenic score, some siblings will be similar and some will be different.
That difference in the polygenic score translates into a difference in the siblings within a family, in terms of their GCSE scores. We're talking about a pretty big difference. These polygenic scores are perfectly normally distributed as a bell-shaped curve. If you split this curve up into seven equal parts and then you take the people at the top septile and the bottom septile, the difference between them in their GCSE scores is one whole grade. It's the difference between getting into university or not. Even though we're only explaining about 10 percent of the variance, it's still enough to make a difference.
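Here's a minimal simulation of that septile comparison, assuming only that the polygenic score explains about 10 percent of the variance in exam scores (the translation into GCSE grades isn't modeled; the gap comes out at roughly one standard deviation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
r2 = 0.10  # assumed share of exam-score variance explained by the polygenic score

pgs = rng.standard_normal(n)                        # polygenic score
noise = rng.standard_normal(n)                      # everything else
exam = np.sqrt(r2) * pgs + np.sqrt(1 - r2) * noise  # exam score, total variance 1

# Split people into seven equal groups (septiles) by polygenic score
septile = np.floor(7 * pgs.argsort().argsort() / n).astype(int)
gap = exam[septile == 6].mean() - exam[septile == 0].mean()
print(f"top vs. bottom septile: {gap:.2f} standard deviations")  # roughly 1.0
```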
When people begin to realize this, it's going to be seen as a real turning point in genetics and the behavioral sciences. What I'm interested in doing now is thinking about how we play this endgame. How do we look at the impact on science? I'm pretty clear on that, but much less clear about where we go in terms of society. It's only 10 percent of the variance, but now might be a good time to get a real discussion going about what we do with this, how we use it, and how we avoid potential abuses of this data.
I'm convinced that with all the DNA companies out there, this is going to happen, whether we like it or not. It's better for us to get ahead of the curve and begin to anticipate the potential, as well as the problems. I'm more of a cheerleader because there is a lot of positive potential for this work. There are lots of doom-mongers out there, so I'm needed as an antidote to all the doom-mongers who say, "Oh, this is just terrible, to be able to predict genetically how people are going to turn out in life."
We're talking about genetic influence and the statistic of heritability, which is simply an effect-size statistic. Heritability can range from zero, meaning genetics doesn't account for any of the differences between people, to 100 percent, where it explains all of the differences. I should clarify that variance is also a descriptive statistic, one that describes the normal distribution we usually find. Variance is calculated from squared differences from the mean of a population. All it means is that if there's a lot of variance, the distribution is spread out, and if there isn't much variance, the distribution is tightly clustered around the mean.
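In symbols, using the standard textbook definitions (nothing here is specific to any one study):

```latex
% Variance: the average squared deviation from the population mean
\sigma^2_P \;=\; \frac{1}{N}\sum_{i=1}^{N} (x_i - \bar{x})^2

% Heritability: the share of phenotypic variance attributable to genetic variance
h^2 \;=\; \frac{\sigma^2_G}{\sigma^2_P}, \qquad 0 \le h^2 \le 1
```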
What we're trying to do in behavioral genetics and medical genetics is explain differences. It's important to know that we all share approximately 99 percent of our DNA sequence. If we sequence, as we can now readily do, all of our 3 billion base pairs of DNA, we will be the same at over 99 percent of all those bases. That's what makes us similar to each other. It makes us similar to chimps and most mammals. We're over 90 percent similar to all mammals. There's a lot of genetic similarity that's important from an evolutionary perspective, but it can't explain why we're different. That's what we're up to, trying to explain why some children are reading disabled, or some people become schizophrenic, or why some people suffer from alcoholism, et cetera. We're always talking about differences. The only genetics that makes a difference is that 1 percent of the 3 billion base pairs. But that is over 10 million base pairs of DNA. We're looking at these differences and asking to what extent they cause the differences that we observe. I hesitated over the word "cause" because we don't often use that word with correlations. When DNA correlates with something, like reading ability, it's the only correlation that you can unambiguously interpret causally, because nothing changes your DNA sequence. In the first cell that forms you, you inherit the DNA combined from your mother and father. The DNA that you inherit is what causes the inherited differences that we see in everything.
We pick up mutations as we go along, but identical twins, who had the same DNA at birth, are about as similar to each other in their DNA sequence late in life as each of them is to his or her own earlier self. We all pick up some mutations along the way, but the DNA that we inherit is transmitted with great fidelity throughout life.
~ ~ ~
The word "gene" wasn't coined until 1909. Mendel did his work in the mid-19th century, and in the early 1900s, when Mendel was rediscovered, people finally realized the impact of what he had done, which was to show the laws of inheritance of a single gene. At that time, the Mendelians went around looking for Mendelian 3:1 segregation ratios, which was the essence of what Mendel showed: that inheritance was discrete. But most of the socially, behaviorally, or agriculturally important traits aren't either/or traits like a single-gene disorder. Huntington's disease, for example, is a single-gene dominant disorder, which means that if you have the mutant form of the Huntington's gene, you will have Huntington's disease. It's necessary and sufficient. But that's not the way complex traits work.
Another group, in England, were the Galtonians, who became the Fisherians. They were interested in quantitative trait variation and dismissed the Mendelian stuff as something weird about pea plants because, clearly, the traits they cared about showed normal distributions. Then in 1918, Fisher figured out that Mendel could be right: if many genes were involved, you would get a normal distribution, even if each of those genes worked in the discrete way that Mendel said they did. It's like flipping a coin. Each flip is either heads or tails, but if you count the heads over 100 flips you get a total score, and across many people those total scores will be approximately normally distributed.
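Here's a small simulation of that idea (a sketch, not Fisher's 1918 model itself): each "gene" contributes a discrete count of 0, 1, or 2, and the sum across a hundred genes comes out bell-shaped.

```python
import numpy as np
import matplotlib.pyplot as plt

# Each gene acts discretely, like a coin flip, but the sum of many
# small discrete contributions is approximately normally distributed.
rng = np.random.default_rng(42)
n_people, n_genes = 10_000, 100
alleles = rng.binomial(2, 0.5, size=(n_people, n_genes))  # 0, 1, or 2 per gene
trait = alleles.sum(axis=1)  # polygenic trait: a sum of discrete effects

plt.hist(trait, bins=40)  # a bell-shaped curve emerges
plt.xlabel("number of trait-increasing alleles")
plt.ylabel("number of people")
plt.show()
```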
People realized these two views of genetics could come together. Nonetheless, the two worlds split apart because Mendelians became geneticists who were interested in understanding genes. They would take a convenient phenotype, a dependent measure, like eye color in flies, just something that was easy to measure. They weren't interested in the measure, they were interested in how genes work. They wanted a simple way of seeing how genes work.
By contrast, the geneticists studying complex traits—the Galtonians—became quantitative geneticists. They were interested in agricultural traits or human traits, like cardiovascular disease or reading ability, and would use genetics only insofar as it helped them understand that trait. They were behavior centered, while the molecular geneticists were gene centered. The molecular geneticists wanted to know everything about how a gene worked. For almost a century these two worlds of genetics diverged.
It was only in the 1980s that the two worlds started to come together. The molecular geneticists realized that, outside of the few thousand rare single-gene disorders, most of the burden in society—the medical problems, such as cardiovascular disease and just about anything you can mention—isn't like that. It's heritable, but not a single gene. Many genes are involved. The quantitative geneticists became envious of the possibilities of trying to identify specific genes.
In the 1980s, for the first time, we began to assess DNA variants directly. All we had up until then was a handful of genetic markers, like blood types, and you couldn't go very far in identifying genes with only a handful of markers across the genome. Once we could sequence DNA and look for differences between people, we could use these new techniques to measure DNA variation directly. That made the molecular geneticists realize they could study more complex traits, and it made the quantitative geneticists realize they could identify genes, even if there are many genes of small effect.
In the last ten years these two worlds of genetics have completely come together. It's technology driven: a key factor is these DNA chips, a little plate the size of a postage stamp that can genotype hundreds of thousands of DNA variants throughout the genome. And because it's miniaturized, you can do it cheaply. Every lab could do this.
The first major study done using this genome-wide association approach was for age-related macular degeneration. It had some quick hits in 2005, which made everyone realize this wasn't just theoretical. The Wellcome Trust then funded a huge consortium of hundreds of researchers that focused on seven common disorders. Most of these were medical disorders, like hypertension and Crohn's disease, but they also included bipolar disorder (manic depression), a behavioral disorder. That study brought people together and made them collaborate, because they realized they needed big samples to be able to correct for multiple testing and to find genes of small effect. It seems like God was messing with us, in a way: the effects turned out to be far smaller than anyone had expected.
That first study of macular degeneration had only about 100 families, yet it worked incredibly well. Macular degeneration is a common cause of blindness in later life, and the study found two big hits that explained maybe half of the genetic variance in it. What are these genes? Nothing anyone had studied before: they sit in inflammatory pathways. This discovery led to drug trials of anti-inflammatories as preventives for people at genetic risk for age-related macular degeneration. Everything worked.
But it never happened again. In all the other medical disorders that people studied subsequently, there's never been another example like age-related macular degeneration, where you get a couple of big hits that explain most of the heritability. It would be wonderful if that were the case, but we now have tremendous statistical power, enough to conclude decisively that for all these other medical and psychiatric disorders, there are no genes of big effect.
Before, when we were doing what are called linkage studies, explaining 10 percent of the variance of the liability for a disorder would have been considered a small effect. Now we're talking about 1 percent as a big effect size. We're talking about relative risks of 1.05, barely above the chance level of 1.0. That's not like smoking cigarettes, where you're talking about a tenfold relative risk of lung cancer.
There was a lot of excitement in the whole scientific community. I want to emphasize that this isn't just the behavioral sciences; this is all the life sciences, including the medical sciences. Everybody was collaborating to create as large a sample as possible to detect these smaller effect sizes. But there were no other results like macular degeneration. There were some solid effects that replicated, but they were much smaller. No one ever thought the biggest effects could be so small. Then people said, "If that's how nature works, we've got to roll up our sleeves and get bigger samples, or else, instead of that brute-force approach, think about cleverer ways to find genes."
We were only using common variants on these chips. Rare variants, say at a frequency of 0.1 percent—1 in 1,000—aren't very informative statistically: in a sample of 10,000 people, only about ten of them carry the variant. The chips used common variants because they're more informative throughout the genome. But what if the genetic variants that matter aren't just these common ones? In fact, hindsight always being perfect, you can say that these common variants are common, so they can't be that bad; selection would have weeded out common variants with large harmful effects.
There's a lot of interest now in looking for these rare variants. Everyone is holding their breath waiting for the next big thing, which is whole-genome sequencing. You don't just get hundreds of thousands of DNA variants tagging the whole genome; you get all 3 billion base pairs of DNA for each individual. That's the end of the story; that's all you inherit. There are probably several hundred thousand people in the world now who have had their whole genome sequenced, and it's thought that in a year or so a million people will have had all their 3 billion base pairs sequenced.
We've got to be able to figure out where the so-called missing heritability is, that is, the gap between the DNA variants we are able to identify and the estimates we have from twin and adoption studies. For example, height is about 90 percent heritable, meaning that, of the differences between people in height, about 90 percent can be explained by genetic differences. With genome-wide association studies, we can account for 20 percent of the variance of height, or roughly a quarter of the heritability. That's still a lot of missing heritability, but 20 percent of the variance is impressive.
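The arithmetic of that gap, in the height example:

```latex
% Twin-study heritability vs. variance captured by genome-wide association
h^2 = 0.90, \qquad R^2_{\text{GWAS}} = 0.20

% Fraction of the heritability captured so far
\frac{R^2_{\text{GWAS}}}{h^2} = \frac{0.20}{0.90} \approx 0.22

% The "missing heritability" is the remainder: 0.90 - 0.20 = 0.70
```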
With schizophrenia, for example, people say they can explain 15 percent of the genetic liability. The jury is still out on how that translates into the real world. What you want to be able to do is get this polygenic score for schizophrenia that would allow you to look at the entire population and predict who's going to become schizophrenic. That's tricky because the studies are case-control studies based on extreme, well-diagnosed schizophrenics, versus clean controls who have no known psychopathology. We'll know soon how this polygenic score translates to predicting who will become schizophrenic or not.
These are powerful new results. Anything powerful in science can have a downside as well as an upside. I like to be a cheerleader for all the positive potential for this work, but now is a good time to have a discussion about the potential misuses. I often run into a knee-jerk reaction from people wanting to know why you would want to be able to predict who's going to have problems, like alcoholism, schizophrenia, or a reading disability. Aside from the usual scientific answer—that we're truth seekers—my answer to that question is that all of medicine has moved from curing problems, which we don't do very well, to preventing problems. So it's looking at it from more of a public health perspective. To prevent problems, you've got to predict them. DNA is the best predictive game in town because it's causal. By being able to predict, we can begin to intervene.
I know several people who don't drink at all because they had an alcoholic parent. For them, that was persuasion enough about the dangers of alcohol. But that's a family-based risk; within a family, some siblings won't have nearly the genetic risk of other siblings. Public health research shows that interventions specific to an individual are more effective. If you say, "You can drink as much as your sibling, but you have a risk of alcoholism that's three times greater," that gets people's attention, and it could help us begin to intervene to prevent the alcoholism. It's a low-tech solution: if you don't drink, you won't become alcoholic.
It's similar with schizophrenia. It's been shown that if you can forestall the first schizophrenic episode, or ameliorate some of the symptoms, say, with cognitive behavioral therapy, you can make subsequent episodes less severe. You won't cure schizophrenia, but you can make the long-term prognosis better.
It's the same with reading disabilities. It's a preventive, predictive approach. We know that, unlike schizophrenia and alcoholism, even if you wait until kids are diagnosed as reading disabled at school, you can still do something about it. But by the time you find out they're having a lot of trouble reading, it's like Humpty Dumpty falling off the wall: it's hard to put them back together again. There's a lot of collateral damage; they come to think they're no good, because reading is so central to schooling. Almost every child who has problems learning to read at school had language problems earlier, and there are good language intervention programs. If you could predict which children will have these problems, that's a plausible way of intervening early to prevent them. Why not just do that for all kids? The answer is that interventions that work, especially in the behavioral sciences, aren't cheap magic bullets; they usually involve pretty intensive, expensive programs.
That's one answer to the question of why we want to predict problems genetically. Scientifically, there are good reasons to think about everyone having their genome sequenced. For people in doubt about this, I'd recommend a book co-edited by Francis Collins, now several years old, called Genomic Medicine, which makes the case for how useful this approach to identifying genes can be. If people don't know, Collins was the head of the Human Genome Project and is now director of the National Institutes of Health in the United States. In the book, he says that in the next few years all newborns will have their genome sequenced and that, looking back on it, we'll say how unethical it was not to have done so. Right now we only screen for genetic problems like PKU—phenylketonuria—with a heel-prick blood test given to just about all babies. Other than that, the only single-gene testing you would have done is when someone in the family already has the problem: if you have one child with a genetic disease and you get pregnant, you would look to see whether the new child has it too. That's incredibly ineffective, because most of these disorders are recessive; they don't show up in the parents or in most of the children. In the most typical case, both parents carry one copy of the allele but don't display the disorder, because two copies are needed, and on average one in four of their children is affected.
For the price of doing a couple of those gene tests, you could do the whole genome sequence. I'm with Francis Collins on that. It'll raise problems, but to be able to identify all the single-gene disorders would be great. From my perspective, it would be great scientifically because it would mean you wouldn't have to collect DNA, you wouldn't have to do any genotyping if the sequence is there on a little memory stick. That's all it would take. Even for complex traits like schizophrenia or alcoholism, there's a lot of merit in being able to predict.
Of course there are downsides that people should discuss, like labeling. But in schools, with reading disability and behavioral problems, there's a lot of labeling that goes on anyway. You can call them robins and bluebirds, but the kids figure out pretty quickly which group has the reading problems, or the math problems, or the behavioral problems.
Bringing this back to my research, which is primarily education and learning abilities, such as reading and mathematics abilities, I have had a lot of trouble convincing people in education that genetics could be important, which just blows my mind.
If you look at the books and the training that teachers get, genetics doesn't get a look-in. Yet if you ask teachers, as I've done, why they think children are so different in their ability to learn to read, they know that genetics is important. When it comes to governments and educational policymakers, the knee-jerk reaction is that if kids aren't doing well, you blame the teachers and the schools; if that doesn't work, you blame the parents; if that doesn't work, you blame the kids because they're just not trying hard enough. An important message from genetics is that you've got to recognize that children differ in their ability to learn. We need to respect those differences because they're genetic. That's not to say we can't do anything about it.
It's like obesity. The NHS has been thinking about charging people for being obese because, like smoking, they say it's your fault. Weight is not as heritable as height, but it's highly heritable; maybe 60 percent of the differences in weight are genetic. That doesn't mean you can't do anything about it. If you stop eating, you won't gain weight. But given normal life in a fast-food culture, with our Stone Age brains that want fat and sugar, it's much harder for some people than for others.
We need to respect the fact that genetic differences are important, not just for body mass index and weight, but also for things like reading disability. I know personally how difficult it is for some children to learn to read. Genetics suggests that we need to have more recognition that children differ genetically, and to respect those differences. My grandson, for example, had a great deal of difficulty learning to read. His parents put a lot of energy into helping him learn to read. We also have a granddaughter who taught herself to read. Both of them now are not just learning to read but reading to learn.
Genetic influence is just influence; it's not deterministic like a single gene. At government levels—I've consulted with the Department for Education—I don't think they're as hostile to genetics as I had feared; they're just ignorant of it. Education just doesn't consider genetics, whereas teachers on the ground can't ignore it. I never get static from them, because they know that these children are different when they start. Some go off on very steep trajectories, while others struggle all the way along the line. When the government sees that, they tend to blame the teachers, the schools, the parents, or the kids. The teachers know better. They're not ignoring the struggling child; if anything, they're putting more energy into that child.
It's important to recognize and respect genetically driven individual differences. It's better to make policy based on knowledge than on fiction. A lot of what I see in education is fiction. In education, part of the reason people shy away from genetics is because they think it's associated with a right-wing agenda. It's so important to emphasize that scientific facts are neutral. It's the values that you apply to them that should determine policy.
If there are, as I'm certain there are, strong genetic influences on individual differences in learning to read, a right-wing agenda might say, "We could save a lot of money by just putting money into the very best kids, because it won't take much and they'll go sailing off." That's a silly policy: you don't need many Newtons to create calculus or the other big advances we've had in science, and a society depends on intellectual capital, which involves a much broader intellectual infrastructure than a few geniuses. My values suggest the opposite of the right-wing agenda. It's called the Finnish model in education: the idea that, in a technologically advanced society, we need to ensure that all citizens reach some minimal level of literacy and numeracy. We need to put the resources into the lower end to make sure people don't fall off the low end of the bell curve, because to participate in society you need a certain level of literacy and numeracy. You can take the same data—that genetics is important—and your policies, depending on your values, could be very different.
Those are the big issues that I am confronted with when I talk to people in education. On the whole, I don't even bother talking to them about DNA because we're still at a level where even considering the possibility that differences between children in their ability to learn could be genetically influenced is like clinical psychology thirty years ago. If you talked about genetics, they hated it. They'd say, "Well, that's the end of clinical psychology. If it's genetic, we can't do anything about it." You'd say, "No, no, no, that's wrong." In fact, by identifying genetic differences, you might be able to create therapies that work especially well for certain people. It's the same thing in education.
Education is the last backwater of anti-genetic thinking. It's not even anti-genetic. It's as if genetics doesn't even exist. I want to get people in education talking about genetics because the evidence for genetic influence is overwhelming. The things that interest them—learning abilities, cognitive abilities, behavior problems in childhood—are the most heritable things in the behavioral domain. Yet it's like Alice in Wonderland. You go to educational conferences and it's as if genetics does not exist.
I'm wondering about where the DNA revolution will take us. If we are explaining 10 percent of the variance of GCSE scores with a DNA chip, it becomes real. People will begin to use it. It's important that we begin to have this conversation. I'm frustrated at having so little success in convincing people in education of the possibility of genetic influence. It is ignorance as much as it is antagonism.