For an empiricist, science brings many surprises. It has continued to change my thinking about many phenomena by challenging my presumptions about them. Among the first of my assumptions to be felled by evidence was that career choice proceeds in adolescence by identifying one's most preferred options; it actually begins early in childhood as a taken-for-granted process of eliminating the least acceptable from further consideration. Another mistaken presumption was that different abilities would be important for performing well in different occupations. The notion that any single ability (e.g., IQ or g) could predict performance to an appreciable degree in all jobs seemed far-fetched the first time I heard it, but that's just what my own attempt to catalog the predictors of job performance would help confirm. My root error had been to assume that different cognitive abilities (verbal, quantitative, etc.) are independent—in today's terms, that there are "multiple intelligences." Empirical evidence says otherwise.
The most difficult ideas to change are those which seem so obviously true that we can scarcely imagine otherwise until confronted with unambiguous disconfirmation. For example, even behavior geneticists had long presumed that non-genetic influences on intelligence and other human traits grow with age while genetic ones weaken. Evidence reveals the opposite for intelligence and perhaps other human traits as well: heritabilities actually increase with age. My attempt to explain the evolution of high human intelligence has also led me to question another such "obvious truth," namely, that human evolution ceased when man took control of his environment. I now suspect that precisely the opposite occurred. Here is why.
Human innovation itself may explain the rapid increase in human intelligence during the last 500,000 years. Although it has improved the average lot of mankind, innovation creates evolutionarily novel hazards that put the less intelligent members of a group at relatively greater risk of accidental injury and death. Consider the first and perhaps most important human innovation, the controlled use of fire. It is still a major cause of death worldwide, as are falls from man-made structures and injuries from tools, weapons, vehicles, and domesticated animals. Much of humankind has indeed escaped from its environment of evolutionary adaptation (EEA), but only by fabricating new and increasingly complicated physical ecologies. Brighter individuals are better able not only to extract the benefits of successive innovations, but also to avoid the novel threats to life and limb that they create. Unintentional injuries and deaths have such a large chance component and their causes are so varied that we tend to dismiss them as merely "accidental," as if they were uncontrollable. Yet all are to some extent preventable with foresight or effective response, which gives an edge to more intelligent individuals. Evolution requires only tiny such differences in odds of survival in order to ratchet up intelligence over thousands of generations. If human innovation fueled human evolution in the past, then it likely still does today.
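How tiny can such a survival edge be and still matter? A back-of-the-envelope sketch makes the point. The numbers below (a 0.1% relative fitness advantage, 10,000 generations) are purely hypothetical choices for illustration, using the standard textbook model in which a lineage's relative frequency is multiplied by its relative fitness each generation:

```python
# Two lineages start equally common. One survives to reproduce at a rate
# just 0.1% higher per generation -- hypothetical numbers for illustration.
edge = 1.001          # relative fitness of the slightly-favored lineage
generations = 10_000  # "thousands of generations"

# Each generation, the favored lineage's frequency relative to the other
# is multiplied by its fitness advantage, so the ratio compounds.
ratio = edge ** generations
share = ratio / (1 + ratio)  # favored lineage's share of the population

print(f"After {generations} generations, the favored lineage makes up "
      f"{share:.2%} of the population.")
```

An advantage far too small to detect within any single generation nonetheless sweeps through the population given enough time.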
Another of my presumptions bit the dust, but in the process exposed a more fundamental, long-brewing challenge to my thinking about scientific explanation. At least in the social sciences, we seek big effects when predicting human behavior, whether we are trying to explain differences in happiness, job performance, depression, health, or income. "Effect size" (percentage of variance explained, standardized mean difference, etc.) has become our yardstick for judging the substantive importance of potential causes. Yet, while strong correlations between individuals' attributes and their fates may signal causal importance, small correlations do not necessarily signal unimportance.
Evolution provides an obvious example. Like the house in a gambling casino, evolution realizes big gains by playing small odds over myriad players and long stretches of time. The small-is-inconsequential presumption is so ingrained and reflexive, however, that even those of us who seek to explain the evolution of human intelligence over the eons have often rejected hypothesized mechanisms (say, superior hunting skills) when they could not explain differential survival or reproductive success within a single generation.
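The casino analogy can be made concrete. In the sketch below, the house holds a 1% edge on an even-money bet, a hypothetical figure chosen only for illustration; the normal approximation to the binomial shows how the house's chance of being ahead climbs toward certainty as the number of bets grows:

```python
import math

def prob_ahead(n, p=0.51):
    """Probability the house is net ahead after n even-money bets it wins
    with probability p, via the normal approximation to the binomial."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = (n / 2 - mean) / sd          # threshold: winning more than half the bets
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(wins > n/2)

# A 1% per-bet edge is nearly invisible in one play, decisive over many.
for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9} bets: house ahead with probability {prob_ahead(n):.3f}")
```

After a single bet the house is barely better than a coin flip; after a million bets it is ahead essentially for certain. The same arithmetic applies to any small but consistent advantage played out over myriad trials.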
IQ tests provide a useful analogy for understanding the power of small but consistent effects. No single IQ test item measures intelligence well or has much predictive power. Yet, with enough items, one gets an excellent test of general intelligence (g) from only weakly g-loaded items. How? When test items are considered one by one, the role of chance dominates in determining who answers the item correctly. When test takers' responses to many such items are added together, however, the random effects tend to cancel each other out, and g's small contribution to all answers piles up. The result is a test that measures almost nothing but g.
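The aggregation argument above can be simulated directly. In this sketch (all parameters are hypothetical choices for illustration), each of 100 items is answered mostly by chance, with g nudging the odds of a correct answer by at most ten percentage points; single items correlate only weakly with g, yet the total score correlates strongly:

```python
import math
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N_PEOPLE, N_ITEMS = 2000, 100
g = [random.gauss(0, 1) for _ in range(N_PEOPLE)]  # each person's general ability

def answer(gi):
    # Mostly chance, plus a small g-dependent nudge (at most +/- 10 points).
    p_correct = 0.5 + 0.1 * math.tanh(gi)
    return 1 if random.random() < p_correct else 0

items = [[answer(gi) for gi in g] for _ in range(N_ITEMS)]

# Any single item is a poor measure of g ...
single_rs = [corr(item, g) for item in items]
mean_single_r = sum(single_rs) / N_ITEMS
print(f"mean single-item correlation with g: {mean_single_r:.2f}")

# ... but summing many items cancels the chance component, and g's small
# contribution to every answer piles up into a strong total-score correlation.
totals = [sum(items[k][i] for k in range(N_ITEMS)) for i in range(N_PEOPLE)]
total_r = corr(totals, g)
print(f"total-score correlation with g:      {total_r:.2f}")
```

With enough items the total score tracks g closely even though no single item does, which is exactly the Spearman-style aggregation logic the paragraph describes.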
I have come to suspect that some of the most important forces shaping human populations work in this inconspicuous but inexorable manner. When seen operating in individual instances, their impact is so small as to seem inconsequential, yet their consistent impact over events or individuals produces marked effects. To take a specific example, only the calculus of small but consistent tendencies in health behavior over a lifetime seems likely to explain many demographic disparities in morbidity and mortality, not just accidental death.
Developing techniques to identify, trace, and quantify such influences will be a challenge. It currently bedevils behavior geneticists who, having failed to find any genes with substantial influence on intelligence (within the normal range of variation), are now formulating strategies to identify genes that may each account for at most 0.5% of the variance in intelligence.