On 13 December 1994, in the aftermath of the publication of The Bell Curve by Richard J. Herrnstein and Charles Murray, a group of 52 scientists, led by the courageous intelligence researcher (and the greatest nemesis of political correctness) Linda S. Gottfredson, published a joint statement entitled “Mainstream Science on Intelligence” in the Wall Street Journal. Virtually all of the controversy surrounding The Bell Curve stemmed from public misunderstanding of intelligence and intelligence testing, and the statement was Gottfredson’s attempt to correct those misconceptions. Nearly 15 years later, however, I am sad to see that most of them are still widely held by the general public.
Gottfredson’s 1994 statement contained 25 conclusions drawn from decades of intelligence research, all of which the 52 signatories endorsed. To my knowledge, none of the 25 conclusions has been overturned in the 15 years since; they have stood the test of time, and subsequent studies in intelligence research have only confirmed and supported them more strongly.
There are two particular misconceptions that poison public discourse about intelligence. The first is that intelligence tests are culturally biased. As Gottfredson’s statement points out, intelligence tests are not culturally or otherwise biased against any racial, ethnic, cultural, or social class group. Intelligence tests are among the most accurate and predictive of all psychometric tests; for example, psychometricians can measure individuals’ intelligence much more accurately than their personality. And intelligence test scores predict academic and job performance of individuals of all racial and ethnic groups equally well.
This does not mean, however, that intelligence tests are 100% accurate at all times or that the same individual always gets the same IQ score on every test. In statistical jargon, intelligence tests still have some random measurement error (or “noise”), so they do not measure every individual’s intelligence with perfect accuracy; as a result, the same individual may score slightly (but only slightly) differently on different tests, or on the same test on different occasions. However, intelligence tests do not have systematic measurement error (or “bias”): they are not more accurate measures of intelligence for some groups of individuals than for others. An ordinary bathroom scale has random measurement error; you don’t always get the same reading every time you step on it (even if your weight is exactly the same). But it is not systematically biased against fat or skinny people.
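The scale analogy can be made concrete with a toy simulation. All the numbers below (the true weight, the error sizes) are invented purely for illustration; the point is only that random error averages out across repeated readings, while systematic bias does not.

```python
import random

random.seed(0)

def noisy_scale(true_weight, n=10_000):
    """An unbiased scale: each reading has random error, but the
    readings scatter symmetrically around the true value, so the
    average of many readings lands close to the truth."""
    return sum(true_weight + random.gauss(0, 1.5) for _ in range(n)) / n

def biased_scale(true_weight, n=10_000):
    """A biased scale: the same random error, plus a systematic
    offset of +2 units, so even the average of many readings is
    about 2 units too high."""
    return sum(true_weight + 2.0 + random.gauss(0, 1.5) for _ in range(n)) / n

true_weight = 70.0
print(noisy_scale(true_weight))   # averages out close to the true 70
print(biased_scale(true_weight))  # stays roughly 2 units too high
```

Gottfredson’s point is that intelligence tests behave like the first scale, not the second: noisy for everyone, but biased against no group.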
The high validity and reliability of intelligence tests, and the absence of bias against any group, means, among other things, that there is no difference between one’s “intelligence” and measured “IQ.” IQ is simply a measure of one’s intelligence, in the same sense that what the bathroom scale reads when you stand on top of it is your weight. There is no meaningful difference between your true weight and what your bathroom scale says. There is no meaningful difference between your true intelligence and what you score on an IQ test.
The second common misconception is that intelligence test scores can be easily manipulated by environmental factors. Intelligence is among the most heritable of all human traits. Heritability of a trait is the proportion of the variance in the trait that is attributable to genetic differences. Heritability of intelligence typically ranges from .4 to .8. It increases as individuals get older; the heritability of intelligence is closer to .4 during childhood and is closer to .8 in adulthood. (Yes, genes become more important, and the environment becomes less important, for determining your intelligence as you get older. I’ll leave it to the reader to figure out why this may be.)
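The definition of heritability as a proportion of variance can be shown with a toy model. In the sketch below, each person’s trait score is simply a genetic component plus an independent environmental component; the variances (4 and 1) are chosen arbitrarily so that the genetic share of total variance comes out near .8, the adult figure cited above. This is an illustration of the definition, not a model of how intelligence actually develops.

```python
import random

random.seed(1)

# Toy model: trait = genetic component + environmental component.
# Variances below are invented so the genetic share is about 4/5.
n = 100_000
genetic = [random.gauss(0, 2.0) for _ in range(n)]      # variance ~ 4.0
environ = [random.gauss(0, 1.0) for _ in range(n)]      # variance ~ 1.0
trait = [g + e for g, e in zip(genetic, environ)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Heritability: proportion of trait variance attributable to genes.
heritability = variance(genetic) / variance(trait)
print(round(heritability, 2))  # ~ 4 / (4 + 1) = 0.8
```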
The adult heritability of .8 for intelligence means that genes determine 80% of individual differences in intelligence among adults, and non-genetic, environmental factors determine the remaining 20%. These environmental influences include prenatal conditions in the womb as well as early childhood experiences (mostly health and nutrition). Gottfredson’s 1994 statement observes that “IQs do gradually stabilize during childhood, however, and generally change little thereafter.” A 2004 longitudinal study of a Scottish sample, conducted by an Edinburgh team of scientists led by Ian J. Deary, shows that intelligence changes very little after the age of 11: the correlation between IQ measured at age 11 and IQ measured at age 80 is .73.
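What a test–retest correlation of .73 looks like can be sketched with another toy simulation: give everyone a stable component plus independent age-specific noise, with the variances chosen so that the expected correlation between the two measurements is .73. Everything here (the stable/noise split, the IQ-style scaling) is invented for illustration and is not the Scottish data.

```python
import math
import random

random.seed(2)

# Stable component carries 73% of the (standardized) variance;
# age-specific noise carries the remaining 27%, so the expected
# correlation between the two occasions is 0.73 / (0.73 + 0.27) = 0.73.
n = 100_000
stable = [random.gauss(0, math.sqrt(0.73)) for _ in range(n)]
iq_11 = [100 + 15 * (s + random.gauss(0, math.sqrt(0.27))) for s in stable]
iq_80 = [100 + 15 * (s + random.gauss(0, math.sqrt(0.27))) for s in stable]

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

print(round(pearson(iq_11, iq_80), 2))  # ~ 0.73
```

The simulation shows how a correlation that high over nearly seven decades implies a mostly stable underlying trait with only modest fluctuation around it.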
The little-known fact that intelligence remains stable after childhood means, among other things, that there is very little individuals can do in their adolescence and adulthood to increase their intelligence. One cannot increase one’s intelligence by studying, by reading books, by receiving education, or by going to better schools. There is a strong positive association between intelligence and education across individuals, not because further education increases one’s intelligence, but because more intelligent individuals receive more education. By the time you are 10 or 11, your intelligence is more or less set for the rest of your life, and it’s largely up to your genes.
The last item in Gottfredson’s statement (Item #25) concerns “implications for social policy.” It shows why Gottfredson is the original scientific fundamentalist, and why she is one of my intellectual heroes. It reads:
25. The research findings neither dictate nor preclude any particular social policy, because they can never determine our goals. They can, however, help us estimate the likely success and side-effects of pursuing those goals via different means.
I couldn’t have said it better my own damn self. If you are interested in reading the entire statement yourself, it is available here.