Behavior Genetics IQ Statistics

Experts Weigh In

Experts are not always helpful, especially when they are experts on other topics. Richard Hamming, inventor of Hamming Codes, has ideas about intelligence:

We will now take up an example where a definition still bothers us, namely IQ. It is as circular as you could wish. A test is made up which is supposed to measure “intelligence”, it is revised to make it as consistent internally as we can, and then it is declared, when calibrated by a simple method, to measure “intelligence” which is now normally distributed (via the calibration curve).

All definitions should be inspected, not only when first proposed, but much later when you see how they are going to enter into the conclusions drawn. To what extent were the definitions framed as they were to get the desired result? How often were the definitions framed under one condition and are now being applied under quite different conditions? All too often these are true! … Brains are nice to have, but many people who seem not to have great IQs have done great things.

The Art of Doing Science and Engineering (1997)

When you spend many years at Bell Labs, sharing an office with Claude Shannon while he invents Information Theory, it is not surprising that restriction of range prevents you from appreciating deficits in ability.

IQ pioneers wrestled long and hard with the definitions they employed, a history of which Hamming seems unaware. Not only were they competent statisticians, they invented many of the techniques commonly used today. Galton coined the term “Normal Distribution” and invented regression and correlation techniques for bivariate normal variables, Karl Pearson generalized them to (most) distributions, Cyril Burt and Charles Spearman invented Factor Analysis, and so on.

The assumption of normality has strong support from the Central Limit Theorem once you realize that IQ is polygenic, and is in any case merely convenient. Contrary to Hamming’s suspicion, no important facts depend on the assumption of normality. IQ will not be more or less heritable if you change the distribution to one with fatter or thinner tails, or skew it. If you are prepared to lose efficiency and think it is worth your while, you can instead use non-parametric methods like the bootstrap. People have not done that because it would gain them next to nothing of importance, not because they do not understand the issues. See for example the long discussion of normality by Arthur Jensen in Bias in Mental Testing (1980). As he points out, the assumption of normality is almost certainly false, as it usually is in other fields, but modest departures from normality, like a slightly fatter left tail (due to harmful mutations), are not worth losing sleep over.
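The Central Limit Theorem point can be seen in a few lines of simulation. A minimal sketch, assuming a purely additive toy model in which a trait is the sum of many equal-effect alleles (the locus count and allele frequency here are illustrative, not estimates): the resulting distribution is binomial, and with many loci it is already close to normal, with roughly 68% of people within one standard deviation of the mean.

```python
import random
import statistics

random.seed(0)

N_LOCI = 500      # hypothetical number of contributing loci (illustrative)
N_PEOPLE = 2_000  # simulated sample size

def polygenic_score() -> int:
    # Each person carries two alleles per locus; each "+" allele,
    # present with frequency 0.5, adds one unit to the trait.
    return sum(random.random() < 0.5 for _ in range(2 * N_LOCI))

scores = [polygenic_score() for _ in range(N_PEOPLE)]
mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# For a normal distribution, ~68% of observations fall within 1 SD.
within_1sd = sum(abs(s - mean) <= sd for s in scores) / N_PEOPLE
print(f"mean={mean:.1f}  sd={sd:.1f}  within 1 SD: {within_1sd:.0%}")
```

The theoretical distribution is Binomial(1000, 0.5), so the mean lands near 500 and the standard deviation near 15.8, and the one-SD share comes out close to the normal 68% despite the trait being a sum of coin flips. Note the sketch ignores dominance, epistasis, and environment; the point is only that sums of many small effects look normal.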

Wading in, boots and all, to other fields is not necessarily a bad idea for statisticians and other experts. It may even be helpful. See for example David Bartholomew’s helpful Measuring Intelligence (2004). But usually it backfires, as in the case of Bernie Devlin et al., Intelligence, Genes, and Success: Scientists Respond to The Bell Curve (1997). They set out to help behavior geneticists with bread-and-butter ideas like heritability. Instead they triumphantly produced a slightly lower estimate of narrow-sense heritability (0.39) by front-loading their sample of twins with adolescents. Heritability increases with age; see Plomin et al., Behavior Genetics (2017). Far from improving the techniques used, Devlin et al. shed darkness where there was light.
