Categories
Behavior Genetics Eugenics IQ

Eugenic Taint

There is no more tired or trite argument than that some topic or other is tainted by its association with eugenics: research into intelligence, for instance, or behavior genetics in general. These fields aim to describe the world as it is rather than as it ought to be. Eugenics is a policy, one which attempts to alter the world, to fashion it as it ought to be (did you need to be told that?). It was connected to genetics in much the same way that hygiene (a policy) was and is connected to medicine, which is why eugenics was described as “racial hygiene”. You can, however, describe the world without trying to change it, even if that logical distinction has been clouded by the “social justice” activism enthralling university campuses.

This argument can go to breathtakingly ludicrous lengths, as in the idea that “frequentist” statistics is somehow tainted by the great statistician R. A. Fisher, who was also a staunch eugenicist. (It is worth pausing over the depressing fact that contemporary academics feel compelled to keep a straight face while discussing such childish arguments, instead of simply laughing them out of the room.) As the entire scientific establishment before 1939 was in favour of eugenics as a policy, the scope for detecting, or not detecting, the eugenic taint is more or less unlimited: it reaches anywhere the fancy of the witch-sniffer directs. Any field is a candidate. The mediocrities churned out by the academic-research complex must pursue politics over science to make a living in the fields of pretense. Here the unwholesome process led to statistics and Ronald Aylmer Fisher.

R. A. Fisher, statistician and eugenicist

The essential idea here has been reused many times in bad science fiction movies, as in The Hands of Orlac (1924). The celebrated pianist Orlac loses both hands in an accident and has new hands grafted on. But they are the hands of a murderer! Aaaargh! Chop off those frequentist hands! They are the hands of a eugenicist!

The Hands of Orlac!

Categories
Behavior Genetics IQ Statistics

Experts Weigh In

Experts are not always helpful, especially when they are experts on other topics. Richard Hamming, inventor of Hamming Codes, has ideas about intelligence:

We will now take up an example where a definition still bothers us, namely IQ. It is as circular as you could wish. A test is made up which is supposed to measure “intelligence”, it is revised to make it as consistent internally as we can, and then it is declared, when calibrated by a simple method, to measure “intelligence” which is now normally distributed (via the calibration curve).

All definitions should be inspected, not only when first proposed, but much later when you see how they are going to enter into the conclusions drawn. To what extent were the definitions framed as they were to get the desired result? How often were the definitions framed under one condition and are now being applied under quite different conditions? All too often these are true! … Brains are nice to have, but many people who seem not to have great IQs have done great things.

The Art of Doing Science and Engineering (1997)

When you spend many years at Bell Labs, sharing an office with Claude Shannon while he invents Information Theory, it is not surprising that restriction of range prevents you from appreciating deficits in ability.
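
A toy simulation makes the point about restriction of range concrete; the population correlation and the selection cutoff below are illustrative assumptions, not estimates from any study.

```python
# Sketch: restriction of range attenuates observed correlations.
# The population correlation (0.5) and the selection cutoff (top 2%)
# are illustrative assumptions, not estimates from any study.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.5  # assumed population correlation between IQ and achievement

# Draw correlated standard-normal pairs (IQ, achievement).
cov = [[1.0, rho], [rho, 1.0]]
iq, achievement = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Full-population correlation recovers roughly 0.5.
print("population r:", np.corrcoef(iq, achievement)[0, 1])

# Restrict to a highly selected group, e.g. the top 2% on IQ,
# a crude stand-in for a Bell Labs-like environment.
cutoff = np.quantile(iq, 0.98)
sel = iq >= cutoff
print("restricted r:", np.corrcoef(iq[sel], achievement[sel])[0, 1])
# Within the selected group the correlation shrinks sharply, so ability
# differences look unimportant to anyone who only ever sees that group.
```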

IQ pioneers wrestled long and hard with the definitions they employed, a history Hamming seems not to have been aware of. Not only were they competent statisticians, they invented many of the techniques commonly used today: Galton coined the term “Normal Distribution” and invented regression and correlation techniques for bivariate normal variables; Karl Pearson generalized them to (most) other distributions; Charles Spearman and Cyril Burt invented Factor Analysis; and so on.

The assumption of normality has strong support from the Central Limit Theorem once you realize that IQ is polygenic, and is in any case merely a convenience. Contrary to Hamming’s suspicion, no important facts depend on it. IQ will not be more or less heritable if you change the distribution to one with fatter or thinner tails, or skew it. If you are prepared to lose some efficiency and think it worth your while, you can instead use non-parametric methods like the bootstrap. People have not done so because it would gain them next to nothing of importance, not because they do not understand the issues. See, for example, the long discussion of normality by Arthur Jensen in Bias in Mental Testing (1980). As he points out, the assumption of normality is almost certainly false, as it usually is in other fields, but modest departures from it, such as a slightly fatter left tail (due to harmful mutations), are not worth losing sleep over.
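
To make both points concrete, here is a minimal sketch in Python (the locus count, allele frequencies, and effect sizes are toy assumptions): a trait built from many small additive effects comes out approximately normal by the Central Limit Theorem, and a bootstrap confidence interval for a correlation needs no normality assumption at all.

```python
# Sketch: (1) a polygenic score, i.e. a sum of many small additive allele
# effects, is approximately normal by the Central Limit Theorem;
# (2) a bootstrap confidence interval makes no normality assumption.
# Locus count, allele frequencies, and effect sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_loci = 10_000, 1_000
freqs = rng.uniform(0.05, 0.95, n_loci)          # allele frequencies
effects = rng.normal(0.0, 1.0, n_loci)           # per-locus effect sizes

# Genotypes: 0/1/2 copies of the effect allele at each locus.
genotypes = rng.binomial(2, freqs, size=(n_people, n_loci))
trait = genotypes @ effects                      # additive polygenic trait

# Skewness and excess kurtosis near 0 indicate approximate normality.
z = (trait - trait.mean()) / trait.std()
print("skew:", (z**3).mean(), "excess kurtosis:", (z**4).mean() - 3)

# Bootstrap CI for the correlation between the trait and a noisy outcome.
outcome = trait + rng.normal(0, trait.std(), n_people)
boot = []
for _ in range(2_000):
    idx = rng.integers(0, n_people, n_people)    # resample with replacement
    boot.append(np.corrcoef(trait[idx], outcome[idx])[0, 1])
print("bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))
```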

Wading in, boots and all, to other fields is not necessarily a bad idea for statisticians and other experts. It may even be helpful; see for example David Bartholomew’s Measuring Intelligence (2004). But usually it backfires, as in the case of Bernie Devlin et al., Intelligence, Genes, and Success: Scientists Respond to The Bell Curve (1997). They set out to help behavior geneticists with bread-and-butter ideas like heritability. Instead they triumphantly produced a slightly lower estimate of narrow-sense heritability (0.39) by front-loading their twin sample with adolescents; heritability increases with age (see Plomin et al., Behavior Genetics, 2017). Far from improving the techniques used, Devlin et al. shed darkness where there was light.
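
For readers unfamiliar with the bread-and-butter calculation, here is a minimal sketch using the textbook Falconer approximation, not the structural model Devlin et al. actually fit; the twin correlations are illustrative numbers, not data from any study, and serve only to show why sample composition matters.

```python
# Sketch: Falconer's classical approximation to heritability from twin
# correlations. This is the textbook formula, not the model Devlin et al.
# fit, and the correlations below are illustrative, not real data.

def falconer(r_mz: float, r_dz: float) -> dict:
    """Estimate variance components from MZ and DZ twin correlations.

    h2 = 2 * (r_mz - r_dz)   heritability (roughly additive)
    c2 = 2 * r_dz - r_mz     shared (common) environment
    e2 = 1 - r_mz            non-shared environment + measurement error
    """
    return {
        "h2": 2 * (r_mz - r_dz),
        "c2": 2 * r_dz - r_mz,
        "e2": 1 - r_mz,
    }

# Hypothetical correlations: a higher-heritability pattern typical of adults,
# and a lower one typical of adolescents (shared environment still matters).
print("adult-like     :", falconer(r_mz=0.85, r_dz=0.50))
print("adolescent-like:", falconer(r_mz=0.80, r_dz=0.55))
# A sample front-loaded with adolescents drags the pooled estimate down,
# which is the sampling issue described in the paragraph above.
```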