Wow. Bryan Caplan points out that economists who measure the returns to schooling know that individual ability is an important determinant, but decline to correct for its effects because it is measured badly.
This is like refusing to use batting average at all to compare baseball players on the grounds that there is much more to the game.
To repeat the line often paraphrased from Orwell: “Some ideas are so stupid that only intellectuals believe them.”*
Here’s Bryan’s result:
… If you mention ability bias, however, labor economists will quickly point you to a massive literature that supposedly debunks it.
But if you pay close attention, there's a bizarre omission. Despite their mighty debunking efforts, labor economists almost never test for ability bias in the most obvious way: Measure ability, then re-estimate the return to education after controlling for measured ability. For example, you could measure IQ, then estimate the return to education after controlling for IQ.
When I ask labor economists about their omission, they have a puzzling response: "IQ is a very incomplete measure of ability." True enough. But the right lesson to draw is that controlling for IQ provides a lower bound for the severity of ability bias. After all, if the estimated return to education falls sharply after controlling for just one measure of ability, imagine how much it might fall after controlling for measures of all ability.
What happens to the return to education after controlling for IQ? I've done the statistics myself on the NLSY, and found that the estimated return to education falls by about 40% …
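Caplan’s mechanism is easy to reproduce in a toy simulation. The sketch below (all numbers are invented for illustration — these are not NLSY values) generates data in which unobserved ability raises both schooling and wages, then compares the naive regression of log wages on education with one that controls for ability:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Invented data-generating process: ability raises both schooling and wages.
ability = rng.normal(0, 1, n)                    # unobserved "IQ-like" ability
educ = 12 + 2 * ability + rng.normal(0, 2, n)    # years of schooling
log_wage = 0.05 * educ + 0.4 * ability + rng.normal(0, 0.5, n)

def ols(y, *cols):
    """Slope coefficients (after an intercept) via least squares."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

naive = ols(log_wage, educ)[0]              # omits ability: biased upward
controlled = ols(log_wage, educ, ability)[0]

print(f"true return: 0.050, naive: {naive:.3f}, controlled: {controlled:.3f}")
```

With these made-up parameters the textbook omitted-variable formula predicts the naive coefficient exceeds the true return by roughly 0.4 × cov(ability, educ)/var(educ) = 0.1 — triple the true value of 0.05 — while controlling for ability recovers it.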
So, the reported returns to education — how much extra one gets paid for more schooling — are almost double the corrected estimates because they ignore the basic observation that smart people are likely to get paid more anyway.
The flip side of this is that the value of teachers and schools is overstated by ignoring the quality of the students put into the system.
Gee … d’ya think there’s any grant funding out there interested in establishing and promoting that result?
Statistically, Bryan is pointing out that conventional parameter estimates are biased upward, but he’s missing a second problem. The big three problems in statistics are bias, consistency, and efficiency — and the conventional estimates are not just biased, they’re also inefficient.
Inefficiency is a lot more subtle than bias. Frankly, I don’t think I understood it until about eight years into my academic career, when I had to explain in seminars to finance Ph.D.s why they should listen to an economics Ph.D. about a problem with one of their techniques.
In short, an inefficient estimator is one that’s just dumb: like forecasting the weather without looking at the sky. There’s a lot more to it than that, but the gist is that you’re not using information that might be helpful.
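The “not looking at the sky” idea can be made concrete with a textbook example (mine, not the author’s): for normally distributed data, the sample median is an unbiased estimator of the center, but it throws away information the sample mean uses, so its estimates bounce around more from sample to sample:

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n = 5_000, 400

samples = rng.normal(loc=10, scale=1, size=(reps, n))
means = samples.mean(axis=1)           # efficient estimator for normal data
medians = np.median(samples, axis=1)   # also unbiased, but inefficient

# Both center on 10, but the median's sampling spread is about 25% larger
# (the asymptotic ratio is sqrt(pi/2) ≈ 1.25 for normal data).
print(f"sd(mean): {means.std():.4f}, sd(median): {medians.std():.4f}")
```

Both estimators are right on average; the inefficient one is simply noisier — which is exactly why inefficiency shows up in the standard errors rather than in the point estimates.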
One way to think about this is that bias is about parameter estimates that are off in one direction or the other, while efficiency is about standard error estimates that are off in one direction or the other.
In practice, this suggests that the typical estimates of the returns to education are not only biased upwards, but are probably also reported to be a lot more precise than they actually are. That is, Bryan is pointing out that the parameter estimates are biased upward, and I’m pointing out that the standard error estimates are probably biased downwards.
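One way to see how misleading the reported precision can be is a Monte Carlo on the same kind of invented setup as before: run the naive regression many times, compute the conventional standard error each time, and count how often the 95% confidence interval covers the true return. This illustrates the practical upshot — intervals that overstate what you know — rather than proving the direction of any standard-error bias:

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n, true_return = 500, 2_000, 0.05
covered = 0

for _ in range(reps):
    # Invented data-generating process, as in the earlier sketch.
    ability = rng.normal(0, 1, n)
    educ = 12 + 2 * ability + rng.normal(0, 2, n)
    log_wage = true_return * educ + 0.4 * ability + rng.normal(0, 0.5, n)

    # Naive regression of log wage on education alone.
    X = np.column_stack([np.ones(n), educ])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    resid = log_wage - X @ beta

    # Conventional OLS standard error for the education coefficient.
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(beta[1] - true_return) < 1.96 * se:
        covered += 1

print(f"95% CI coverage of the true return: {covered / reps:.1%}")
```

Because the estimator is centered on the wrong value, the nominally 95% intervals almost never contain the true return: the reported standard errors describe precision around the wrong number.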
Bias and inefficiency also have distributional consequences for the conventional results. Because those results are based on leaving something out, they make the things that are left in look more important than they would in a better model. The thing is, you don’t know which of your variables is more or less seriously affected until you run the better model. Maybe you’re lucky, and the variable of interest is the one that’s least affected … and maybe you’re not.