Here's a (Swiftian modest) suggestion for evaluating academic research, maintaining quality, and speeding up the process. Send two papers to an anonymous referee. Have them pick the better one. Nothing more. Pair the winner with the winner of some other pair. Repeat as needed. Notify the losing authors only of the round in which they lost, and permit resubmissions.
This idea has been running through my head since the mid-1990s. It's hard to convince people that it would be worthwhile, but could it really be worse than the current system (with all of its well-known biases)?
The acceptance rate at top journals hovers in the low single digits. A starting pool of 32 papers matches that: one winner out of 32 is about 3 percent. Since each pair test eliminates exactly one paper, running the bracket takes just 31 tests, which is roughly the size of the editorial board at a major journal. If you asked that group to each evaluate one pair a week, you'd have a good-sized journal by the end of the year. I bet they'd be happier with the workload, too.
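If it helps to see the bookkeeping laid out, here is a back-of-the-envelope sketch of that arithmetic. The pool size, the one-bracket-per-week pace, and the variable names are just illustrative assumptions drawn from the paragraph above, not data from any journal:

```python
import math

pool = 32                          # starting pool of submissions (assumption)
rounds = int(math.log2(pool))      # 5 rounds of head-to-head pairings
comparisons = pool - 1             # each test eliminates one paper: 31 tests
acceptance_rate = 1 / pool         # one winner in 32, roughly 3.1%
board_size = comparisons           # one referee per pairing, one pair a week
papers_per_year = 52               # one bracket finishes per week -> ~52 acceptances

print(f"{rounds} rounds, {comparisons} pair tests, "
      f"{acceptance_rate:.1%} acceptance, "
      f"{papers_per_year} papers a year from a board of {board_size}")
```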
But would it be any good? That's open to debate, but I'm inclined to think it would be. For one thing, the metric of evaluation here is much sharper than the qualitative judgments currently made by referees and editors. And I have a lot of faith in the ability of repeated testing to find errors. What you're really worried about, if you doubt this will work, is the rate of Type I and Type II errors (i.e., choosing lousy winners, or abandoning losers that are in fact solid). Repeated testing is well understood in statistics, and the probability of both kinds of error goes fairly rapidly toward zero with more rounds.
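For anyone who wants to poke at that claim, here is a rough simulation, not part of the original argument. It assumes each referee picks the truly better paper of a pair with some fixed probability (the 0.9 below is an arbitrary stand-in for referee accuracy), and it measures how often the eventual winner really comes from the top quartile of the pool:

```python
import random

def run_tournament(qualities, p_correct=0.9):
    """Single-elimination bracket over a list of true paper qualities.

    Each pairwise test picks the truly better paper with probability
    p_correct and the worse one otherwise; returns the winner's quality.
    """
    pool = list(qualities)
    random.shuffle(pool)
    while len(pool) > 1:
        next_round = []
        for a, b in zip(pool[0::2], pool[1::2]):
            better, worse = (a, b) if a > b else (b, a)
            next_round.append(better if random.random() < p_correct else worse)
        pool = next_round
    return pool[0]

# Pool of 32 papers, higher number = better paper; how often is the
# eventual winner actually a top-quartile paper?
n_papers, trials = 32, 10_000
qualities = range(n_papers)
cutoff = n_papers * 3 // 4
wins = sum(run_tournament(qualities, p_correct=0.9) >= cutoff
           for _ in range(trials))
print(f"winner was a top-quartile paper in {wins / trials:.1%} of trials")
```

Varying p_correct and the pool size is a quick way to see how sensitive the outcome is to referee accuracy and to the number of rounds.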
And I hate to admit it, but a lot of what passes for academic research just stinks. Further, a lot of it can be fairly labeled as cr*p rather quickly. For example, I have a former Ph.D. student who has rapidly abandoned everything he learned in graduate school. He is now having trouble publishing, and he asked me to evaluate a working paper. I asked him how many errors I should find before giving up and sending it back. He said ten (can you imagine ten errors being tolerable?). I found ten on the title and abstract page in a few minutes and sent it back. He shouldn't wonder why he can't publish.
Lastly, if all you told losers was the round in which they lost, the onus would be on them to figure out where the mistakes were, and to arrange for someone to help them if need be.
Arnold Kling discusses some alternatives in a post at EconLog entitled "Revolution In Academic Affairs".